arXiv: 2302.12545 (http://arxiv.org/abs/2302.12545v1)
Authors: Julian Lißner, Felix Fritzen
Published: 2023-02-24
# Hybrid machine-learned homogenization: Bayesian data mining and convolutional neural networks

###### Abstract

Beyond the generally deployed features for microstructure property prediction, this study aims to improve the machine-learned prediction by developing novel feature descriptors. To this end, Bayesian-infused data mining is conducted to acquire samples containing characteristics inexplicable to the current feature set, and suitable feature descriptors to describe these characteristics are proposed. The iterative development of feature descriptors resulted in 37 novel features, which reduce the prediction error by roughly one third. To further improve the predictive model, convolutional neural networks (Conv Nets) are deployed to generate auxiliary features in a supervised machine learning manner. The Conv Nets were able to outperform the feature based approach. Key ingredients for this are a newly proposed data augmentation scheme and the development of so-called deep inception modules. A combination of the feature based approach and the convolutional neural network leads to a hybrid neural network: a parallel deployment of both neural network archetypes in a single model achieved a relative rooted mean squared error below 1%, more than halving the error compared to prior models operating on the same data. The hybrid neural network was found powerful enough to be extended to predict variable material parameters, from low to high phase contrast, while allowing for arbitrary microstructure geometry at the same time.

_Keywords:_ microstructure homogenization, convolutional neural networks, feature engineering, Bayesian neural networks, machine learning

## 1 Introduction

High performance materials are of great interest to industry due to their capabilities and scope of application, e.g., in aerospace applications or for batteries, even though their development and manufacturing process is highly challenging. Specific materials can be tailored by, e.g., tuning their microstructure to optimize their properties for given requirements. The development process can be significantly accelerated by replacing experimental tests with simulations, which take the microscopic geometric information of the materials into account [1, 2]. Some simulation methods are able to operate directly on the 3D image representation of the microstructure, e.g., obtained from a computed tomography (CT) scan, and have recently achieved improvements with respect to computational speed [3]. Still, the high resolution of the image representation renders even such efficient methods infeasible in a many-query context, i.e., when investigating numerous (e.g., in-silico generated) microstructured materials while optimizing for a specific material behaviour. Machine learning is a suitable tool to further reduce the computational cost of material development: it can discover complex relationships by studying the available data, replacing the costly simulations with computationally affordable operations. Various machine learning algorithms are regularly deployed in two actively researched fields of microstructure modeling, namely microstructure reconstruction and microstructure property prediction. In microstructure reconstruction/synthesis, the topology of an original microstructured image is adjusted to optimize for selected material behaviour under constraints [4, 5, 6], tailoring specific microstructures to serve a particular purpose.
The field of microstructure property prediction is often applied to a broader spectrum of microstructured materials [7, 8]. It targets the efficient and accurate prediction of the behaviour of inhomogeneous materials via, e.g., artificial neural networks (ANN). The present paper falls into a subclass of the latter category, specifically microstructure homogenization, which directly predicts effective material properties from given microstructural image data. Current state of the art methods often deploy a blend of unsupervised and supervised machine learning methods, where the Principal Component Analysis (PCA) is used in combination with the 2-Point Correlation Function (2PCF) [9]; the principal scores serve as input for a supervised machine-learned regressor which conducts the microstructure property linkage, ranging from polynomial regression [10] to artificial neural networks [7, 11]. Lately, the usage of convolutional neural networks (Conv Nets) has gained popularity in both outlined research fields [12, 13]. Conv Nets are used to directly predict the effective material response [14], or even in combination with the PCA, where the reduced representation of the target values is used to predict stress-strain relationships [15]. Conv Nets have an advantage over classical regressors in the sense that they are better suited to an extension towards predicting full field solutions in voxel (3D pixel) representation, since the data is often given in image representation, for the microstructured material as well as for the full field response. The previous study of the authors [7] deployed a method akin to the PCA and used the derived features in various state of the art machine-learned regressors, finding that the accuracy of the approach is limited, which is confirmed by different studies, e.g., [8, 16]. Improvements beyond the 2PCF have been attempted by considering partial higher-order correlation functions, e.g., [17]. However, the feature identification and computation becomes increasingly costly while yielding rather limited gains in accuracy. In the search for a better prediction, the idea of auxiliary features arose, which led us to data mining [18]. This constitutes the first block of this work (section 2.2): We systematically categorize the underlying data by using Bayesian neural networks [19] and evaluate the aleatoric uncertainty, which can hint at a lack of feature knowledge for certain samples. These samples can then be examined systematically in order to engineer additional features, i.e., the feature engineering process is machine-guided. The recurrent characteristics identified across multiple samples were then quantified by novel feature descriptors, keeping computational efficiency and physical interpretability in consideration. The second block of this study considers Conv Nets (section 2.3), which are able to derive machine-learned features via supervised learning. Further improvements with respect to Conv Nets are found by building upon the so-called inception modules [20] and, therefrom, developing a _deep inception module_. The latter is explicitly designed to capture features at different length scales within a single microstructural image. Ultimately, we combine the handcrafted, machine-guided features with features derived from Conv Nets into a _hybrid neural network_. The proposed model more than halves the prediction error compared to previous studies [7].
This hybrid model is further extended to predict the effective material properties for variable material parameters, allowing for a variable phase contrast ranging from \(\frac{1}{2}\) down to \(\frac{1}{100}\) while considering variable microstructure characteristics, e.g., with the volume fraction ranging from 20 to 80%. The data [21] used for development, training and validation purposes, as well as the python code [22], are made freely available.

## 2 Data and Methods

### Data overview

In a machine learning focused manuscript, some emphasis has to be given to the data. This manuscript deals with the prediction of material properties of bi-phasic microstructured materials. The simplification of a Representative Volume Element (RVE) is introduced, where it is assumed that a single frame of the microstructure suffices to characterize the material behaviour of the macroscopic material [23] while using periodic boundary conditions. A compact overview of the data is given in Fig. 2, where some exemplary images of the microstructure are plotted on the left. These images, i.e., RVEs of the microstructured material, serve as the input data to our algorithm and will be denoted RVE in the following. The target values of the machine learning algorithm are the components of the effective heat conduction tensor \(\bar{\underline{\underline{\kappa}}}\). One component of \(\bar{\underline{\underline{\kappa}}}\) is plotted in Fig. 2 b), considering low conducting inclusions with a phase contrast of \(R=5\). A more detailed explanation of the data is given in appendix A. Generally, the symmetric effective heat conduction tensor will be given in de-dimensionalized Mandel notation

\[\bar{\underline{\kappa}}=\begin{bmatrix}\bar{\kappa}_{11}\\ \bar{\kappa}_{22}\\ \sqrt{2}\,\bar{\kappa}_{12}\end{bmatrix}\,. \tag{1}\]

The objective of this paper is to find a machine-learned model which is able to accurately predict the effective material properties of the presented microstructures, i.e.,

\[\bar{\underline{\kappa}}=f(\text{RVE})\,, \tag{2}\]

where the machine-learned model \(f(\cdot)\) directly operates on the image data of the microstructured material (RVE) or uses features \(\underline{x}\) extracted from the image. The constraint on our algorithm is to rely solely on the given images of the RVEs, without any further information such as, e.g., the number or shape of inclusions. In this manuscript we first aim to find optimal features \(\underline{x}\) and a suitable artificial neural network \(f(\cdot)\) accurately mapping the complex image-based relationship between features and the sought-after outputs.

Figure 1: The general outline of this paper is presented in figure format. On the left, the iterative approach of Bayesian assisted data mining is displayed; the newly developed features thereof are used in a _hybrid neural network_ (on the right). The backbone of the hybrid neural network are _deep inception modules_ in parallel to a bypass using the volume fraction and the feature regressor.

### Bayesian modeling via artificial neural networks

#### Aleatoric and epistemic uncertainty

In supervised machine learning the training and testing data is always given with deterministic target values such that error measures are computable. However, during inference, reliable error measures are generally not accessible. Consequently, having access to a measure of model confidence indicating the likelihood of low or high prediction errors during inference is advantageous.
Bayesian modeling approaches uncertainty quantification via _aleatoric_ and _epistemic_ uncertainties [24, 25]. The epistemic uncertainty represents the uncertainty arising from the lack of knowledge about the mapping from input to output values. In the machine learning context it arises from the model being unable to recover the original function, if it even exists. The aleatoric uncertainty arises from uncertainty in the system or data and represents noise in the input data, i.e., different output values for similar or even identical input values. Considering, for instance, Fig. 2 b) from a machine learning viewpoint, the variation of the material response could be interpreted as noisy when considering only the volume fraction as input. These variations, however, are not due to noise, but arise due to different phenomena which are inexplicable when trying to predict the response exclusively from the material's volume fraction. Consequently, the aleatoric uncertainty can also be regarded as a measure of explainability of the data given the underlying (possibly incomplete) feature set. Since our main interest lies in the development of novel features based on the examination of data, i.e., data mining, we employ the aleatoric uncertainty and use it to detect samples which contain characteristics not explicable by the current feature set. The aleatoric uncertainty is modeled with artificial neural networks using the tensorflow probability distribution library [26], assuming that the priors over the predicted data follow a normal distribution. Practically speaking, the aleatoric uncertainty is modeled by predicting the mean and standard deviation \(\hat{\mu}_{i},\hat{\sigma}_{i}\), which define a normal distribution for the \(i\)th output component instead of a single deterministic value. The neural network's predicted normal distribution is given as

\[f(\underline{x})=\frac{1}{\sqrt{2\pi\hat{\sigma}_{i}^{2}}}\exp\Big{(}-\frac{(y_{i}-\hat{\mu}_{i})^{2}}{2\hat{\sigma}_{i}^{2}}\Big{)}=p(y_{i}\,|\,\underline{x})\,, \tag{3}\]

with the model \(f\) using the feature vector \(\underline{x}\) to predict the parameters \(\hat{\mu}_{i}\) and \(\hat{\sigma}_{i}\) for each component \(y_{i}\) independently. Thus, the neural network has to predict twice the number of output variables when modeling the aleatoric uncertainty as described. The quantification of the aleatoric uncertainty during training enters through the loss \(\Phi_{i}\) for target value \(y_{i}\), which is derived by minimizing the Kullback-Leibler divergence [25]:

\[\Phi_{i}\big{(}y_{i},\mathcal{N}(\hat{\mu}_{i},\hat{\sigma}_{i}^{2})\big{)}=-\log\Big{(}\tfrac{1}{2}\,p\big{(}y_{i}\,|\,\mathcal{N}(\hat{\mu}_{i},\hat{\sigma}_{i}^{2})\big{)}+s\Big{)}\,, \tag{4}\]

where \(p(y_{i}|\mathcal{N}(\hat{\mu}_{i},\hat{\sigma}_{i}^{2}))\) is the probability of observing \(y_{i}\) given the current distribution \(\mathcal{N}(\hat{\mu}_{i},\hat{\sigma}_{i}^{2})\). Since the logarithm quickly tends to \(-\infty\) for very small values of \(p(y_{i}|\mathcal{N}(\hat{\mu}_{i},\hat{\sigma}_{i}^{2}))\), the shift parameter \(s>0\) has been introduced to stabilize the training. This greatly improved the convergence behaviour of the model for \(s=0.25\).
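For concreteness, a minimal sketch (ours, not the published implementation [22]) of the shifted negative log-likelihood loss (4) in plain TensorFlow; the layout of the network output as \([\hat{\mu},\hat{\sigma}_{\mathrm{raw}}]\) and the softplus positivity transform are assumptions:

```python
import math
import tensorflow as tf

def shifted_gaussian_nll(s=0.25):
    """Negative log of the shifted Gaussian likelihood, cf. Eq. (4)."""
    def loss(y_true, y_pred):
        # the network emits [mu, sigma_raw] for each output component
        mu, sigma_raw = tf.split(y_pred, 2, axis=-1)
        sigma = tf.nn.softplus(sigma_raw) + 1e-6      # enforce sigma > 0
        # Gaussian density p(y | N(mu, sigma^2)), cf. Eq. (3)
        p = tf.exp(-0.5 * tf.square((y_true - mu) / sigma)) \
            / (sigma * tf.sqrt(2.0 * math.pi))
        # the shift s > 0 bounds the logarithm and stabilizes training
        return -tf.math.log(0.5 * p + s)
    return loss
```

The predicted mean \(\hat{\mu}_{i}\) then serves as the point prediction, while \(\hat{\sigma}_{i}\) quantifies the aleatoric uncertainty.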
For a more detailed explanation of the theoretical background and derivation of Bayesian neural networks, the authors refer to the literature: e.g., [19, 27] present a more general outline of Bayesian modeling, and [28] reviews different modeling methods in uncertainty quantification. An example application using Bayesian modeling is presented in [24].

Figure 2: The investigated data is shown, where a few RVE images of the training set are shown on the left. The gray foreground represents the inclusion phase, and the black background represents the matrix phase. On the right, one component of the target values is shown for all samples of the training set.

#### Use of Bayesian neural networks in feature engineering

As previously outlined, the aleatoric uncertainty can be used to indicate the lack of feature knowledge that hinders more accurate predictions. For instance, the volume fraction alone is unable to capture particle shapes and their orientation distribution (Fig. 2 b)), which is reflected in the aleatoric uncertainty. Investigating the samples of high aleatoric uncertainty, it can be seen that each of them contains one or several characteristics inexplicable by the current feature set. Exploiting this information, our data mining approach is guided by the Bayesian neural network (BNN). First, we filter out a subset of our data by considering only samples of high aleatoric uncertainty. With this subset at hand, we aim at finding suitable feature descriptors to quantify the apparent characteristics. Since a major motivation for machine learning is gaining computational efficiency, the selection of input features for the machine-learned model should bear computational efficiency in mind. This motivates us to develop novel features which are either obtainable through computationally cheap operations or derived via the convolution operation. The effect of the discrete convolution operation is briefly motivated by considering the 1D Sobel operator \(\underline{k}^{\mathrm{S}}\)

\[\underline{k}^{\mathrm{S}}=\frac{1}{2}\left[-1,\;0,\;1\right] \tag{5}\]

which is applied to an arbitrary 1D signal \(\underline{s}\in\mathbb{R}^{n_{x}}\) via a discrete convolution: for any admissible \(i\), i.e., \(1<i<n_{x}\),

\[\underline{f}=\underline{s}*\underline{k}^{\mathrm{S}}\quad\to\quad f_{i}=\sum\limits_{j=-1}^{1}s_{i+j}\,k_{j+2}^{\mathrm{S}}=\frac{1}{2}(s_{i+1}-s_{i-1})\,, \tag{6}\]

where the gradient information is obtained in the feature map \(\underline{f}\) (cf. central differences). Since the kernel can be chosen arbitrarily, replacing the Sobel operator \(\underline{k}^{\mathrm{S}}\) with a normalized constant vector recovers a moving average in the feature map \(\underline{f}\). Analogously, the discrete convolution of a 2D signal, i.e., of an image, is conducted by adding a second dimension to the data and therefore a second summation to the operation (equation (6)). A graphical overview of the convolution operation is given in section 2.3 in the introduction of convolutional neural networks. Since the convolution is evaluated at each admissible index position with a double summation, the operation in itself becomes costly.
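As a quick numerical check of (5) and (6), a small numpy sketch (ours); note that numpy's convolve flips the kernel, so it is reversed to recover the index convention of (6):

```python
import numpy as np

s = np.array([0., 0., 1., 1., 1., 0.])   # arbitrary 1D signal
k_sobel = 0.5 * np.array([-1., 0., 1.])  # Eq. (5)

# 'valid' restricts evaluation to the admissible positions 1 < i < n_x
grad = np.convolve(s, k_sobel[::-1], mode="valid")
print(grad)  # [ 0.5  0.5  0.  -0.5], the central-difference gradient
avg = np.convolve(s, np.full(3, 1/3), mode="valid")  # moving average
```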
The cost can be drastically reduced by conducting the convolution in Fourier space, following the convolution theorem [29], i.e.,

\[\underline{\underline{F}}=\underline{\underline{A}}*\underline{\underline{k}}=\mathcal{F}^{-1}\big{(}\,\mathcal{F}(\underline{\underline{A}})\cdot\mathcal{F}(\underline{\underline{k}})\big{)}\,. \tag{7}\]

The discrete Fourier transform induces periodicity, which is favorable in our case, where the microstructure, i.e., the RVE, and the fields are periodic. Similar to the motivation in 1D, the resulting feature map \(\underline{\underline{F}}\) depends on the kernel \(\underline{\underline{k}}\), even when applying it to the same image \(\underline{\underline{A}}\). This makes the convolution operation exceedingly flexible for the quantification of characteristics. For instance, it can even be used to detect specific shapes by interpreting the feature map as a match indicator of the kernel in the image (Fig. 3). Thus, we develop feature descriptors by designing specific kernels and let the BNN guide us in the process.

Figure 3: The convolution of one RVE with two different kernels is shown. The superscripts S, R of \(\underline{\underline{k}}\) represent the _Sobel_ filter and the searched _rectangle_, respectively. The feature map \(\underline{\underline{F}}\) to the right differs significantly when applying different kernels to the same image.

We suggest the following general strategy: the entire set of currently available features is used to train a BNN. Thereafter, an additional data set, i.e., a test set consisting of unseen data, is predicted, and the subset of samples with high prediction error and high aleatoric uncertainty (Fig. 5, inside the red box) is investigated by the data scientist/engineer to identify common phenomena. An efficient feature descriptor to quantify the patterns is proposed. The new features are subsequently added to the existing feature set, which is used to train a new BNN. Therewith, the procedure is repeated and samples of high aleatoric uncertainty are investigated again, yielding an iterative feature engineering approach. This is graphically illustrated in Fig. 1 (left), and can be transferred to any application using human-interpretable data. The initial BNN was trained using the reduced coefficients of the two point correlation function (2PCF), as described in [7]. In order to isolate samples of high aleatoric uncertainty and high prediction error, we consider the relative error measure

\[\|e_{\mathrm{rel}}\|_{2}=\frac{\|\bar{\underline{\kappa}}-\hat{\underline{\kappa}}\|_{2}}{\|\bar{\underline{\kappa}}\|_{2}} \tag{8}\]

with the target value \(\bar{\underline{\kappa}}\) and the prediction \(\hat{\underline{\kappa}}\leftarrow f(\mathrm{RVE})\), where the mean \(\hat{\mu}_{i}\) of the predicted normal distribution is taken as the prediction. The error-uncertainty relationship in the first iteration is displayed in Fig. 5, where the entire test set was predicted. A few samples from the approximate region of the top right corner of Fig. 5 are shown in Fig. 4. After close investigation of these samples, it was found that many samples of high aleatoric uncertainty share a diagonal structure; more strikingly, they contain connected regions or even percolation in certain directions. This led to the invention of the band features (Fig. 7), which aim to detect linear connectivity. The band features bear some resemblance to the lineal path function [30].
The lineal path function has been adopted in multiple previous studies, e.g., being used as a feature [31] or as a descriptor to generate statistically similar RVEs [32, 33]. The lineal path function computes the probability that a line segment of specific length under a certain angle lies fully within the inclusion phase. It is quite costly to compute for multiple angles and different lengths of the line segment. The newly proposed band features are computed with a single convolution per direction \(d\) as

\[p_{d}^{\mathrm{I}}=\max(\underline{\underline{b}}_{d}*\mathrm{RVE}) \tag{9}\]

where \(\underline{\underline{b}}_{d}\) denotes the detection band in the indexed direction \(d\). Note that each detector \(\underline{\underline{b}}_{d}\) is normalized such that \(p_{d}^{\mathrm{I}}\in[0,1]\). The maximum operation in (9) virtually fixes the evaluation of the band feature to a single spot in the RVE, i.e., the location where the band feature detector overlaps the inclusion phase the most. In addition to the linear connectivity found in the inclusion phase (Fig. 4), the absence of inclusions in the respective direction is captured by

\[p_{d}^{\mathrm{M}}=1-\min(\underline{\underline{b}}_{d}*\mathrm{RVE})=\max\big{(}\underline{\underline{b}}_{d}*(1-\mathrm{RVE})\big{)}\,, \tag{10}\]

where \(1-\mathrm{RVE}\) is a phase inversion of the image in the bi-phasic setting. Generally, \(1-p_{d}^{\mathrm{I}}\neq p_{d}^{\mathrm{M}}\) due to the minimum/maximum operation.

Figure 4: Iteration 1: A few samples which led to the highest relative prediction errors and highest aleatoric uncertainties are shown. These samples were found by a model trained using only the reduced coefficients and were used for the first iteration of feature development.

Figure 5: The relative prediction errors related to the aleatoric uncertainty are shown for the model's prediction of the test set in the first iteration. Samples close to the top right corner (located inside the red box) are displayed in Fig. 4 and used for feature engineering.

The band feature detectors introduce two hyperparameters: the directions to be considered by the band feature detectors (see Fig. 7 a)) and the width of the band feature detector, which controls the _minimum connection width_. In this study, we chose the width of the band feature detectors to be 4 px, and the angle increments of the band feature detectors are set to \(\frac{\pi}{8}\) for both phases. _Remark:_ An extension of the approach to multiphase materials can be obtained by applying the band features to the individual phase indicator functions. The novel band feature coefficients have been used to enrich the existing feature set, and a new model was trained. After convergence, the model is once again used to predict the test set, and samples of high aleatoric uncertainty are displayed in Fig. 6. It can be seen that the samples leading to high aleatoric uncertainty now often contain dispersed inclusions which are spread over the whole RVE. It can also be seen that the found inclusions are comparatively small, with some overlap. If multiple particles are connected, this often leads to non-convex inclusion clusters. Consequently, we seek features that are able to characterize inclusion dispersal and inclusion shape.
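A numpy sketch (ours) combining the convolution theorem (7) with the band features (9) and (10); the vertical, 4 px wide detection band and its normalization follow the hyperparameters stated above:

```python
import numpy as np

def periodic_convolve(image, kernel):
    """Periodic convolution via the convolution theorem, Eq. (7)."""
    k = np.zeros_like(image)
    ky, kx = kernel.shape
    k[:ky, :kx] = kernel
    # center the kernel at the origin so the feature map is not shifted
    k = np.roll(k, (-(ky // 2), -(kx // 2)), axis=(0, 1))
    return np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(k)).real

rve = (np.random.rand(400, 400) < 0.3).astype(float)  # toy bi-phasic RVE

# vertical detection band spanning the full RVE height, 4 px wide;
# the normalization guarantees p in [0, 1]
band = np.ones((400, 4)) / (400 * 4)
p_incl = periodic_convolve(rve, band).max()          # Eq. (9)
p_matrix = periodic_convolve(1.0 - rve, band).max()  # Eq. (10)
```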
Physically motivated, the next feature descriptor aims to quantify global flux hindrance, i.e., whether multiple inclusions form a disconnected barrier in a certain direction. This measure is computed by reducing the 2D image to a 1D line, noting for each position whether at least one voxel contains the inclusion phase. An example is illustrated in Fig. 8, where the full line (on the top left) indicates the _disconnected barrier_. Taking the average of the line, we obtain a scalar valued feature for each direction. This is implemented in x- and y-direction via (given in python pseudocode)

\[w_{i}=\mathrm{mean}\Big{(}\,\mathrm{sum}(\mathrm{RVE},\,\mathrm{axis}=i)\geq 1\Big{)} \tag{11}\]

where \(\geq 1\) checks if at least one pixel is found and the sum is conducted along axis/dimension \(i\), following the syntax of numpy.sum. To quantify inclusion dispersal and the approximate size of inclusions, the RVE was subdivided into a grid of local regions/cells. The local volume fraction \(c_{IJ}\) (\(I,J\) in the coarse grid) was computed, coinciding with _average pooling_ [34] (cf. section 2.3). The local volume fraction on the coarse grid yields a spatial distribution of the relative amount of inclusion phase, see Fig. 10 left.

Figure 8: Two artificial microstructures are shown with their respective effective heat conductivity to the right. On the top/left the respective reduced line projections are graphically given. The RVEs only vary in the horizontal position of half the inclusions but differ noticeably in their effective response.

Figure 6: Iteration 2: A few samples which led to the highest relative prediction errors and highest aleatoric uncertainties are shown. These samples were found by a model trained with the reduced coefficients and the band feature coefficients. These images are studied in the second iteration of feature development.

Figure 7: Each line on the left in a) denotes one scanned direction of the band feature detectors. The determination of the band feature in vertical direction is shown for two images in b) and c), where the full lines measure the inclusion phase and the dashed lines measure the matrix phase. The feature detectors \(\underline{\underline{b}}_{d}\) in b) and c) are drawn in their approximate position for feature determination.

Taking the mean of the local volume fractions \(c_{IJ}\) recovers the global volume fraction, i.e., average pooling is volume preserving. New insights can be gained by taking the standard deviation \(\sigma^{f}\) and skewness \(\widetilde{\mu}_{3}^{f}\) of the distribution, which approximately represent the size of inclusion clusters. Additionally, the number of cells containing a certain volume fraction is counted. Firstly, cells containing only one phase are counted, i.e., cells with a local volume fraction of 0 or 1. The remaining cells are further subdivided into thirds, such that the cells are counted as

\[\underline{c}=\frac{1}{n_{\text{cells}}}\begin{bmatrix}\#\big{(}\,f_{\text{cell}}<\epsilon\,\big{)}\\ \#\big{(}\,\epsilon\leq f_{\text{cell}}<\tfrac{1}{3}-\epsilon\,\big{)}\\ \#\big{(}\,\tfrac{1}{3}-\epsilon\leq f_{\text{cell}}<\tfrac{2}{3}-\epsilon\,\big{)}\\ \#\big{(}\,\tfrac{2}{3}-\epsilon\leq f_{\text{cell}}<1-\epsilon\,\big{)}\\ \#\big{(}\,1-\epsilon\leq f_{\text{cell}}\,\big{)}\end{bmatrix}, \tag{12}\]

where \(\#(\bullet)\) denotes the operation counting the number of cells fulfilling the condition, \(f_{\text{cell}}\) is the local cell volume fraction (\(c_{IJ}\)), and \(\epsilon\leq\frac{1}{n_{x}\cdot n_{y}}\) is a numerical tolerance.
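A compact numpy sketch (ours) of the directional mean (11) and the binned local volume fractions (12); the \(8\times 8\) coarse grid matches the window size stated in the text:

```python
import numpy as np

rve = (np.random.rand(400, 400) < 0.3).astype(float)  # toy bi-phasic RVE

# Eq. (11): global flux hindrance in x- and y-direction
w = [np.mean(rve.sum(axis=i) >= 1) for i in (0, 1)]

# local volume fraction c_IJ via average pooling on an 8x8 coarse grid
n = rve.shape[0] // 8                                # 50x50 px cells
c_local = rve.reshape(8, n, 8, n).mean(axis=(1, 3))

# Eq. (12): fraction of cells per volume-fraction bin
eps = 1.0 / rve.size
edges = [eps, 1/3 - eps, 2/3 - eps, 1 - eps]
counts = np.histogram(c_local, bins=[-np.inf] + edges + [np.inf])[0]
c = counts / c_local.size                            # five bin fractions
```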
Here one more hyperparameter is introduced, namely the window size of the average pooling (we chose \(\frac{1}{8}n_{x}\times\frac{1}{8}n_{y}\) in this study), which determines the size of the coarse grain voxels. In addition, one could also consider a different partitioning for the counting operation, which we fixed to equal thirds. Another feature descriptor is proposed to further quantify the size and sphericity of the inclusions through surface information, since the ratio \(\frac{\text{inclusion area}}{\text{inclusion perimeter}}\) assists in this estimation: the ratio grows with inclusion size and is generally larger for circles than for rectangles. The surface information can be found via a convolution, where edge detectors like the Sobel operator [35] are able to detect the (directional) surfaces of inclusions. The feature map of edges \(\underline{\underline{E}}^{e}\) can be computed as

\[\underline{\underline{E}}^{e}=|\underline{\underline{k}}^{e}*\mathrm{RVE}|\,,\qquad e\in\{1,2,3,4\} \tag{13}\]

for different edge detectors \(\underline{\underline{k}}^{e}\). The chosen edge detectors are the horizontal, the vertical, and two diagonal Sobel-type filters, i.e.,

\[\underline{\underline{k}}^{1}=\begin{bmatrix}-1&0&1\end{bmatrix},\qquad\underline{\underline{k}}^{2}=\begin{bmatrix}0&-0.5&-1\\ 0.5&0&-0.5\\ 1&0.5&0\end{bmatrix},\qquad\underline{\underline{k}}^{3}=\begin{bmatrix}-1&-0.5&0\\ -0.5&0&0.5\\ 0&0.5&1\end{bmatrix},\qquad\underline{\underline{k}}^{4}=\begin{bmatrix}-1\\ 0\\ 1\end{bmatrix}\,. \tag{14}\]

The absolute value is taken in (13) since the orientation of the surface normal is irrelevant for our investigations. Similar to the local volume fraction, the edge feature maps are processed with average pooling, leading to a coarse grid representation \(\underline{\underline{S}}^{e}\). The mean \(\mu^{E}(\underline{\underline{S}}^{e})\), standard deviation \(\sigma^{E}(\underline{\underline{S}}^{e})\) and skewness \(\widetilde{\mu}_{3}^{E}(\underline{\underline{S}}^{e})\) are taken, leading to three additional features per edge detector. The same window size as for the local volume fraction was used, which constitutes the one hyperparameter of this feature. Additionally, other edge detectors, e.g., the Laplacian, could be considered.

Figure 10: The process of extracting the distribution of volume fraction/edges is shown above, reducing the \(400\times 400\) image data to an \(8\times 8\) distribution with the chosen parameters. On the left the volume fraction distribution is shown, and on the right the edge distribution for \(\underline{\underline{k}}^{2}\) is shown.

With the auxiliary features at hand, a new BNN model was trained for the next iteration, using all of the previously used and newly introduced features. Once again, the RVE samples of high aleatoric uncertainty and high prediction errors are shown in Fig. 9. As can be seen in most of the samples, clusters of inclusions are formed with narrow connections, often leading to non-convex edges. Similarly, some inclusions almost form a cluster with a few pixels distance in between them. No efficient method to quantify these phenomena was found, and other features which have been tested did not notably improve the model's prediction.

Figure 9: Iteration 3: A selection of samples which led to high relative prediction errors and high aleatoric uncertainties are shown. These samples were found by a model trained using all the previously mentioned features.
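The directional edge features (13) and (14) admit an equally short sketch (ours), using scipy's periodic ('wrap') convolution, the same \(8\times 8\) average pooling, and the three moments per detector:

```python
import numpy as np
from scipy.ndimage import convolve
from scipy.stats import skew

rve = (np.random.rand(400, 400) < 0.3).astype(float)
k1 = np.array([[-1.0, 0.0, 1.0]])  # horizontal detector, Eq. (14)
k4 = k1.T                          # vertical detector

features = []
for k in (k1, k4):
    E = np.abs(convolve(rve, k, mode="wrap"))    # Eq. (13), periodic image
    n = E.shape[0] // 8
    S = E.reshape(8, n, 8, n).mean(axis=(1, 3))  # average pooling -> 8x8
    features += [S.mean(), S.std(), skew(S.ravel())]  # mean, std, skewness
```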
Thus, the feature engineering was halted after the third iteration. In total, 37 new features have been proposed, utilizing four different feature descriptors, which are compactly summarized in tab. 1. Each of the features is obtainable either via convolution or via computationally efficient operations.

| feature name | number of features | number of hyperparameters | captured phenomena | related equation |
| --- | --- | --- | --- | --- |
| band features | 16 | 2 | linear phase connectivity | (9), (10) |
| global directional mean | 2 | 0 | global flux hindrance | (11) |
| volume fraction distribution | 7 | 2 | inclusion size and dispersal | (12) |
| directional edge distribution | 12 | 1 | inclusion shape and size | (13), (14) |

Table 1: The proposed features are summarized. The captured phenomena as well as the number of features and hyperparameters are shown per descriptor. Note that when changing some hyperparameters, the number of features may also change.

### Convolutional neural networks

Convolutional neural networks (Conv Nets) [36] are derived from feed forward neural networks [7, 37] to make the classification/regression of high-resolution images tractable via weight sharing [38]. The evaluation of Conv Nets is similar to that of dense feed forward neural networks and is conducted in one forward pass. The matrix multiplication used in dense feed forward neural networks is replaced by a convolution

\[\underline{\underline{A}}_{l+1}=f_{l}(\underline{\underline{W}}_{l}*\underline{\underline{A}}_{l}+b_{l})\,, \tag{15}\]

with \(\underline{\underline{A}}_{l+1}\) being the image output of layer \(l\), \(\underline{\underline{W}}_{l}\) denoting the kernel, \(b_{l}\) the element-wise added bias/offset, and \(f_{l}\) the activation function. Here the kernel \(\underline{\underline{W}}_{l}\) and the bias \(b_{l}\) are the trainable parameters of the neural network. One major advantage of Conv Nets over dense neural networks is that the small kernel (e.g., \(3\times 3\) is a popular choice) is used to scan over the entire image, processing the entire information with only few parameters. To further improve information processing, multiple _channels_ per layer can be specified (illustrated through multiple slices in Fig. 11), where each channel represents a different feature map (of the same resolution). This leads to kernel matrices of size \(m_{y}\times m_{x}\times n_{\text{in}}\times n_{\text{out}}\), having different kernel weights for each input-output channel combination. Thus, each output channel of the current layer is obtained by processing all input channels with independent kernels, the results of which are accumulated. To keep the computational overhead feasible, downscaling of the image resolution is conducted as more channels are added, which is generally achieved via _stride_ or _pooling_. The stride is implemented through the convolution, denoting the stepping width between kernel evaluations, i.e., if stride = 2 the kernel is evaluated at every other position, and a downscaling of approximately factor two is achieved along each dimension (Fig. 12).

Figure 11: A schematic overview of a vanilla convolutional neural network is shown. The supposed feature extraction is conducted in the convolutional layers (displayed as the light blue rectangles), where each rectangle symbolizes one channel. The last convolutional layer is followed by a flattening operation and a subsequent dense feed forward neural network (displayed with the green neurons).
Figure 12: A convolution using a \(3\times 3\) kernel (displayed in the middle) with stride 2 applied to a \(7\times 7\) image (left) is shown. The resulting image of size \(3\times 3\) after the convolution is given to the right. Each colored box denotes the discrete positions of the kernel for the convolution operation, matched on input and output image.

A _pooling_ layer replaces the convolution operation with, e.g., the maximum or average operation (Fig. 13), achieving downscaling analogously through the stride. In the general case, the stride in pooling is set equal to the size of the pooling _kernel_. Interpreting the different pooling operations, one can see that max pooling singles out the peak activation of the kernel in the local neighbourhood, whereas average pooling reflects the average occurrence of the feature in a local region. After a pooling layer the number of channels is kept constant, meaning that each channel is pooled individually.

Figure 13: Average and max pooling is exemplarily displayed for an arbitrary \(9\times 9\) image on the left, where the discrete values after the operation are shown in the respective colors. The red rectangles highlight the spatial field considered for each output value. On the right, \(10\times 10\) average and max pooling was applied to an exemplary microstructure image. The pooling 'grid' is shown on the image in red, where each grid cell corresponds to one output pixel.

Another special operation in Conv Nets is the \(1\times 1\) convolution, which is used to control/reduce the number of channels while generally retaining the spatial resolution (with a stride of 1). Additionally, a nonlinear activation function can be deployed after the \(1\times 1\) convolution. With these operations at hand, the convolutional layers can be built; the schematic Conv Net in Fig. 11 has four convolutional layers, where each layer except the first one has a stride \(>1\) and an increasing number of output channels. After the last convolutional layer, the derived features in the _spatial image representation_ are flattened into a 1D vector, and a dense feed forward neural network is deployed to conduct the regression/classification. Hence, the convolutional channels are often interpreted as feature extractors, whereas the dense neural network is utilized for the prediction based on the features extracted by the convolutional layers. Such neural networks will be referred to as _generic_ Conv Nets below.

### Padding in Conv Nets

The convolutional kernels derive features based on neighbourhood information; consequently, on the boundary of the image, where the neighbourhood is undefined, a loss of information is introduced. To enable the Conv Net to correctly consider the full neighbourhood relationship, periodic _padding_ is deployed before each convolution operation. This has been implemented in tensorflow similar to [39] and is made publicly available in [22], where the implementation of the data augmentation scheme is also found. For illustration, a periodically padded image is shown in Fig. 14, where it can be seen that the padding ensures that the spatial resolution stays constant after the convolution operation, since the kernel evaluation is only defined within the red box of original spatial resolution. Note that in a deep Conv Net, periodic padding is deployed in each layer.

Figure 14: The original RVE (inside the red box) is padded by half the kernel size. The padding is highlighted in orange. The blue rectangles show the considered region while evaluating the convolution with a kernel at the dot position, for two exemplary convolution locations.
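Since tf.pad offers no periodic mode, a minimal sketch (ours, not the published implementation [22]) realizes the periodic padding by concatenating opposite image borders; `pad` is half the kernel size:

```python
import tensorflow as tf

def periodic_pad(x, pad=1):
    """Periodically pad a batch of images of shape [N, H, W, C]."""
    # wrap rows: prepend the last rows, append the first rows
    x = tf.concat([x[:, -pad:, :, :], x, x[:, :pad, :, :]], axis=1)
    # wrap columns analogously
    x = tf.concat([x[:, :, -pad:, :], x, x[:, :, :pad, :]], axis=2)
    return x

x = tf.random.uniform((1, 400, 400, 1))
y = tf.keras.layers.Conv2D(8, 3, padding="valid")(periodic_pad(x))
# the spatial resolution is preserved: y.shape == (1, 400, 400, 8)
```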
### Data augmentation

Data augmentation aims at increasing the size of the training set, which inherently has a regularizing effect [40]. One study has successfully applied cutout, which goes as far as simply graying out larger regions of the input image during training [41]. We aim to virtually increase the size of the training set by translating the frame of the RVE, which does not alter the macroscopic material due to the assumed periodicity (Fig. 15). This additionally assists the Conv Net in learning the translational invariance of the RVE frames, and can generate up to \(400^{2}\) (i.e., image resolution) snapshots/samples per RVE. For memory considerations, this is implemented 'online' during training, where 50% of the RVE frames are randomly translated every 10th epoch.

Figure 15: The blue box highlights the originally generated RVE which characterizes the macroscopic material by a repeated continuation in each direction. The green box outlines a completely different image of the same RVE obtained from translation in the periodic medium.
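The translation augmentation itself reduces to a periodic shift; a short numpy sketch (ours) of the 'online' scheme described above:

```python
import numpy as np

rng = np.random.default_rng()

def translate_rve(rve):
    """Random periodic frame translation of one RVE image (cf. Fig. 15)."""
    dy = rng.integers(rve.shape[0])
    dx = rng.integers(rve.shape[1])
    return np.roll(rve, shift=(dy, dx), axis=(0, 1))

def augment_batch(batch, epoch):
    """Translate a random 50% of the RVE frames every 10th epoch."""
    if epoch % 10 != 0:
        return batch
    out = batch.copy()
    for i in range(len(out)):
        if rng.random() < 0.5:
            out[i] = translate_rve(out[i])
    return out
```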
### Deep inception module

In the context of microstructure homogenization, the material behaviour is impacted by various factors, e.g., locations with narrow gaps between inclusions, as well as large inclusion clusters. In the special case of percolation, a single inclusion cluster stretches over the entire RVE, often in a curved manner, which also has to be detected in order to make accurate predictions of the material's behaviour. Thus, characteristics of different relative size in the image have to be detected by the convolutional layers. In order to detect these various features, we deploy parallel branches of differently sized kernels, thereby designing the Conv Net to capture differently sized characteristics. Something similar, namely the inception module, has been previously introduced by [20]. Their original intention was to reduce computational overhead and memory usage by increasing width instead of depth in Conv Nets. Their resulting model, which only used inception modules (Fig. 16), achieved state of the art results. The inception modules were implemented by replacing the convolution operation between two layers with multiple parallel convolutions of differently sized kernels, where each of the convolution operations has padding and a stride of 1, since the feature maps are concatenated channel-wise at the end of the module. To reduce the spatial resolution in the deep Conv Net, [20] used pooling between multiple inception modules. Building upon the idea of multiple parallel convolutions, we increase the depth of each parallel convolution _branch_ and propose the _deep inception module_ (Fig. 17). The constraint of stride 1 in the convolutional layers is dropped, and the number of operations in each branch can be flexibly adjusted. Each branch is then designed to capture phenomena of a different size: e.g., by a preceding average pooling/coarse graining*, we can easily increase the receptive field of the first \(5\times 5\) convolution to \(25\times 25\) pixels (Fig. 17, rightmost branch). Additionally, we ensure that some branches deploy convolution operations directly on the raw image information, to capture small-sized effects within the RVE (Fig. 17, leftmost branch).

Footnote *: Note that large average pooling introduces only a minor loss of information, since edge information is mostly retained when using float values (cf. Fig. 13).

The only remaining constraint in the deep inception module is that the downsampling factor through stride/pooling has to match in each branch, since the channels are concatenated after the convolution operations. In Fig. 17, each branch individually achieves a downsampling by a factor of 40, and the channels of all branches are concatenated at the end.

Figure 16: The original inception module is graphically illustrated, where multiple convolution operations are conducted between the layers. The graphic layout is copied from [20], fig. 2, and slightly modified.

Figure 17: One deep inception module is shown, where multiple deeper convolution branches are conducted in parallel. Each kernel is quadratic, and the dimensions are read as size/stride \(\cdot\,n_{\text{channels}}\).

### The hybrid neural network

So far, two different model archetypes have been introduced to predict the homogenized material property: the dense feed forward neural network (FFNN) using the handcrafted features, and the convolutional neural network (Conv Net). Both model archetypes are able to outperform the reference model [7], and we aim to obtain further improvements by combining the FFNN and the Conv Net in parallel. To additionally support the prediction, a bypass using only the volume fraction, which is the single most impactful variable in homogenization, has been implemented to serve as a baseline prediction. The resulting hybrid neural network is shown in Fig. 18 and consists of three different contributions to the model's prediction, given as

\[\hat{\underline{\kappa}}=\hat{\underline{\kappa}}_{\text{vol}}+\Delta(\hat{\underline{\kappa}}_{\text{features}}+\hat{\underline{\kappa}}_{\text{ConvNet}})\,, \tag{16}\]

where each predicted subpart \(\hat{\underline{\kappa}}_{\bullet}\) denotes one subbranch of the hybrid neural network. The idea is that each branch considers increasingly high level features in order to predict the effective heat conductivity, which is governed by complex geometrical effects.

Figure 18: The developed _hybrid neural network_ utilizes parts of a generic dense feed forward neural network as well as parts of a Conv Net with deep inception modules. The prediction of the effective property is obtained by summing up the predictions of the sub-parts of the model.

Figure 19: The multiple stages of the hybrid model's training are shown. If a model subpart is transparent in a stage, gradients were not computed for it. The line connections represent which part of the model contributed to the prediction in the current stage. Each stage except for stage 2 is trained to convergence.

To assist the model's convergence, a multistage training is implemented; the entire scheme is graphically illustrated in Fig. 19. Firstly, branch (I) using the volume fraction is trained independently until convergence (stage 1). After convergence the parameters of (I) are frozen, but the branch contributes to all subsequent predictions. As a next step, the entire model is trained for only a few epochs (stage 2), to move the model into the proximity of a local minimum. Since the handcrafted features (section 2.2.2) remain static during training, branch (II) using these features is trained until convergence (stage 3), and its trainable parameters are frozen thereafter. In the next step, the last branch (III) containing the Conv Net is trained until convergence (stage 4), while the feature branch (II) contributes to the prediction. The motivation is that the Conv Net can capture high level features which explain the remaining variations uncaptured by the handcrafted features. In the final step, the interdependent parameters are finetuned by once again training the entire model until convergence (except for branch (I)).
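The combination (16) can be sketched with the Keras functional API (ours; the input shapes, layer widths and activations are assumptions, and the deep inception branches are stood in for by plain strided convolutions for brevity):

```python
import tensorflow as tf
from tensorflow.keras import layers

image = tf.keras.Input(shape=(400, 400, 1))  # RVE image
feats = tf.keras.Input(shape=(51,))          # handcrafted features
vol = tf.keras.Input(shape=(1,))             # volume fraction

# (I) volume fraction bypass, the baseline prediction
k_vol = layers.Dense(3, name="kappa_vol")(layers.Dense(16, "selu")(vol))

# (II) dense regressor on the handcrafted features
k_feat = layers.Dense(3, name="kappa_features")(layers.Dense(64, "selu")(feats))

# (III) Conv Net branch (deep inception modules abbreviated to plain convs)
c = layers.Conv2D(8, 5, strides=4, activation="selu")(image)
c = layers.Conv2D(16, 5, strides=4, activation="selu")(c)
k_conv = layers.Dense(3, name="kappa_convnet")(layers.Flatten()(c))

# Eq. (16): baseline plus the corrections of branches (II) and (III)
kappa = layers.Add()([k_vol, k_feat, k_conv])
model = tf.keras.Model([image, feats, vol], kappa)
```

The multistage training (Fig. 19) can then be emulated by freezing a converged branch, e.g., `model.get_layer("kappa_vol").trainable = False`, and recompiling before the next stage.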
## 3 Results

During the process of finding the best model, i.e., the hybrid neural network, a multitude of neural networks has been trained in a step by step procedure. The results of each intermediate step are compactly summarized in tab. 2 and visualized via an R\({}^{2}\) plot in appendix C3. Further details on the different prediction contributions, i.e., the handcrafted features and the convolutional neural networks (Conv Nets), are given separately. In general, different losses have been deployed for the different network archetypes. Regarding model parameter optimization, the same state of the art optimizers have been used across the different models, consisting of ADAM gradient back propagation with weight decay [42], early stopping, and a constant learning rate with default hyperparameters. The model training has been implemented using the tensorflow API [43], and the code for training and post processing can be found in [22]. All models have been trained using the same data outlined in section 2.1 (elaborated in appendix A). Validation was done using a benchmark dataset of completely unseen inclusion shapes, whilst a mix of inclusions within a single RVE is allowed (Fig. 20). As an intuitively interpretable metric we use the relative error measure

\[e=100\cdot\sqrt{\frac{1}{n_{\mathrm{samples}}\cdot n_{\kappa}}\sum_{i=1}^{n_{\mathrm{samples}}}\sum_{j=1}^{n_{\kappa}}\frac{(y_{ij}-\hat{y}_{ij})^{2}}{y_{ij}^{2}}}\,, \tag{17}\]

which coincides with the error measure introduced in (8) and will be denoted rel. \(\sqrt{\mathrm{MSE}}\) in the following. Here \(y_{ij}\) denotes the target values and \(\hat{y}_{ij}\) the model's prediction, for all samples \(i\) and all components \(j\) (of the heat conduction tensor).
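The metric (17) is a one-liner; a numpy sketch (ours) with toy data:

```python
import numpy as np

def rel_rmse(y_true, y_pred):
    """Relative rooted mean squared error of Eq. (17), in percent."""
    return 100.0 * np.sqrt(np.mean((y_true - y_pred) ** 2 / y_true ** 2))

y_true = np.array([[1.00, 0.95, 0.02], [0.80, 0.85, -0.01]])
y_pred = np.array([[1.01, 0.94, 0.02], [0.81, 0.84, -0.01]])
print(rel_rmse(y_true, y_pred))  # roughly 1% relative error
```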
### Feature engineering and selection

During the feature engineering approach, in each iteration step a new Bayesian neural network (BNN) was trained using the same architecture and the Bayesian loss described in (4). The architecture is given in appendix B1. The features which have been added in each iteration step were motivated and described in section 2.2.2. The actual improvements per step are given in tab. 2 and graphically shown in Fig. 22. In Fig. 22, the density blur highlights the locations where most of the model predictions coincide, and it can be nicely seen that its center, i.e., the brightest spot, shifts closer to the origin in each iteration. It can also be seen that the mean prediction error improves whilst the average predicted uncertainty decreases. Note that in the first iteration the coefficients of the 2-point correlation function (2PCF) were used [7], in the second iteration the band features were added, and in the third iteration the remainder of the features was added, consisting of the volume fraction distribution, the directional edge distribution and the global directional mean (cf. tab. 1). As has been previously stated, any tests on additional features yielded little to no improvements.

Figure 22: A density plot is shown which relates the aleatoric uncertainty to the rel. \(\sqrt{\mathrm{MSE}}\) in each iteration step, overlayed with a scatterplot of the actual predictions. Samples further away from the brightest spot in the density plot are displayed larger for better visibility. The 'x' marks the mean values. Note that the abscissa of the leftmost plot is scaled differently.

Since the improvements from the second to the third iteration were smaller than in the first iteration, a feature selection process was initiated to spot whether the introduced features are redundant or do not positively contribute to the prediction. The correlation matrix in Fig. 21, showing the absolute values of the Pearson correlation scores, indicates that the features are suitably uncorrelated, though some correlations remain within each feature class. In a further investigation step, the features have been ranked by two filter and two wrapper methods. Filter methods estimate feature importance from plain data observation, whereas wrapper methods rate the importance based on an intermediate surrogate model which measures _scores_ of prediction contribution. The deployed filter methods used the Pearson correlation scores and the analysis of variance (ANOVA) via F-scores [44]. The deployed wrapper methods were the recursive feature elimination (RFE) [45] and a method deploying random forests [46]. Each of these methods found a slightly different order of the feature indices (Pearson, ANOVA, RFE and random forest rankings), where the features are color coded to display agreement between the feature selection methods: features marked by dark blue have been ranked among the top/bottom 13 features by all methods, light blue by three, green by two, and orange by one. The curious reader may compare the ranked indices with the indices and corresponding features displayed in the correlation matrix (Fig. 21). Overall, the volume fraction and most of the band features were rated as the most significant by all methods, while some edge distribution features and reduced coefficients of the 2PCF were commonly rated with the lowest scores. In a final step, multiple BNNs (with the same architecture, cf. appendix B1) have been trained using only a subset of the available features. The BNNs used \(6,9,12,\ldots,51\) features in each step and were trained in a 'best out of five' setup, meaning that five BNNs have been trained for each feature subset ordered by each feature selection method. The achieved validation losses are given in Fig. 23. Generally, there is very little deviation between the different feature selection methods, and more strikingly, the loss decreases almost monotonically with the number of features. Thus, all 51 features have been chosen for the subsequent hybrid neural network, since all of the proposed features positively contribute to the prediction.

Figure 23: The validation loss development with respect to the total number of features is shown. Each scatter point shows the lowest validation loss of the best BNN out of five for the different feature selection methods.
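The four rankings can be reproduced in spirit with scikit-learn (a sketch under our assumptions; in particular, the base estimator of the RFE is not specified in the text, and a linear model is used as a stand-in):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE, f_regression
from sklearn.linear_model import LinearRegression

X = np.random.rand(500, 51)  # placeholder feature matrix (51 features)
y = np.random.rand(500)      # placeholder target, e.g., kappa_11

pearson = np.argsort(-np.abs(np.corrcoef(X.T, y)[-1, :-1]))   # filter 1
anova = np.argsort(-f_regression(X, y)[0])                    # filter 2
rfe = RFE(LinearRegression(), n_features_to_select=1).fit(X, y)
rfe_rank = np.argsort(rfe.ranking_)                           # wrapper 1
forest = RandomForestRegressor(n_estimators=100).fit(X, y)
rf_rank = np.argsort(-forest.feature_importances_)            # wrapper 2
```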
### Convolutional and hybrid neural networks

For the convolutional neural networks (Conv Nets), the aleatoric uncertainty was dropped and only one deterministic value was predicted per component. The loss deployed for model optimization and calibration was the mean squared error (MSE). At first, a generic Conv Net was deployed to predict the effective heat conductivity using the available image data. The full architecture of the Conv Net is given in appendix B2. This model underperformed previous state of the art results at first; however, it has been continuously improved to outperform even the feature based approach. Since the volume fraction is the most relevant parameter in homogenization, the plain Conv Net was improved by implementing the bypass using the volume fraction (equivalent to the hybrid model in section 2.5), such that the Conv Net was forced to dedicate its attention to detecting high level features which quantify phenomena explaining variations around the volume fraction. The next improvement was found by implementing the deep inception modules. Two deep inception modules were applied to the input image, and their outputs were flattened and concatenated into one dense regressor. The full architecture of each intermediate neural network is given in appendix B2. The attentive reader might have noted that the loss of the model deploying deep inception modules on the benchmark set in tab. 2 is slightly larger than for the generic Conv Net; however, the model using deep inception modules achieved a low training loss of \(1.7\cdot 10^{-5}\), outperforming all previous models by a factor of 4. Thus, to fully capitalize on the deep inception modules, the dataset was enriched through the proposed data augmentation scheme, by randomly translating 50% of the input images every 10th epoch during training. The data augmentation scheme has also been deployed for the generic Conv Net, however leading to an accurate but strongly over-fitting model, giving prohibitive prediction errors for a few samples. This is best seen in the discrepancy between the mean and median error on the test and benchmark set in tab. 2. Further improvements were only found by the implementation of the hybrid neural network, adding only very few additional parameters to the inception Conv Net.

| Model type | mean rel. error \(\bar{\kappa}_{11}\) [%] | mean rel. error \(\bar{\kappa}_{22}\) [%] | median rel. error \(\bar{\kappa}_{11}\) [%] | median rel. error \(\bar{\kappa}_{22}\) [%] | rel. \(\sqrt{\mathrm{MSE}}\) [%] | MSE [-] | trainable parameters |
| --- | --- | --- | --- | --- | --- | --- | --- |
| volume fraction only | 6.99 (5.60) | 6.86 (5.81) | 5.89 (4.14) | 5.54 (4.24) | 8.91 (7.47) | 144.12 (91.2) | 241 |
| reference model [7] | 1.92 (1.62) | 1.91 (1.63) | 1.37 (1.28) | 1.30 (1.24) | 2.39 (1.99) | 10.40 (6.47) | 241 |
| feature iteration (1) | 1.40 (1.15) | 1.13 (1.19) | 1.04 (0.91) | 0.83 (0.96) | 1.90 (1.72) | 6.58 (4.84) | 4 997 |
| feature iteration (2) | 1.26 (0.1) | 1.20 (1.02) | 1.00 (0.81) | 0.91 (0.79) | 1.71 (1.38) | 5.33 (3.11) | 4 997 |
| generic Conv Net | 1.87 (1.91) | 1.81 (1.85) | 1.46 (1.55) | 1.41 (1.49) | 2.37 (2.29) | 10.21 (8.55) | 220 331 |
| generic Conv Net + vol bypass | 1.70 (1.56) | 1.56 (1.46) | 1.21 (1.28) | 1.14 (1.21) | 4.91 (1.98) | 43.79 (6.43) | 220 359 |
| inception Conv Net + vol bypass | 2.01 (1.90) | 2.11 (1.84) | 1.49 (1.51) | 1.65 (1.47) | 2.68 (2.43) | 13.07 (9.65) | 278 831 |
| generic Conv Net + vol bypass + data augmentation | \(10^{8}\) (0.88) | \(10^{7}\) (0.90) | 0.65 (0.72) | 0.70 (0.71) | \(10^{8}\) (1.23) | \(10^{17}\) (2.45) | 220 359 |
| inception Conv Net + vol bypass + data augmentation | 0.74 (0.77) | 0.79 (0.80) | 0.58 (0.62) | 0.62 (0.63) | 1.05 (1.04) | 2.00 (1.77) | 278 831 |
| hybrid model | 0.73 (0.68) | 0.74 (0.71) | 0.55 (0.54) | 0.56 (0.56) | 1.03 (0.93) | 1.91 (1.42) | 283 703 |

Table 2: Error measures are given for each step in the improvement process of the artificial neural networks. The plain numbers present the error measures computed on the benchmark set, and the values given in brackets refer to the error measures computed on a test set affine to the training data. The different model types (feature based models, Conv Nets and the hybrid model) are separated into blocks.
0.62 (0.63) & 1.05 (1.04) & 2.00 (1.77) & 278 831 \\ +data augmentation & & & & & & \\ \hline hybrid model & 0.73 (0.68) & 0.74 (0.71) & 0.55 (0.54) & 0.56 (0.56) & 1.03 (0.93) & 1.91 (1.42) & 283 703 \\ \hline \hline \end{tabular} \end{table} Table 2: Error measures are given for each step in the improvement process of the artificial neural networks. The full numbers present the error measures computed on the benchmark set and the values given in the brackets refer to the error measures computed on a test set affine to the training data. The different model types are separated via horizontal lines. Figure 24: The plots display the target values and the respective hybrid neural network prediction for some randomly selected samples and the samples with the highest prediction error for each component of the heat conduction tensor. Each plot shares the same legend displayed in the middle. The bar plots at the bottom are computed using the absolute error and show the error quantiles on the current volume fraction interval. Note that the error bars consider the entire dataset, even the samples which were not plotted in favor of a less cluttered and better readable plot. inception Conv Net. As discussed in section 2.5, the hybrid model did not always converge to a good local minimum, however, after implementing the multistage training, it reliably found an excellent minimum (Fig. 25). Note that for almost every model the validation loss is lower than the training loss in Fig. 25, which is explicable by the data augmentation scheme. A more detailed error analysis of the hybrid model is given in figure format in Fig. 24. The plot shows only a few sample predictions but it is ensures that the samples with the highest prediction errors are shown. As can be seen, the model is incredibly accurate, and is even able to accurately predict outliers of the offdiagonal component \(\bar{\kappa}_{12}\). ### Physical correctness In order to further quantify the accuracy of the hybrid neural network, a brief study regarding physical correctness is conducted. Starting out with the first property of frame translation invariance, which is not enforced but the data augmentation scheme is of assistance. Note that the previously mentioned manually derived features are designed to be frame translational invariant, with the minor exception of the volume fraction/directional edge distribution, which fluctuates ever so slightly based on the location of the pooling grid. Some errors of the hybrid neural network with respect to frame translation are seen in Fig. 26, where it can be seen that there is no configuration of the RVE in which the model is significantly worse than in any other. The model does not yield the same prediction for every possible configuration, however, there is only slight variations between the predictions of the different configurations. Additionally, a different geometric transform can be used to generate auxiliary samples, i.e., by rotating the RVE frames (Fig. 27). After rotation, the distinct values of the heat conduction tensor \(\bar{\underline{\kappa}}\) are swapped and do not change, except for the sign in \(\bar{\kappa}_{12}\). Ideally, this property should also be learned by the hybrid model, which was, similar to the translational invariance, not the case up to minor fluctuations (tab. 3). There seems to be an ever so slight bias in favor of the prediction of \(\bar{\kappa}_{11}\), which implies a minor bias in the training data. 
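The rotation consistency check can be sketched in a few lines; `model` is a placeholder for any predictor returning the Mandel components \([\bar{\kappa}_{11},\bar{\kappa}_{22},\sqrt{2}\bar{\kappa}_{12}]\) of a given RVE image, and the routine returns the deviations from the ideal behaviour:

```python
import numpy as np

def rotation_consistency(model, rve):
    """Minimal sketch of the rotation check: after rotating the RVE by
    90 degrees, kappa_11 and kappa_22 must swap, and the off-diagonal
    component must change its sign."""
    k11, k22, k12 = model(rve)                  # original configuration
    k11_r, k22_r, k12_r = model(np.rot90(rve))  # rotated configuration
    # deviations from the ideal (physically exact) behaviour
    return np.abs([k11 - k22_r, k22 - k11_r, k12 + k12_r])
```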
The model seems to perform slightly better for the original configuration of the RVE; however, this effect is negligibly small. Note that the rotation of the RVE frames could also be implemented for data augmentation; however, the features would have to be recomputed on every augmentation step since they are, correctly so, not rotationally invariant. The hybrid model with data augmentation is able to correctly reflect physical behaviour up to an acceptable prediction error.

#### 3.2.1 Variable phase contrast

To further test the capabilities of the potent hybrid model, it was trained to predict the thermal behaviour for variable phase contrast ranging from \(R=2\) to \(R=100\), while considering insulating inclusions (previously, \(R=5\) was fixed). During training, the input data of the model did not change compared to the previously discussed data; the output data, however, changed: the effective heat conductivity of all training samples was computed at the discrete phase contrasts \(R\in\{2,5,10,20,50,100\}\). In order to inform the hybrid model of the variable phase contrast, one extra input neuron was added in each branch of the model (the red neuron in Fig. 18), leading to less than 500 additional parameters. The phase contrast input parameter was linearly scaled from 0 to 1. One further adjustment was made, i.e., instead of the MSE the rel. \(\sqrt{\mathrm{MSE}}\) was used as the cost function. The resulting prediction errors on the benchmark set are compactly summarized in tab. 4 and presented more elaborately in the appendix in Fig. 14.

Figure 25: The achieved validation (full lines) and training loss (dashed lines) is shown after model convergence. The hybrid neural network is compared with and without pretraining for five randomly initialized trainings, as well as the Conv Net using deep inception modules.

There it can also be seen that the prediction becomes increasingly challenging for higher phase contrasts, which the model was still able to predict with a moderately low rel. \(\sqrt{\mathrm{MSE}}\) of \(12\%\) for the highest phase contrast. The model's accuracy for \(R=5\) did slightly deteriorate compared to the previously shown hybrid model; however, this is the tradeoff for generalization capabilities, where the model is able to predict every phase contrast with low prediction errors. One major advantage of training the different phase contrasts in a single model is the interpolation capability, for which the prediction errors are given in Fig. 28 for \(R\in\{2,3,\ldots,100\}\). The plot shows that the smooth interpolation is in general accurately given for almost all phase contrasts within the training range, with a minor exception on the short interval of \(R\approx[60,72]\), where the prediction errors are slightly larger.

## 4 Summary

The prediction of the homogenized response is improved by novel features which are developed with Bayesian assisted data mining. The aleatoric uncertainty is used to collect samples which contain characteristics inexplicable to the current feature set; commonalities within these samples were found and multiple feature descriptors were developed, efficiently quantifying these characteristics. In addition to the manually engineered features, convolutional neural networks are deployed, where we propose deep inception modules which are a priori designed to capture phenomena at different length scales within the microstructural image data.
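The following sketch conveys the underlying idea of such a module: parallel convolutional branches whose kernel sizes and strides target different length scales while yielding feature maps of identical resolution. Branch depths, kernel sizes, and channel counts are illustrative assumptions and do not reproduce the layout of appendix B2.

```python
import torch
import torch.nn as nn

class DeepInceptionModule(nn.Module):
    """Sketch of a deep inception module: parallel branches of (stacked)
    convolutions with different receptive fields; all branches reduce the
    resolution by the same factor so their outputs can be concatenated."""

    def __init__(self, in_ch=1, ch=8):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(  # small length scale, deeper branch
                nn.Conv2d(in_ch, ch, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU()),
            nn.Sequential(  # medium length scale
                nn.Conv2d(in_ch, ch, 7, stride=4, padding=3), nn.ReLU()),
            nn.Sequential(  # large length scale
                nn.Conv2d(in_ch, ch, 15, stride=4, padding=7), nn.ReLU()),
        ])

    def forward(self, x):
        # for the 400x400 images considered here every branch returns
        # 100x100 feature maps, allowing channel-wise concatenation
        return torch.cat([branch(x) for branch in self.branches], dim=1)
```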
The major advantage of the deep inception modules appears to be the improved generalization capabilities and their regularizing effect.

\begin{table} \begin{tabular}{|l|c c|c c|c|c|} \hline \multirow{2}{*}{\begin{tabular}{c} RVE image \\ configuration \end{tabular}} & \multicolumn{2}{c|}{mean relative error [\%]} & \multicolumn{2}{c|}{median relative error [\%]} & rel. \(\sqrt{\mathrm{MSE}}\) [\%] & MSE \(\cdot 10^{-5}\) [-] \\ & \(\bar{\kappa}_{11}\) & \(\bar{\kappa}_{22}\) & \(\bar{\kappa}_{11}\) & \(\bar{\kappa}_{22}\) & & \\ \hline original & 0.730 & 0.742 & 0.552 & 0.562 & 1.027 & 1.915 \\ \hline rotated & 0.759 & 0.751 & 0.564 & 0.574 & 1.045 & 1.982 \\ \hline \end{tabular} \end{table} Table 3: The error measures have been computed for the hybrid neural network once in the original configuration of the RVE and once in the rotated configuration for the entire benchmark dataset. In the rotated configuration the components \(\bar{\kappa}_{11}\) and \(\bar{\kappa}_{22}\) are flipped and the off-diagonal component \(\bar{\kappa}_{12}\) changes its sign.

Figure 27: The prediction of two RVEs out of the benchmark dataset is given for the original representation of the RVE as well as the rotated configuration. The model's prediction \(\underline{\hat{\kappa}}\) as well as the actual target values \(\underline{\bar{\kappa}}\) are shown below each RVE.

To utilize the deep inception modules to their full potential, a data augmentation scheme is presented, which can generate more than 100 000 input samples per data point without increasing memory consumption. The two different neural network archetypes utilizing the newly engineered features and the deep inception modules are combined into a hybrid neural network, deploying the archetypes in parallel to yield its prediction. To improve convergence behaviour, a multistage training is implemented. The resulting model was able to more than halve the prediction error compared to previous state-of-the-art models. After extending the model to predict variable phase contrast, the proposed hybrid neural network was able to accurately predict the material response for the challenging data. It performs just slightly worse on a single phase contrast compared to the model trained only on the respective data, even without touching the network layout. Remarkably, the variable contrast model still majorly outperforms the reference model [7] by almost halving the prediction error, while allowing for completely arbitrary inputs. The model also performs reasonably well on interpolation, yielding accurate predictions even for phase contrasts the model has not been trained on.

## 5 Discussion

One of the key motivations of machine learning is its efficient evaluation during inference. The overhead of computing all the features as well as the evaluation of the convolutional neural network is noticeable. Since all of the features, including the reduced coefficients, are computable in Fourier space or trivially obtainable, the FFT has to be conducted only once for each sample. Additionally, the convolutional kernels can be pre-computed and stored in Fourier representation when the resolution is fixed. One computational downside for the band features is that in each direction the IFFT has to be taken, since the maximum and the minimum in real space are required; similarly, the edge distribution requires the absolute value together with its pixel location in real space.
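A minimal sketch of this evaluation pattern is given below; the names `basis_hat` (a reduced basis for the 2PCF) and `kernel_hat` (a precomputed band kernel), both assumed to be stored in Fourier representation, as well as the simplified feature definitions are illustrative assumptions rather than the actual feature implementations.

```python
import numpy as np

def fft_based_features(A, basis_hat, kernel_hat):
    """Sketch: a single FFT per image suffices. Reduced 2PCF coefficients
    follow directly in Fourier space, whereas a band-type feature needs an
    additional IFFT because extrema in real space are required."""
    F = np.fft.fft2(A)                         # the only FFT per sample
    c2_hat = F * np.conj(F) / A.size           # 2PCF in Fourier space
    # projection onto a reduced basis, entirely in Fourier space
    xi = np.tensordot(np.conj(basis_hat), c2_hat,
                      axes=([1, 2], [0, 1])).real
    # band-type feature: extra IFFT, extrema taken in real space
    band = np.fft.ifft2(F * kernel_hat).real
    return xi, band.max() - band.min()
```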
Consequently, these features are only obtainable with an additional computational overhead.

\begin{table} \begin{tabular}{|c|c c|c c c|c|c|} \hline \multirow{3}{*}{\begin{tabular}{c} Phase \\ contrast \end{tabular}} & \multicolumn{2}{c|}{mean relative error [\%]} & \multicolumn{3}{c|}{mean absolute error [-]} & rel. \(\sqrt{\mathrm{MSE}}\) & MSE \\ & \multicolumn{2}{c|}{\(\bar{\kappa}_{\bullet}>0.2\)} & \multicolumn{3}{c|}{\(\bar{\kappa}_{\bullet}\leq 0.2\)} & [\%] & \(\cdot 10^{-5}\) [-] \\ & \(\bar{\kappa}_{11}\) & \(\bar{\kappa}_{22}\) & \(\bar{\kappa}_{11}\) & \(\bar{\kappa}_{22}\) & \(\bar{\kappa}_{12}\) & & \\ \hline \hline 2 & 0.61 & 0.59 & - & - & 0.0021 & 0.70 & 2.37 \\ 5 & 1.01 & 0.97 & - & - & 0.0043 & 1.33 & 3.60 \\ 10 & 2.07 & 2.01 & 0.0048 & 0.0053 & 0.0065 & 2.74 & 9.56 \\ 20 & 2.78 & 2.69 & 0.0087 & 0.0111 & 0.0090 & 4.71 & 18.98 \\ 50 & 3.17 & 3.36 & 0.0139 & 0.0159 & 0.0121 & 8.40 & 34.37 \\ 100 & 3.46 & 3.74 & 0.0159 & 0.0190 & 0.0134 & 12.16 & 44.50 \\ \hline \end{tabular} \end{table} Table 4: Averaged error measures over the entire benchmark dataset are shown for the model being able to predict variable phase contrast. Depending on the target value of \(\bar{\kappa}_{\bullet}\), a relative or an absolute error measure is computed; values denoted with - are not defined (c.f. Fig. 14). The error measures are given at the discrete phase contrasts the model was trained on.

Figure 28: The prediction for variable phase contrast is given for the full interpolation range of \(R=[2,100]\) using \(\Delta R=1\). To the left, the averaged prediction over the entire set is given. The right three plots, which share the same legend, show the average prediction in each component, as well as the prediction of two distinct samples of the highest and lowest relative error. The predictions are always given in full lines and the reference solution in dashed lines. The phase contrasts which the hybrid neural network has seen during training are highlighted by dots.

When timing the feature computation as well as the prediction, the model outperformed the intrinsically fast Fourier-accelerated solvers by a factor of \(\approx 10\), where the feature derivation made up about 80% of the computational effort, taking \(\approx 77\) seconds for 1500 samples, whilst the hybrid model's prediction took \(\approx 9\) seconds. Note that the prediction can be evaluated for multiple phase contrasts once the input feature vector is obtained. Further computational speedups can be obtained with additional hyperparameter tuning of the deep inception modules. However, the obtained improvement is already significant, such that additional hyperparameter tuning has been omitted in favor of future research, especially when considering the required time investment to explore the practically infinite range of possible variations (inception module depth, width, tuning of each branch, etc.) and the possibility of nested deep inception modules. Regarding the prediction accuracy, the hybrid neural network, which uses the manually engineered features as well as the deep inception modules, was able to significantly outperform all other considered models. Ultimately, it fails to learn exact physical behaviour, which is not too surprising since the prediction is solely based on a neural network without any constraints.
Acknowledgments: Funded by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC 2075 - 390740016. Contributions by Felix Fritzen are funded by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) within the Heisenberg program DFG-FR2702/8 - 406068690 and DFG-FR2702/10 - 517847245. We acknowledge the support by the Stuttgart Center for Simulation Science (SimTech).

## Declarations

* Funding: Funded by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC 2075 - 390740016. Contributions by Felix Fritzen are funded by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) within the Heisenberg program DFG-FR2702/8 - 406068690 and DFG-FR2702/10 - 517847245. We acknowledge the support by the Stuttgart Center for Simulation Science (SimTech).
* Conflict of interest/Competing interests: The authors declare no conflict of interest.
* Availability of data and materials: The data is publicly available in the data repository of the University of Stuttgart (DaRUS) [21].
* Code availability: The code will be made publicly available in [22] upon publication.

## Appendix A Data overview

The binary image data \(A\) is represented via voxels as a discrete field, such that we have \[A\in\mathbb{A}^{n\times n},\quad A_{ij}=A(\mathbf{x}_{ij}),\quad\mathbf{x}_{ij}\in\Omega, \tag{11}\] where the values \(A_{ij}\) denote pixel-wise constant values and the data range \(\mathbb{A}\subset\mathbb{N}_{0}\) depends on the number of discrete phases. In general, \(A\) represents image data, more specifically the image data representing the RVE. The analyzed microstructure images are given by binary voxel data, i.e., \(\mathbb{A}=\{0,1\}\), and the resolution is fixed to \(400\times 400\), i.e., \(i,j\in[0,1,\ldots,399]\) (using python notation). The binary color space \(\mathbb{A}=\{0,1\}\) is displayed as black for the matrix phase of the material (value 0), while the light gray foreground represents the inclusions (value 1), as can be seen in Fig. 2. The material is exemplary for metal-ceramic or polymer-glass composite materials. This study focuses on the thermal behaviour of the material, i.e., on the effective heat conductivity (equivalently, this can be interpreted as a prediction of the permeability of the material, which has the same underlying mathematical structure). The properties of the macroscopic material are induced by the parameters of the phases in the RVE, where a phase contrast of \(R=5\) is considered such that we have \[\kappa_{\rm M}=1\,\frac{\rm J}{\rm s\,K\,m},\qquad\kappa_{\rm I}=\frac{\kappa_{\rm M}}{R}\,, \tag{12}\] i.e., low-conducting inclusions are considered. Here the subscript M denotes the matrix phase, and the subscript I denotes the inclusion phase. Simulations of the heat flow conducted via FANS [47] yield the symmetric effective heat conduction tensor \(\tilde{\underline{\kappa}}\) as \[\tilde{\underline{\kappa}}=\begin{bmatrix}\tilde{\kappa}_{11}&\tilde{\kappa}_{12}\\ \tilde{\kappa}_{12}&\tilde{\kappa}_{22}\end{bmatrix} \tag{13}\] in the 2D setting. Due to the symmetry, the target values are represented in Mandel notation \[\tilde{\underline{\kappa}}=\begin{bmatrix}\tilde{\kappa}_{11}\\ \tilde{\kappa}_{22}\\ \sqrt{2}\,\tilde{\kappa}_{12}\end{bmatrix}\,. \tag{14}\] To obtain de-dimensionalized output values, the heat conduction tensor is normalized by the material property of the matrix phase, i.e., \[\bar{\underline{\kappa}}=\frac{\tilde{\underline{\kappa}}}{\kappa_{\rm M}}\,, \tag{15}\] again given in Mandel notation. The investigated data, generated by our in-house algorithms, is made publicly available [21]. It hosts all of the data used for the methodological developments.
Considering the training data for the neural networks, the input data is shifted and scaled such that each input feature of the neural network has zero mean and unit standard deviation in order to ensure statistical equivalence of all features. The transformation of the samples \(X_{i},\,i=1,\dots,n\) of a feature \(X\) into \(\widetilde{X}_{i}\) is computed via \[\overline{X}=\frac{1}{n}\sum_{j=1}^{n}X_{j},\qquad\widetilde{X}_{i}=\frac{\sqrt{n-1}\,(X_{i}-\overline{X})}{\sqrt{\sum_{j=1}^{n}(X_{j}-\overline{X})^{2}}}\,. \tag{16}\] This process is performed individually for each feature, i.e., zero cross-correlation of the inputs is asserted for simplicity. Regarding the target values \(\bar{\underline{\kappa}}\): the diagonal components of the heat conduction tensor are well defined with \(\bar{\kappa}_{ii}\in(0,1)\), and the off-diagonal component \(\bar{\kappa}_{12}\) fluctuates around \(0\) with relatively small values due to the positive definiteness of the tensor (c.f. Fig. 11). The physically possible range of output values is consequently machine learning friendly and, therefore, no additional scaling is performed. The complete dataset contains 30 000 samples; however, only 3000 samples were used for training and validation purposes. The development of the methods was supported by an auxiliary test set containing 1500 unseen samples. Additionally, a new set of microstructure images containing a mix of circular and rectangular inclusions within a single RVE and, further, structures composed of ellipsoidal inclusions are used as benchmark, constituting truly unseen data.

## Appendix B Neural network architectures

## Appendix C Full error plots
2303.03761
Graph Neural Networks in Vision-Language Image Understanding: A Survey
2D image understanding is a complex problem within computer vision, but it holds the key to providing human-level scene comprehension. It goes further than identifying the objects in an image, and instead, it attempts to understand the scene. Solutions to this problem form the underpinning of a range of tasks, including image captioning, visual question answering (VQA), and image retrieval. Graphs provide a natural way to represent the relational arrangement between objects in an image, and thus, in recent years graph neural networks (GNNs) have become a standard component of many 2D image understanding pipelines, becoming a core architectural component, especially in the VQA group of tasks. In this survey, we review this rapidly evolving field and we provide a taxonomy of graph types used in 2D image understanding approaches, a comprehensive list of the GNN models used in this domain, and a roadmap of future potential developments. To the best of our knowledge, this is the first comprehensive survey that covers image captioning, visual question answering, and image retrieval techniques that focus on using GNNs as the main part of their architecture.
Henry Senior, Gregory Slabaugh, Shanxin Yuan, Luca Rossi
2023-03-07T09:56:23Z
http://arxiv.org/abs/2303.03761v2
# Graph Neural Networks in Vision-Language Image Understanding: A Survey

###### Abstract

2D image understanding is a complex problem within Computer Vision, but it holds the key to providing human level scene comprehension. It goes further than identifying the objects in an image, and instead it attempts to _understand_ the scene. Solutions to this problem form the underpinning of a range of tasks, including image captioning, Visual Question Answering (VQA), and image retrieval. Graphs provide a natural way to represent the relational arrangement between objects in an image, and thus in recent years Graph Neural Networks (GNNs) have become a standard component of many 2D image understanding pipelines, becoming a core architectural component especially in the VQA group of tasks. In this survey, we review this rapidly evolving field and we provide a taxonomy of graph types used in 2D image understanding approaches, a comprehensive list of the GNN models used in this domain, and a roadmap of future potential developments. To the best of our knowledge, this is the first comprehensive survey that covers image captioning, visual question answering, and image retrieval techniques that focus on using GNNs as the main part of their architecture.

Graph Neural Networks, Image Captioning, Visual Question Answering, Image Retrieval

## I Introduction

Recent years have seen an explosion of research into Graph Neural Networks (GNNs), with a flurry of new architectures being presented in top-tier machine learning conferences and journals every year [1, 2, 3, 4, 5, 6]. The ability of GNNs to learn in non-Euclidean domains makes them powerful tools to analyse data where structure plays an important role, from chemoinformatics [7] to network analysis [8]. Indeed, these models can also be applied to problems not traditionally associated with graphs, such as 3D object detection in LiDAR point clouds [9] and shape analysis [10]. GNN-based approaches have gained increasing popularity for solving 2D image understanding vision-language tasks, similar to other domains [11, 12, 13, 14, 15]. Whilst advances in this domain are discussed in [16], that is a wide-ranging survey; our work focuses specifically on vision-language and therefore covers these topics more extensively. We view 2D image understanding as the high level challenge of making a computer understand a two-dimensional image to a level equal to or greater than a human. Models that enable this should be able to reason about the image in order to describe it (image captioning), explain aspects of it (Visual Question Answering (VQA)), or find similar images (image retrieval). These are all tasks that humans can do with relative ease; however, they are incredibly difficult for deep learning models and require a large amount of data. These tasks also fall under the category of vision-language problems, as they require the model to have an understanding of both the image pixels and a language (typically English) in which the models can express their understanding. Whilst there is a plethora of techniques that have been applied to these problems [17, 18, 19, 20, 21, 22, 23], this survey focuses on graph-based approaches. There are a range of graphs that are applicable, but the most widely used and understood is the semantic scene graph [24, 25]. This graph is constructed of nodes representing visual objects and edges representing the semantic relationships between them. The semantic graph as well as further graph types are discussed in Section II-C.
Alongside a taxonomy of the graph types used across 2D image understanding tasks, this paper contributes a much needed overview of these approaches. Covering the three main tasks, we also include an overview of popular GNN techniques as well as insights on the direction of future GNN work. In the discussion section of this paper we argue that the increasingly popular Transformer architecture [26] is actually a special case of a GNN [27]. We expand upon this argument to suggest that GNNs should not be overlooked, as they may offer better inductive biases for a range of tasks. Our main contributions are: 1) a taxonomy of the graph types used in 2D image understanding tasks; 2) a comprehensive survey of GNN-based approaches to common 2D image understanding tasks; 3) a roadmap of potential future developments for the community to explore. The remainder of this paper is organised as follows: Section II gives a taxonomy of the tasks discussed and their corresponding datasets, as well as an overview of the different graph types used throughout. Section III gives an overview of the common GNN architectures used. It also briefly mentions current and future research directions for GNNs and signposts appropriate surveys. The main body of the paper is formed of Sections IV, V, and VI, which detail GNN-based approaches to image captioning, VQA, and image retrieval, respectively. We then conclude the paper with a three part discussion, with Section VII-A covering the advantages that GNNs still offer despite the rapid adoption of the Transformer architecture. This is followed by Section VII-B, which links the emerging field of latent diffusion and image generation to image captioning. Finally, Section VII-C concludes the paper and provides potential directions for future work.

## II Background and Definitions

In this section, we outline the background required to view this survey in context. We first briefly define a generic graph before outlining the taxonomy of the field. Finally, we give an overview of the various graph types.

### _2D Vision-Language Task Taxonomies_

This paper follows the taxonomies of [28, 29, 30, 31] and joins them together for a more complete overview of 2D vision-language tasks (see Figure 1). This section gives a brief overview of the existing taxonomies and highlights the sections of them this survey focuses on. It also highlights the main datasets used for the various tasks discussed in the paper; these are summarised in Table I. Whilst individual vision-language tasks have their own unique datasets, they are unified by the Visual Genome [32], an expansive dataset that provides ground truths for a range of vision-language tasks. As the most generic dataset, it has \(33,877\) object categories and \(68,111\) attribute categories. At the time of its publication this was the largest and most dense dataset containing image descriptions, objects, attributes, relationships, and question-answer pairs. Additionally, the Visual Genome also contains region graphs, scene graphs, and question-answer pairs. This results in it being a very wide-ranging dataset with many applications in visual cognition tasks such as scene graph generation [40] and VQA [41]. For image captioning, we follow [28], who identify three main approaches: 1) retrieval-based captioning, 2) template-based captioning, and 3) deep learning-based captioning. Retrieval-based captioning is built on the assumption that for every image a caption exists and needs to be retrieved from a bank of existing captions.
It was the foundation of early image captioning approaches [17] and yielded good results without the need for deep learning. However, not all images may have appropriate captions. If the captions are generic, they will only be able to describe aspects of an image and may omit its most important feature. In contrast, template-based captioning [42] uses a pre-defined caption format and uses object detection to fill in the blanks. This approach is good for generating consistent captions, but can result in captions that are unnatural and clearly generated by a machine. Contemporary approaches to the task of image captioning are based on deep learning models. Early work focused on a CNN encoder feeding an RNN-based decoder [43]; however, more recent deep learning approaches have developed to incorporate a wide variety of techniques, including GNNs [25, 44] and Transformers [45, 46]. In this survey, we focus specifically on deep learning approaches to image captioning, and in particular on graph-based approaches. Deep learning approaches are typically trained on the COCO [33] or Flickr30k [34] datasets, which contain sets of images accompanied by five human generated captions each. Taxonomies of VQA are usually defined through the lens of the datasets used by the various tasks [29, 30]. Here we focus on 1) the standard VQA task of answering a question about an image, 2) the fact-based VQA (FVQA) task of answering questions that require external knowledge to answer, and 3) text-VQA, the task of answering questions that require the model to read text in the scene and combine it with visual data. Each of the various VQA tasks has its own set of specialised datasets. The original VQA dataset [35] and the subsequently updated VQA 2.0 dataset [47] address the original task of answering questions based on the visual information in the image. The FVQA dataset [36] is built using images from ImageNet [48] and COCO [33] alongside facts from DBPedia [49], ConceptNet [50], and WebChild [51]. The images have three forms of visual concepts extracted from them using a range of models. These visual concepts include objects (items identified in the image), scene (scene level features such as a room label), and actions. Question-answer pairs were generated by human annotators who selected a visual concept and an accompanying fact triplet, which they used to generate a question. Finally, the text-KVQA dataset [39] was built by compiling images from a Kaggle movie poster challenge, [52], and Google Image search results from combining brand names with postfixes such as "store" or "building."

Fig. 1: 2D vision-language task taxonomy.

This collection of images was then given to human annotators who removed images that did not contain text of brand names. The result is a dataset of 257K images with three groupings: book, movie, and scene. Accompanying these images are 1.3 million question-answer pairs. Each image grouping gets its own triplet-based knowledge base from a relevant source: WikiData [53], IMDb, and [52], respectively. Image retrieval spans multiple tasks, all of which make use of deep learning in contemporary approaches. We follow the taxonomy of Alexander _et al_. [31] and address the following sub-tasks: text-based image retrieval, content-based image retrieval, sketch-based retrieval, semantic-based retrieval, and annotation-based retrieval. The number of datasets used for image retrieval is vast, and the community has not solidified around a single dataset in the way image captioning has around COCO [33].
This presents a challenge when making accurate comparisons between systems, as the difficulty presented by different datasets varies, complicating direct comparisons across datasets. Whilst image retrieval specific datasets exist [54], there are papers [55, 56, 57] that make use of image captioning datasets [33, 34], showing the wide range of varied datasets that exist for image retrieval.

### _Fundamental Graph Theoretical Concepts_

**Undirected Graph.** We define an undirected graph \(G\) to be a tuple of sets \(V\) and \(E\), i.e., \(G=(V,E)\). The set \(V\) contains \(n\) vertices (sometimes referred to as nodes) that are connected by the edges in the set \(E\), i.e., if \(v\in V\) and \(u\in V\) are connected by an edge then \(e_{v,u}\in E\). For an undirected graph we have that \(e_{v,u}=e_{u,v}\). **Directed Graph.** A directed graph is a graph where the existence of \(e_{v,u}\) does not imply the existence of \(e_{u,v}\) as well. Let \(A\) be the \(n\times n\) binary adjacency matrix such that \(A_{v,u}=1\) if \(e_{v,u}\in E\). Then it follows that \(A\) is in general asymmetric for directed graphs and symmetric for undirected graphs. More generally, \(A\) can be a real-valued matrix, where the value of \(A_{v,u}\) can be interpreted as the strength of the connection between \(v\) and \(u\). **Neighbourhood.** The neighbourhood \(\mathcal{N}(v)\) of a vertex \(v\in V\) is the subset of nodes in \(V\) that are connected to \(v\). The neighbour \(u\) can be either directly connected to \(v\), i.e., \((v,u)\in E\), or it can be indirectly connected by traversing \(r\) edges from \(v\) to \(u\). Note that some definitions include \(v\) itself as part of the neighbourhood. **Complete Graph.** A complete graph is one (directed or undirected) where for each vertex there is an edge connecting it to every other vertex in the set \(V\). A complete graph is therefore a graph with the maximum number of edges for a given number of nodes. **Multipartite Graph.** A multipartite graph (also known as a \(K\)-partite graph) is a graph where the nodes can be separated into \(K\) different sets. For scene understanding tasks, this allows for a graph representation where one set of nodes represents objects and another represents relationships between objects. **Multimodal Graph.** A multimodal graph is one with nodes that have features from different modalities. This approach is commonly used in VQA, where the image and text modalities are mixed. Multimodal graphs enable visual features to coexist in a graph with word embeddings.
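To make these definitions concrete, a minimal sketch using the adjacency matrix representation is given below; the toy graph and its edges are arbitrary.

```python
import numpy as np

n = 4
A = np.zeros((n, n))                        # binary adjacency matrix
A[0, 1] = A[1, 2] = A[2, 0] = A[0, 3] = 1   # directed edges e_{v,u}

is_undirected = np.array_equal(A, A.T)      # symmetric <=> undirected

def neighbourhood(A, v):
    """Direct (1-hop) neighbourhood N(v) of vertex v."""
    return np.flatnonzero(A[v])

print(is_undirected)        # False: this toy graph is directed
print(neighbourhood(A, 0))  # [1 3]
```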
### _Common Graph Types in 2D Vision-Language Tasks_

This section organises the various graph types used across all three tasks discussed in the survey. Some graphs, such as the semantic and spatial graphs, are used across all tasks [25, 41, 56], while others are more domain specific, like the knowledge graph [58, 39]. Figure 2 shows a sample image from the COCO dataset [33] together with various types of graphs that can be used to describe it. This section, alongside the figure, is organised so that graphs that represent a single image and graphs that represent portions of the dataset are grouped together.

**Semantic Graph.** Sometimes referred to as a scene graph, a semantic graph (shown in Figure 2c) is one that encapsulates the semantic relationships between visual objects within a scene. Across the literature, the terms 'semantic graph' and 'scene graph' are used somewhat interchangeably, depending on the paper. However, in this survey we use the term 'semantic graph' because there are many ways to describe a visual scene as a graph, whereas the 'semantic graph' label is more precise about what the graph represents. Semantic graphs come in different flavours. One approach is to define a directed graph with nodes representing visual objects extracted by an object detector such as Faster-RCNN [59] and edges representing semantic relationships between them. This is the approach of Yao _et al_. [25], where, using a dataset such as Visual Genome [32], a model predicts the semantic relationships to form edges in the graph. Alternatively, the semantic graph can be seen as a multipartite graph [60, 61, 44, 62] (shown in Figure 2d), where attribute nodes describe the object nodes they are linked to. They also change the way relationships are represented by using nodes rather than edge features. This yields a semantic graph with three node types: visual object, object attribute, and inter-object relationship. This definition follows that of the 'scene graph' defined by Johnson _et al_. [24]. Finally, another form of semantic graph exists, the textual semantic graph [63, 44] (shown in Figure 2f). Unlike visual semantic graphs, textual ones are not generated from the image itself but rather from its caption. Specifically, the caption is parsed by the Stanford Dependency Parser [64], a widely used [65, 66] probabilistic sentence parser. Given a caption, the parser will return its grammatical structure, identifying components such as nouns, verbs, and adjectives and marking the relationships between them. This is then modified from a tree into a graph, following the techniques outlined in [67].

**Spatial Graph.** Yao _et al_. [25] define a spatial graph (Figure 2g) as one representing the spatial relationships between objects. Visual objects detected by an object detector form nodes, and the edges between the nodes represent one of 11 pre-defined spatial relationships that may occur between the two objects. These include inside (labelled '1'), cover (labelled '2'), overlap (labelled '3'), and eight positional relationships (labelled '4'-'11') based on the angle between the centroids of the two objects. These graphs are directional but will not always be complete, as there are cases where two objects have a weak spatial relationship and are therefore not connected by an edge in the spatial graph. Guo _et al_. [61] define a graph of a similar nature known as a geometry graph. It is defined as an undirected graph that encodes relative spatial positions between objects with an overlap and relative distance that meet certain thresholds.

**Hierarchical Spatial.** These graphs build on the spatial graph, but the relationships between nodes focus on the hierarchical nature of the spatial relationships between the detected objects within an image. Yao _et al_. [68] propose to use a tree (i.e., a graph where each pair of nodes is connected by a single path) to define a hierarchical image representation. An image (\(\mathcal{I}\)) is first divided into regions using Faster-RCNN [59] (\(\mathcal{R}=\{r_{i}\}_{i=1}^{K}\)), with each region being further divided into instance segmentations (\(\mathcal{M}=\{m_{i}\}_{i=1}^{K}\)). This gives a three-layer tree structure (\(\mathcal{T}=(\mathcal{I},\mathcal{R},\mathcal{M},\mathcal{E}_{tree})\), where \(\mathcal{E}_{tree}\) is the set of connecting edges) to represent the image, as shown in Figure 2e.
He _et al_. [46] use a hierarchical spatial graph, with edges representing 'parent', 'child', and 'neighbour' relationships depending on the intersection over union of the bounding boxes.

**Similarity Graph.** The similarity graph (Figure 2h) proposed by Kan _et al_. [69] (referred to as a semantic graph by the authors) is generated by computing the dot product between two visual features extracted by Faster-RCNN [59]. The dot products are then used to form the values of an adjacency matrix \(A\), as the operation captures the similarity between two vectors: the higher the dot product, the closer the two vectors are. Faster-RCNN extracts a set of \(n\) visual features, where each feature \(x(v)\) is associated to a node \(v\), and the value of the edge between two nodes \(v\) and \(u\) is given by \(A_{u,v}=\sigma\left(x(v)^{T}Mx(u)\right)\), where \(\sigma(\cdot)\) is a non-linear function and \(M\) is a learnt weight matrix. The authors of [69] suggest that generating the graph this way allows for relationships between objects to be discovered in a data-driven manner, rather than relying on a model trained on a dataset such as the Visual Genome [32].

**Image Graphs/\(K\)-Nearest Neighbour Graph.** In their 2021 image captioning work, Dong _et al_. [70] construct an image graph by converting images into a latent feature space, averaging the object vectors obtained by feeding the image into Faster-RCNN [59]. The \(K\) closest images from the training data or search space in terms of \(l_{2}\) distance are then turned into an undirected complete graph, shown in Figure 2i. A similar approach is used by Liu _et al_. [71] with their \(K\)-nearest neighbour graph.

**Topic Graph.** Proposed by Kan _et al_. [69], the topic graph is an undirected graph of nodes representing topics extracted by GPU-DMM [72]. Topics are latent features representing shared knowledge across the entire caption set. Modelling them as a graph, as shown in Figure 2j, with edges computed by taking the dot product of the two nodes, allows the modelling of knowledge represented in the captions.

**Region Adjacency Graph.** Defined in [73], a Region Adjacency Graph uses a superpixel segmentation. Superpixels form the nodes of the graph and edges are added to connect adjacent region pairs. Edges are then weighted to represent how compatible the two adjacent regions are.

**Knowledge Graph.** A knowledge graph, or fact graph, is a graph-based representation of information. Whilst there is no agreed-upon structure for these graphs [74], they typically take the form of triplets. They are used in a wide variety of tasks to provide the information needed to "reason". Hence, knowledge graphs enable the FVQA task.

## III An Overview of Graph Neural Networks

Over the past years a large number of GNN architectures have been introduced in the literature. Wu _et al_. [75] proposed a taxonomy containing four distinct groups: recurrent GNNs, convolutional GNNs, autoencoder GNNs, and spatial-temporal GNNs. The applications discussed in this paper mostly utilise convolutional GNNs; for a comprehensive overview of other architectures, readers are directed to [75]. GNNs, especially traditional architectures such as the Graph Convolutional Network, have a deep grounding in relational inductive biases [27]. They are built on the assumption of homophily, i.e., that connected nodes are similar.

### _Graph Convolutional Networks (GCNs)_

One common convolutional GNN architecture is the Message Passing Neural Network (MPNN) proposed by Gilmer _et al_.
Although this architecture has been shown to be limited [76], it forms a good abstraction of GNNs. Gilmer _et al_. describe MPNNs as being comprised of a message function, an update function, and a readout function. These functions will vary depending on the application of the network, but are learnable, differentiable, and permutation invariant. The message and update functions run for a number of time steps \(T\), passing messages between connected nodes of the graph. The messages are used to update the hidden feature vectors of the nodes, which in turn are used in the readout function.

Fig. 2: A visual comparison of the various graph types used across vision-language tasks. Best viewed in colour.

The messages are defined as \[\bar{m}_{v}^{(t+1)}=\sum_{u\in\mathcal{N}(v)}M_{t}(\bar{h}_{v}^{(t)},\bar{h}_{u}^{(t)},\bar{e}_{v,u})\,, \tag{1}\] where a message for a node at the next time step \(\bar{m}_{v}^{(t+1)}\) is given by combining its current hidden state \(\bar{h}_{v}^{(t)}\) with that of its neighbour \(\bar{h}_{u}^{(t)}\) and any edge feature \(\bar{e}_{v,u}\) in a multilayer perceptron (MLP) \(M_{t}(\cdot)\). Given that a message is an aggregation over all the connected nodes, the summation acts over the nodes connected to \(v\), i.e., \(u\in\mathcal{N}(v)\), the neighbourhood of \(v\). These messages are then used to update the hidden vectors by combining the node's current state with the message in an MLP \(U_{t}\): \[\bar{h}_{v}^{(t+1)}=U_{t}(\bar{h}_{v}^{(t)},\bar{m}_{v}^{(t+1)})\,. \tag{2}\] Once the message passing phase has run for \(T\) time steps, a readout phase is conducted using a readout function \(R(\cdot)\). This stage makes use of an MLP that considers the updated feature vectors of the nodes of the graph to produce a prediction and is defined as \[\hat{y}=R(\{\bar{h}_{v}^{(T)}\,|\,v\in G\})\,. \tag{3}\] In order to make the GCN architecture scale to large graphs, the GraphSAGE [77] architecture changes the message function. Rather than taking messages from the entire neighbourhood of a node, a random sample is used. This reduces the number of messages that require processing, resulting in an architecture that works well on large graphs.

### _Gated Graph Neural Networks_

The core idea behind the Gated Graph Neural Network (GGNN) [78] is to replace the update function from the message passing architecture (Equation 2) with a Gated Recurrent Unit (GRU) [79]. The GRU is a recurrent neural network with update and reset gates that control which data can flow through the network (and be retained) and which data cannot (and is therefore forgotten). \[\bar{h}_{v}^{(t+1)}=GRU\Big(\bar{h}_{v}^{(t)},\sum_{w\in\mathcal{N}(v)}\mathbf{W}\bar{h}_{w}^{(t)}\Big)\,. \tag{4}\] The GGNN also replaces the message function from Equation 1 with a learnable weight matrix. Using the GRU alongside back-propagation through time enables the GGNN to operate on series data. However, due to the recurrent nature of the architecture, it can become unfeasible in terms of memory to run the GGNN on large graphs.
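A minimal sketch of the message passing scheme of Equations 1-3 is given below, with MLP message and update functions and a sum-based readout. Edge features are omitted for brevity and all sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SimpleMPNN(nn.Module):
    """Sketch of an MPNN: MLP message function M, MLP update function U,
    and a sum-based readout R, run for T message passing steps."""

    def __init__(self, d=16, T=3):
        super().__init__()
        self.T = T
        self.M = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU())
        self.U = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU())
        self.R = nn.Linear(d, 1)

    def forward(self, h, A):
        # h: (n, d) node features, A: (n, n) binary adjacency matrix
        n = h.size(0)
        for _ in range(self.T):
            hv = h.unsqueeze(1).expand(-1, n, -1)  # h_v repeated over u
            hu = h.unsqueeze(0).expand(n, -1, -1)  # h_u repeated over v
            # m_v = sum over neighbours u of M(h_v, h_u), masked by A
            msg = (A.unsqueeze(-1) * self.M(torch.cat([hv, hu], dim=-1))).sum(1)
            h = self.U(torch.cat([h, msg], dim=-1))  # h_v = U(h_v, m_v)
        return self.R(h.sum(0))  # graph-level readout

model = SimpleMPNN()
y = model(torch.rand(5, 16), torch.bernoulli(torch.full((5, 5), 0.3)))
```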
### _Graph Attention Networks (GATs)_

Following on from the multi-head attention mechanism of the popular Transformer architecture [26], Graph Attention Networks (GATs) [80] extend the common GCN to include an attention mechanism. Using an attention function, typically modelled by an MLP, the architecture calculates an attention weighting between two nodes. This process is repeated \(K\) times using \(K\) attention heads in parallel. The attention scores are then averaged to give the final weights. The self-attention is computed by a function \(a(\bar{h}_{v}^{(t)},\bar{h}_{w}^{(t)})\) (typically an MLP) that attends to a node and one of its neighbours. Once every node pairing in the graph has its attention computed, the scores are passed through a softmax function to give normalised attention coefficients. This process is then extended to multi-head attention by repeating it across \(K\) different attention heads, each with different initialisation weights. The final node representation is achieved by concatenating or averaging (represented as \(\|\)) the \(K\) attention heads together: \[\bar{h}_{v}^{(t+1)}=\Big\|_{k=1}^{K}\,\sigma\Big(\sum_{w\in\mathcal{N}(v)}\alpha_{v,w}^{(k)}\mathbf{W}^{(k)}\bar{h}_{w}^{(t)}\Big)\,. \tag{5}\]

### _Graph Memory Networks_

Recent years have seen the development of Graph Memory Networks, which can conceptually be thought of as models with an internal and an external memory. When there are multiple graphs overlapping the same spatial information, as in [81], the use of some form of external memory can allow for an aggregation of node updates as the graphs undergo message passing. This essentially allows for features from multiple graphs to be combined in a way that goes beyond a more simplistic pooling operation. In the case of Khademi [81], two graphs are constructed across the same image but may have different nodes. These graphs are updated using a GGNN. An external spatial memory is constructed to aggregate information from across the graphs as they are updated, using a neural network with an attention mechanism. The final state of the spatial memory is used to perform the final task.

### _Modern Graph Neural Network Architectures_

In recent years, the limits of message passing GNNs have become increasingly evident, from their tendency to over-smooth the input features as the depth of the network increases [82] to their unsatisfactory performance in heterophilic settings [83], i.e., when neighbouring nodes in the input graphs are dissimilar. Furthermore, the expressive power of GNNs based on the message passing mechanism has been shown to be bounded by that of the well-known Weisfeiler-Lehman isomorphism test [76], meaning that there are inherent limits to their ability to generate different representations for structurally different input graphs. Motivated by the desire to overcome these issues, researchers have now started looking at alternative models that move away from standard message passing architectures. Efforts in this direction include, among many others, higher-order message passing architectures [84], cell complex networks [85], and networks based on diffusion processes [86, 2, 83]. To the best of our knowledge, the application of these architectures to the 2D image understanding tasks discussed in this paper has not been explored yet. As such, we refer the readers to the referenced papers for detailed information on the respective architectures.

## IV Image Captioning

Image captioning is the challenging task of producing a natural language description of an image. Beyond being an interesting technical challenge, it presents an opportunity to develop accessibility technologies for severely sight impaired (formally 'blind') and sight impaired (formally 'visually impaired') users. Additionally, it has applications in problems ranging from image indexing [87] to surveillance [69].
There are three forms of image captioning techniques: 1) retrieval-based captioning, where a caption is retrieved from a set of existing captions, 2) template-based captioning, where a pre-existing template is filled in using information extracted from the image, and 3) deep learning-based image captioning, where a neural network is tasked with generating a caption from an input image. We propose to refine this taxonomy to differentiate between GNN-based approaches and more traditional deep learning powered image captioning. The following section details the GNN-based approaches to image captioning, of which there have been a number in recent years. Figure 3 illustrates the structure of a generic GNN-based image captioning architecture. GNN-based approaches to image captioning all follow the traditional encoder-decoder approach common in deep learning image captioning techniques. Images first undergo object detection, the output of which is used to create an encoding. These encodings are then decoded, traditionally with a long short-term memory network (LSTM), into a caption. Through incorporating GNNs, researchers have been able to enhance the encoded image representation by incorporating spatial and semantic information into the embeddings. As the task of image captioning has developed over time, so have the evaluation metrics used to assess the performance of proposed architectures. Originally, image captioning relied heavily on machine translation evaluation techniques such as BLEU [88], ROUGE [89], and METEOR [90], as no image captioning specific metric existed. However, this changed with the introduction of both CIDEr [91] and SPICE [67]. The performance metrics are detailed in Table II. The first architecture to use a GNN to improve image captioning was that of Yao _et al._ [25]. In their work, they propose the use of a GCN to improve the feature embeddings of objects in an image. They start by applying a Faster R-CNN object detector [59] to the image in order to extract feature vectors representing objects. These feature vectors are then used to create two graphs: a bidirectional spatial graph encoding spatial relationships between objects and a directed semantic graph which encodes the semantic relationships between objects. A GCN is then applied to both graphs before the enhanced features of the graphs undergo mean pooling. They are then decoded by an LSTM into a caption. As the whole graphs are used to inform the caption generation, dense graphs may lead to redundant or low-value information being included in the caption. Zhong _et al._ [60] focus solely on a semantic scene graph and address the problem of which nodes and edges to include in the final caption. This is challenging for scenes containing many detected objects, as the semantic scene graphs can become relatively large. The problem is addressed by decomposing the semantic graph into various subgraphs that cover various parts of the image. These are then scored using a function trained to determine how closely a subgraph resembles the ground truth caption. This enables the selection of subgraphs from the main scene graph that will go on to generate useful captions. The starting semantic graph is generated by MotifNet [92] (a common off-the-shelf semantic graph generator). Zhong _et al._ [60] make use of a GCN to aggregate neighbourhood information of the proposed sub-graph. Unlike Yao _et al._, the authors of [60] use only a semantic graph.
They focus on the link between the language and the semantic graph and do not make use of spatial information. Another work that makes use of the semantic graph is that of Song _et al._ [93]. They investigate how both implicit and explicit features can be utilised to generate accurate and high quality image captions. The authors define implicit features as representing global interactions between objects and explicit features as those defined on a semantic graph. For the latter, rather than using multiple graphs, [93] only uses a single semantic graph. However, rather than predicting the graph directly via MotifNet [92] as in other works [60], its construction starts with a spatial graph. After object detection, a fully connected directed graph is generated between the objects (with nodes being represented by the object feature vectors). The edges of this graph are then whittled away in a two-step process. Firstly, edges between objects that have zero overlap (measured as intersection over union) and an \(l_{2}\) distance less than the longest side of either object's bounding box are removed. The remaining edges are used to determine which object pairs have their relationship detected by MotifNet [92]. Those relationships with a high enough probability are kept whilst the others are removed. This results in a semantic graph that indirectly contains spatial information, going beyond the semantic graph of [60]. The final graph is then processed by a GGNN, the output of which is a representation of the explicit features. The implicit features are generated by a Transformer encoder [26]. The entire image, alongside the regions within the detected object bounding boxes, is encoded. These features are then used alongside the explicit features as input to an LSTM language decoder that is used to generate the final caption. The work demonstrates the successes possible when using GNNs alongside Transformers, using their different inductive biases to best model different interactions (see Table III). However, both the implicit and explicit relationships remain local to a single image. Further work could consider how often certain relationships occur over the entire dataset. Guo _et al._ [61] took a very similar approach to Yao _et al._ [25] with their work, utilising a dual graph architecture containing a semantic and a spatial graph. However, they make the observation that images can be represented by a collection of Visual Semantic Unit (VSU) vectors, which represent an object, its attributes, and its relationships. These VSUs are combined into a semantic graph that models relationships as nodes rather than edge features and adds attribute nodes connected to objects, thus making it multipartite. Doing so gives the graph a closer resemblance to the captions it will go on to generate, as objects map to nouns, relationships to verbs and prepositions, and finally attributes to adjectives. The authors argue that this approach allows the model to explicitly learn relationships and model them directly. As argued in [61], a scene graph of an image has a close mapping to the image caption. Nodes representing objects map directly to nouns, edge features (in the case of [25]) or nodes (in the case of [61]) that encode relationships map clearly to prepositions, and nodes representing attributes map to adjectives.
This strong relationship between the graph structure generated by the encoder and the final sentence outputted by the decoder further supports the use of the image-graph-sentence architecture used by many image captioning systems. Zhou _et al_. [62] use an LSTM alongside a Faster-RCNN [59] based image feature extractor, with the addition of a visual self-attention mechanism. The authors make use of a multipartite semantic scene graph, following the style of [61, 24]. Specifically, they propose to use three GCNs to create context aware feature vectors for each of the object, attribute, and relationship nodes. The resulting context aware nodes undergo fusion with the self-attention maps, enabling the model to control the granularity of captions. Finally, the authors test two methods of training an LSTM-based language generator: the first being a traditional supervised approach with cross entropy loss, the second being a reinforcement learning-based approach that uses CIDEr [91] as the reward function. By utilising context dependent GCNs in their architecture to specifically account for the object, attribute, and relationship nodes, SASG is able to achieve competitive results when compared with similar models, as shown in Table III.

Fig. 3: An abstract overview of GNN-based image captioning architectures discussed in this section. Most architectures extract image features and use them to construct at least one graph to represent the image. Some papers [70, 69] build higher level graphs at an image level rather than an object level. A GNN is then applied to these graphs and the resulting features are fed into a language generator that creates an appropriate caption for the image. Traditionally this was an LSTM, but more recently the trend is to use Transformers [26]. Best viewed in colour.

SGAE (Scene Graph Auto-Encoder) is another paper that makes use of a multipartite semantic graph. In the paper, Yang _et al._ [44] take a caption and convert it into a multipartite textual semantic graph using a similar process to that of the SPICE metric [67] (detailed further in Table II). The nodes of the graph are converted to word embeddings which are then converted into feature embeddings by way of a GCN, with each node type being given its own GCN with independent parameters. These feature embeddings are then combined with a dictionary to enable them to be re-encoded before they are used to generate a sentence. The dictionary weights are updated via back-propagating the cross entropy loss from the sentence regeneration. By including a dictionary, the authors are able to learn inductive biases from the captions. This allows generated captions to go from "man on motorcycle" to "man riding motorcycle". When given an image, SGAE generates a multipartite visual semantic graph, similar to [61, 24], using Faster-RCNN [59] and MotifNet [92]. These visual features are then combined with their word embeddings through a multi-modal GCN and then re-encoded using the previously learnt dictionary. These features are then used to generate the final sentence. Rather than utilising multiple graphs, Wang _et al_. [94] instead use a single fully connected spatial graph with an attention mechanism to learn the relationships between different regions. This graph is formed of nodes that represent the spatial information of regions within the image. Once formed, it is passed through a GGNN [78] to learn the weights associated with the edges.
Once learnt, these edge weights correspond to the probability of a relationship existing between the two nodes. The work of Yao _et al_. [68], following on from their GCN-LSTM [25], presents an image encoder that makes use of a novel HIerarchy Parsing (HIP) architecture. Rather than encoding the image in a traditional scene graph structure like most contemporary image captioning papers [25, 60, 70], Yao _et al_. [68] take the novel approach of using a tree structure (discussed in Section II-C), exploiting the hierarchical nature of objects in images. Unlike their previous work which focused on the semantic and spatial relationships, this work is about the hierarchical structure within an image. This hierarchical relationship can be viewed as a combination of both semantic and spatial information - therefore merging the two graphs used previously. The feature vectors representing the vertices on the tree are then improved through the use of Tree-LSTM [95]. As trees are a special case graph, the authors also demonstrate that their previous work GCN-LSTM [25] can be used to create enriched embeddings from the tree before decoding it with an LSTM. They demonstrate that the inclusion of the hierarchy parsing improves scores on all benchmarks when compared with GCN-LSTM [25], which does not use hierarchical relationships. The work of He _et al_. [46] builds on the idea of hierarchical spatial relationships proposed by Yao _et al_.[68]. However, rather than use a tree to represent these relationships, they use a graph with three relationship types: parent, neighbour, and child. They then propose a modification to the popular Transformer layer to better adapt it to the task of image processing. After detecting objects using Faster-RCNN [59], a hierarchical spatial relationship graph is constructed. Three adjacency matrices are then built from this graph to model the three relationship types (\(\Omega_{p},\Omega_{n},\Omega_{c}\) respectively). The authors modify the Transformer layer so that rather than computing self-attention across the whole spatial graph, there is a sub-layer for each relationship type. Each sub-layer processes the query \(Q\) with its own key \(K_{i}\) and value \(V_{i}\) with the modified attention mechanism: \[Attention(Q,K_{i},V_{i})=\Omega_{i}\odot Softmax\left(\frac{QK_{i}^{T}}{\sqrt{ d}}\right)V_{i} \tag{6}\] where \(\odot\) is the Hadamard product and \(i\) refers to the relationship type \(i\in\{parent,neighbour,child\}\). Using the Hadamard product essentially zeroes out the attention between regions whose relationship is not being processed by that sub-layer. The resulting encodings are decoded by an LSTM to produce captions.
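Equation (6) translates almost directly into code: the relationship-type adjacency matrix zeroes out attention between regions that do not stand in that relation. The tensor shapes below are assumptions for illustration.

```python
import torch

def relation_attention(Q, K_i, V_i, omega_i):
    """Eq. (6): Omega_i ⊙ softmax(Q K_i^T / sqrt(d)) V_i, where omega_i is the
    0/1 adjacency matrix of one relationship type (parent, neighbour or
    child); entries for regions not in that relation are zeroed out."""
    d = Q.size(-1)
    attn = torch.softmax(Q @ K_i.transpose(-2, -1) / d ** 0.5, dim=-1)
    return (omega_i * attn) @ V_i

# Example: 5 regions with 64-dimensional queries/keys/values.
Q = K = V = torch.randn(5, 64)
omega_parent = torch.randint(0, 2, (5, 5)).float()  # hypothetical parent relation
out = relation_attention(Q, K, V, omega_parent)
```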
Like [46], the \(\mathcal{M}2\) meshed memory Transformer proposed by Cornia _et al_. [45] also makes use of the increasingly popular Transformer architecture [26]. Unlike other papers [25, 68, 44, 46] which make use of some predefined structure on extracted image features (spatial graph, semantic graph, etc), \(\mathcal{M}2\) uses stacks of self-attention layers across the set of all the image regions. The standard keys and values from the Transformer are edited to include the concatenation of learnable persistent memory vectors. These allow the architecture to encode a-priori knowledge, such as that 'eggs' and 'toast' make up the concept 'breakfast'. When decoding the output of the encoder, a stack of self-attention layers is also used. Each decoder layer is connected via a gated cross attention mechanism to each of the encoder layers, giving way to the "meshed" concept of the paper. The output of the decoder block is used to generate the final output caption. The authors of [69] propose using a novel similarity graph (referred to as a semantic graph in the paper) alongside a topic graph. Built on dot product similarity, the graphs are produced without the requirement of graph extraction models such as MotifNet [92]. Rather, a set of vertices \(V=\{v_{i}\in\mathbb{R}^{d_{obj}}\}_{i=1}^{n_{obj}}\) is extracted as ResNet features from a Faster-RCNN object detector [59]. Edges in the adjacency matrix are then populated using the dot product between the feature vectors in \(V\) with \(a_{ij}=\sigma(v_{i}^{T}Mv_{j})\). Once both graphs have been constructed, a GCN is applied to both in order to enrich the nodes with local context. A graph self-attention mechanism is then applied to ensure nodes are not just accounting for their immediate neighbours. The improved graphs are then decoded via an LSTM to generate captions. Following [25], Dong _et al_. [70] use a spatial graph to show a directed relationship between detected objects within the input image. Locally, object features are extracted by a CNN to associate a vector to each vertex of the spatial graph. This process is completed for each image in the dataset. In addition to this graph, the authors introduce an image level graph. Specifically, each image is represented by a feature vector that is the average of its associated set of object feature vectors. The image graph for a corresponding image is formed as a fully connected undirected graph of the \(K\) images whose \(l_{2}\) distance is the closest to the input image. Both the local spatial graph and the more global image level graph are processed by GCNs to create richer embeddings that can be used for caption generation. This approach is shown to work extremely well, with Dual-GCN outperforming comparable models in the BLEU, METEOR, and ROUGE metrics (see Table III).
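Dual-GCN's image-level graph can be sketched directly from the description above: each image is mean-pooled over its object features, and a fully connected undirected graph is built over the \(K\) images nearest in \(l_{2}\) distance. Whether the query image itself belongs to the graph is an assumption; here it is excluded.

```python
import numpy as np

def image_level_graph(object_feats, query_idx, K=5):
    """object_feats: list of (n_i, d) arrays, one per image in the dataset.
    Returns the indices of the K images nearest to the query and the
    adjacency matrix of the fully connected graph over them."""
    img_vecs = np.stack([f.mean(axis=0) for f in object_feats])  # (num_images, d)
    dists = np.linalg.norm(img_vecs - img_vecs[query_idx], axis=1)
    nearest = np.argsort(dists)[1:K + 1]        # skip the query image itself
    adj = np.ones((K, K)) - np.eye(K)           # fully connected, no self-loops
    return nearest, adj
```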
## V Visual Question Answering VQA is the challenging task of designing and implementing models that are able to answer natural language questions about a given image. These answers can range from simple yes/no to more natural, longer form answers. Questions can also vary in complexity. As the field has developed, more specific VQA tasks have emerged. The first to emerge was FVQA, sometimes known as Knowledge Visual Question Answering (KVQA), where external knowledge sources are required to answer the questions. Another task that has emerged is Textual VQA, where the models must understand the text within the scene in order to generate answers. All three tasks have their own datasets [35, 32, 38, 36, 39] and have an active community developing solutions [35, 65, 81]. ### _VQA_ Originally proposed in [35], VQA has developed beyond simple 'yes' or 'no' answers to richer natural language answers. A common thread of work is to leverage the multimodal aspect of VQA and utilise both visual features from the input image and textual features from the question [65, 81, 66]. One of the first works in VQA to make use of GNNs was that of Teney _et al._[65]. Their work is based on the clip art focused dataset [35]. Their model takes a visual scene graph as input alongside a question. The question is then parsed into a textual scene graph using the Stanford Dependency Parser [64]. These scene graphs are then processed independently using a GGNN [78] modified to incorporate an attention mechanism. The original feature vectors are then combined using an attention mechanism that reflects how relevant two nodes from the scene graphs are to one another. Khademi [81] takes a multimodal approach to VQA by using dense region captions alongside extracted visual features. Given a query and input image, the model will first extract visual regions using a Faster-RCNN object detector and generate a set of features using ResNet, encoding the bounding box information into these features. An off-the-shelf dense region captioning model is also used to create a set of captions and associated bounding boxes. The captions and bounding box information are encoded using a GRU. Each set of features is turned into a graph (visual and textual respectively) with outgoing and incoming edges existing between features if the Euclidean distance between the centres of the normalised bounding boxes is less than \(\gamma=0.5\). Both graphs are processed by a GGNN with updated features being used to update an external spatial memory unit - thus making the network a Graph Memory Network (described in Section III-D). After propagating the node features, the final state of the external spatial memory network is turned into a complete graph using each location as a node. This final graph is processed by a GGNN to produce the final answer. The multimodal approach presented in this paper is shown to be highly effective, with the proposed MN-GMN architecture [81] performing favourably against comparable models in benchmarks (Table IV). MORN [66] is another work that focuses on capturing the complex multi-modal relationships between the question and image. Like many recent works in Deep Learning, it adopts the Transformer [26] architecture. Built with three main components, the model first creates a visual graph of the image, starting from a fully connected graph of detected objects; a GCN is used to aggregate the visual features. The second part of the model creates a textual scene graph from the input question. Both graphs are merged together by the final component of the model, a relational multi-modal Transformer, which is used to align the representations. Sharma _et al._[96] follow the vision-language multi-modal approach but diverge from the use of a textual semantic graph and instead opt to use word embeddings. The authors utilise a novel GGNN-based architecture that processes an undirected complete graph of nodes representing visual features. Nodes are weighted with the probability that a relationship occurs between them. In line with other VQA work [81], the question is capped to \(14\) words, with each one being converted into GloVe embeddings [97]. Questions with fewer than \(14\) words are padded with zero-vectors. A question embedding is then generated using a GRU applied to the word embeddings. An LSTM-based attention mechanism considers both the question vector and the visual representations making up the nodes of the scene graph. This module considers previously attended areas when exploring new visual features. Finally, an LSTM-based language generator is used to generate the final answer.
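The question pipeline shared by [96] and other VQA work above reduces to a few lines: cap the question at 14 words, look up GloVe vectors, zero-pad, and summarise with a GRU. The dimensions and the `glove` lookup table (word to vector) are assumptions in this sketch.

```python
import torch
import torch.nn as nn

MAX_WORDS, GLOVE_DIM, HIDDEN = 14, 300, 512   # illustrative sizes

gru = nn.GRU(GLOVE_DIM, HIDDEN, batch_first=True)

def embed_question(words, glove):
    """Cap/pad the question to MAX_WORDS GloVe vectors, encode with a GRU."""
    vecs = [glove.get(w, torch.zeros(GLOVE_DIM)) for w in words[:MAX_WORDS]]
    vecs += [torch.zeros(GLOVE_DIM)] * (MAX_WORDS - len(vecs))  # zero padding
    seq = torch.stack(vecs).unsqueeze(0)       # (1, MAX_WORDS, GLOVE_DIM)
    _, h = gru(seq)                            # final hidden state
    return h[-1].squeeze(0)                    # (HIDDEN,) question embedding
```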
Another work to forgo using a textual scene graph, Zhang _et al._[41] make use of word vectors to embed information about the image into a semantic graph. Using a GNN, they are able to create enriched feature vectors representing the nodes, edges, and an image feature vector representing the global state. They include the question in the image feature by averaging the word vectors, which enables the GNN to reason about the image. Whilst both [96] and [41] yield good results, by only using word or sentence level embeddings and not using a textual scene graph, they fail to model relationships in the textual domain. This removes the ability of the models to reason in that domain alone. Both Li _et al._[98] and Nuthalapati _et al._[99] take a different route to the established multi-modal approach and instead use different forms of visual information. Li _et al._[98] take inspiration from [25] and make use of both semantic and spatial graphs to represent the image. In addition to these explicit graphs, they also introduce an implicit graph, i.e., a fully connected graph between the detected objects with edge weights set by a GAT. The relation-aware visual features are then combined with the question vector using multi-modal fusion. The fused output is then used to predict an answer via an MLP. Nuthalapati _et al._[99] use a dual scene graph approach, using both visual and semantic graphs. These graphs are merged into a single graph embedding using a novel GAT architecture [80] that is able to attend to edges as well as nodes. The graphs are enriched with negative entities that appear in the question but not the graph. Pruning then takes place to remove nodes and edges that are \(K\) hops away from features mentioned in the question. A decoder is then used to produce an answer to the inputted question. ### _Knowledge/Fact-Based VQA_ Knowledge or Fact-Based VQA is the challenging task of making use of external knowledge given in knowledge graphs such as WikiData [53] to answer questions about an image. The major challenge of this task is to create a model that can make use of all three mediums (image, question, and fact) to generate an appropriate answer. The MUCKO architecture [100] shown in Figure 4 (reused with permission) is a representative example of models that approach FVQA. In [101], the authors present a novel GCN-based architecture for FVQA. Alongside the question and answer sets, a knowledge base of facts is also included, \(KB=\{f_{1},f_{2},...,f_{|KB|}\}\). Each fact \(f=(x,r,y)\) is formed of a visual concept grounded in the image (\(x\)), an attribute or phrase (\(y\)), and a relation \(r\) linking the two. Relationships come from a predefined set of 13 different ways a concept and attribute can be related. Their work first reduces the search space to the 100 facts most likely to contain the correct answer by using GloVe embeddings [97] of words in the question and facts, before further reducing it to the most relevant facts \(f_{rel}\). These most relevant facts are turned into a graph where all the visual concepts and attributes from \(f_{rel}\) form the nodes. An edge joins two nodes if they are related by a fact in \(f_{rel}\). A GCN is then used to 'reason' over the graph to predict the final answer. Using a message passing architecture, the authors are able to update the feature representations of the nodes, which can then be fed into an MLP that predicts a binary label corresponding to whether or not the entity contains the answer.
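A minimal sketch of the fact-graph construction in [101]: nodes are the visual concepts and attributes appearing in the retained facts, joined by an edge whenever some fact relates them. The triple layout follows the \(f=(x,r,y)\) definition above; the example facts are hypothetical.

```python
def build_fact_graph(relevant_facts):
    """relevant_facts: iterable of (concept, relation, attribute) triples f_rel.
    Returns the node set and an undirected edge set over concepts/attributes."""
    nodes, edges = set(), set()
    for concept, relation, attribute in relevant_facts:
        nodes.update((concept, attribute))
        edges.add(frozenset((concept, attribute)))  # edge if related by a fact
    return nodes, edges

# Example facts in the (x, r, y) form described above.
facts = [("cat", "CapableOf", "climbing"), ("cat", "IsA", "pet")]
nodes, edges = build_fact_graph(facts)
```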
Zhu _et al_. [100] use a multi-modal graph approach to representing images with a visual, semantic, and knowledge graph. After graph construction, GCNs are applied to each modality to create richer feature embeddings. These embeddings are then processed in a cross-modal manner. Visual-Fact aggregation and Semantic-Fact aggregation operations produce complementary information which is then used with a Fact-Fact convolutional layer. This final layer takes into account all three modalities and produces an answer that considers the global context. The authors continue their work in [58] by exchanging the cross-modal mechanism for a novel GRUC (Graph-based Read, Update, and Control) mechanism. The GRUC operates in a parallel pipeline. One pipeline starts with a concept from the knowledge graph and recurrently incorporates knowledge from the visual graph. Another starts with the same knowledge graph concept but incorporates semantic knowledge. At the end of the recurrent operations, the outputs of the two pipelines are fused together with the question and original fact node. This fused feature is then used to predict the final answer. The change made to the cross-modal attention mechanism yields significant improvements in the FVQA benchmark when compared with MUCKO [100]. Liu _et al_. [102] also adopt a multi-modal approach, but use only the semantic and knowledge modalities. They propose a dual process system for FVQA that is based on the Dual-Process Theory from Cognitive Science [103]. Their approach utilises a BERT encoder to represent the input question and a Faster-RCNN [59] based feature extractor to represent the image features. The first of the two systems, based on the Transformer architecture [26], joins these two representations into a single multi-modal representation. The second system then develops a semantic graph by turning dense region captions into textual scene graphs (using SPICE), as well as a knowledge graph generated using the question input. A message passing GNN is then used to identify the important nodes and aggregate information between them using an attention weighting. A joint representation for each knowledge graph node is then learned by combining the whole semantic graph with the node according to an attention weighting. This joint representation is then used to predict the final answer. Moving away from the multi-modal approach, SGEITL [104] makes a semantic graph of the image and then follows Yang _et al_. [40] and introduces skip edges to the graph, essentially making it a complete graph. This graph then goes through a multi-hop graph Transformer, which masks the attention between nodes based on their distance, ensuring that only nearby nodes are attended to. Through their work, they demonstrate that structural information is useful when approaching the complex VQA task. With their TRiG model, Gao _et al_. [105] advocate an alternative approach to FVQA: rather than generating the answer in some multi-modal space, they propose to use the textual space. They argue that reasoning in a multi-modal space prevents further fusion with additional outside knowledge, and that as most of this data is in textual form, it makes sense to work in that domain. TRiG therefore has three components. It first converts the image into a caption using an off-the-shelf image captioning tool. The model then finds the top \(K\) relevant facts from a knowledge base of Wikipedia articles before using a T5-backboned Transformer [106] to fuse and decode the \(<\)question, visual context, knowledge\(>\) triplet into an answer.
### _Text VQA_ TextVQA is the sub-task of VQA where the answers require the model to be able to read text that appears in images. Typically this involves tasks like reading brand names from buildings or the titles of book covers. This information can then be combined with an external knowledge base, enabling the models to answer questions such as "Is the shop an American brand?" by reading the shop name and searching for it in a knowledge base. Gao _et al._[107] focus on the in-image text and how it can be better leveraged to improve VQA. They use a novel multi-modal graph made up of fully connected visual, semantic, and numeric subgraphs. Each subgraph represents a unique modality that can be found in an image: visual entities (represented by image feature extractors), the semantic meaning of discovered text (initially discovered by OCR), along with numeric values and their semantic meaning. The paper proposes a model that aggregates information across modalities using a relevance score. Once the three modalities have been aggregated, an attention mechanism is deployed to help predict the final answer. The focus on different modalities proves a useful approach, with the model performing favourably in benchmarks (see Table VI). Another work that makes use of multi-modal graphs is Liang _et al._[108]. Their work uses both image features and scene text features (extracted by OCR) to generate a spatial relationship graph similar to that of [25]. The graph undergoes multi-head attention before being processed by a GNN that makes use of the attention weights. Multi-modal fusion is then used to join the node features with the question embedding and positional features. The output of this fusion operation is then used to predict a final answer. ## VI Image Retrieval Image retrieval is the task of finding images from a database given some query. These queries can take many forms, including a similar image, a natural language query, or even a sketch. A common approach is to represent the database images as points in some space, where similar images are those with a minimal distance to the query. When this space is represented using graphs, GNNs become valuable for sharing features and acquiring more global context for the features. Johnson _et al_. [24] show that a scene graph can be used as the input of an image retrieval system. By allowing end users to create a scene graph where nodes represent objects, attributes, and relationships, they are able to return appropriate images via a scene graph grounding process. This involves matching each scene graph object node with a bounding box predicted by an object detector, and is represented probabilistically using a conditional random field (CRF). The advantage of using scene graphs as search queries over natural language is that they scale well in terms of complexity. Once a basic scene graph has been constructed, it is straightforward for it to be extended and made more complex by adding additional nodes. Another advantage is that it reduces the operations required to map the search query to the image.

Fig. 4: The MUCKO architecture [100] (reused with permission). Best viewed in colour.

Following on from [24], Yoon _et al._ propose IRSGS (Image Retrieval with Scene Graph Similarity) [56], which makes use of a semantic graph, referred to as a scene graph in the paper. Given a query image, the model will generate a semantic graph and compare its similarity with graphs of images in the database.
This graph comparison is achieved by taking the inner product of graph embeddings generated by a GNN (either GCN [109] or GIN [110]). One key contribution of the paper is the concept of Surrogate Relevance, which is the similarity between the captions of the images being compared. Surrogate Relevance is calculated using the inner product between Sentence-BERT embeddings of the captions. This measure is used as the training signal of the model to hone the feature embeddings generated by the GNN. The graph-to-graph comparison behind the model allows this work to better scale to large image databases when compared to [24]. The use of Surrogate Relevance allows the work to be potentially expanded to match against user queries if they are in the style of the captions used to power the relevance measure. Using a \(K\)-nearest neighbour graph of images represented as feature embeddings, Liu _et al_. [71] propose using a GCN alongside a novel loss function based on image similarity. The feature embeddings are enhanced to account for a global context across the whole image database using a GCN. Similarity between images is calculated by taking the inner product of the feature embeddings. The higher the similarity, the better the retrieval candidate. The authors' novel loss function is designed to move similar images closer together in the embedding space and dissimilar images further apart. Compared with [56], by using the inner product, the similarity measure is far more deterministic. However, unlike [56], it cannot be expanded to work alongside text-based image retrieval with a user query. Zhang _et al_. [111] also use a \(K\)-nearest neighbour graph, but focus on improving the re-ranking process in content based image retrieval. A GNN is applied to aggregate features created from a modified adjacency matrix. Using a GNN allows the re-ranking process to de-emphasise nodes with a low confidence score. Rather than use a pure \(K\)-nearest neighbour graph, the DGCQ model [112] is based on vector quantisation, a process from Information Theory for reducing the cardinality of a vector space. It can essentially be thought of as a many-to-one clustering technique where vectors in one space \(x\in\mathbb{R}^{d}\) are mapped to a set of code words (\(c_{i}\)) that make up a code book \(q(x)\in\mathcal{C}=\{c_{i};i\in\mathcal{I}\}\), where \(\mathcal{I}=1,\dots,(k-1)\). By using vector quantisation, the model learns code words that can be combined with image features to form a landmark graph. This graph is based on the similarity graph except it also has nodes learned through the quantisation process. Once the landmark graph has been constructed, a GCN is used to propagate features with the objective of moving similar images closer together in the feature space. The use of vector quantisation allows the landmark graph to exist in a lower dimensional space, reducing computation when computing which images from the graph to return as candidates.
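The retrieval step shared by [56] and [71] above reduces to ranking database embeddings by their inner product with the query; a minimal sketch, assuming the embeddings have already been refined by a GCN:

```python
import numpy as np

def retrieve(query_emb, db_embs, k=10):
    """Rank database images by inner-product similarity with the query.
    query_emb: (d,) embedding; db_embs: (n, d) matrix of database embeddings."""
    scores = db_embs @ query_emb          # higher score = better candidate
    return np.argsort(-scores)[:k]        # indices of the top-k images
```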
The authors of [57] move to adopt a multi-modal approach. They use GraphSAGE [77] to effectively learn multi-modal node embeddings containing visual and conceptual information from the connections in the graph. The distance between connected nodes is reduced, whilst the distance between disconnected nodes is increased. By using graph nodes that represent images as well as nodes representing metadata tags, their model is able to provide content-based image retrieval as well as tag prediction. At inference time, images shown to the model can be attached to the graph through their \(K\) nearest images, attached to relevant tags, or both. Unlike previous works [71, 56, 24], Misraa _et al_. [57] make use of multi-modal embeddings in the graph nodes. Schuster _et al_. [63] continue the work of Johnson _et al_. [24] by creating a natural language parser that converts a query into a scene graph that can be processed by their work. This allows them to go beyond content-based image retrieval and move into text-based image retrieval. Their parser works by creating a dependency tree using the Stanford Dependency Parser [64] and then modifying the tree. They first execute a quantification modifier that ensures nouns are the head of the phrase. This is followed by pronoun resolution to make the relationship between two objects more explicit. Finally, plural nouns are processed. This involves copying noun instances when numeric modifiers are given. This textual scene graph is then mapped to images following [24]. Cui _et al_. [55] also tackle text-based image retrieval. They present work that makes use of a GCN to provide cross-modal reasoning on visual and textual information. Input features are split into channels which form a complete graph and undergo graph convolution. Once the textual and visual features are projected into a common space, they have their distances measured using the cosine similarity. These similarity scores are then stored in a matrix representing the similarities between visual and textual inputs. Zhang _et al_. [113] tackle the challenging task of Composing Text and Image to Image Retrieval, where given a reference image and modification query the image retrieval system must find an image similar to the reference that contains the modifications outlined in the query. The principal challenge of this emerging task is its cross-modal nature. The authors tackle this challenge by first generating a spatial graph of the reference image and a textual feature of the modification query. These features are then concatenated before the graph is processed by a GAT whose attention mechanism has been altered to account for the directionality of the graph and the spatial data it encodes. A collection of GRUs that form a Global Semantic Reasoning (GSR) unit are then used to create the final embedding for the reference image. The same process is used on the target image but without the concatenation of the textual feature. A cross-modal loss function and adversarial loss function are combined to ensure that the features outputted by the GSR of the same category are moved closer together. Chaudhuri _et al_. [73] adopt a Siamese-based network architecture where two similar inputs go into two separate networks that share weights. This network architecture typically uses contrastive loss or triplet loss to ensure the outputs of these networks are similar. The authors use a novel Siamese-GCN on a region adjacency graph that is formed by connecting adjacent segmented regions and weighting the edges to account for the distance and angle between centroids of the regions. They apply their technique to high resolution remote sensing images for content-based image retrieval. By using a Siamese-GCN with contrastive loss, the authors are able to learn an embedding that brings similar images together and forces dissimilar images apart.
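A minimal sketch of the contrastive objective behind such Siamese designs: embeddings from the two weight-sharing branches are pulled together for matching pairs and pushed at least a margin apart otherwise. The margin value is an assumption.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z1, z2, is_same, margin=1.0):
    """z1, z2: (batch, d) outputs of the two weight-sharing branches;
    is_same: (batch,) float tensor, 1 for similar pairs, 0 for dissimilar."""
    d = F.pairwise_distance(z1, z2)
    pos = is_same * d.pow(2)                         # pull similar pairs together
    neg = (1 - is_same) * F.relu(margin - d).pow(2)  # push dissimilar pairs apart
    return (pos + neg).mean()
```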
This work is then followed up by the authors in [114], where they add a range of attention mechanisms. They implement both node-level and edge-level attention mechanisms (in a similar style to GAT [80]). These attention mechanisms are then incorporated into the Siamese-GCN to yield improvements over their previous work. Another work to incorporate a Siamese network design is that of Zhang _et al._[115]. They use a three part network design to perform zero-shot sketch-based image retrieval, with a Siamese-based encoding network that creates features of the image and associated sketch using ResNet50. These features are then concatenated together to create node features. The similarity between nodes is calculated using a metric function modelled by an MLP, and this operation is used to populate the adjacency matrix of a similarity graph. A GCN is then applied to the similarity graph to create fusion embeddings of sketch-image pairs. Rather than use an MLP to reconstruct the semantic information from the GCN embeddings, the authors chose to use a Conditional Variational Autoencoder [116]. Doing so enables the model to generate semantic information for sketches of unseen classes, adding the zero-shot component of the model. ## VII Discussion and Conclusion In this section, we draw upon the views of Battaglia _et al._[27], and discuss how the popular Transformer [26] can be viewed through the lens of GNNs. We then discuss how its dependence on consistent structure may pose challenges should image generation techniques be applied to create new training data for image captioning. The section concludes with a final summary of the paper and an overview of the challenges and future research directions that lie ahead for graph-based 2D image understanding. ### _Why GNNs When We Have Transformers?_ Recent years have seen the rapid rise in popularity of the Transformer architecture [26]. Originally proposed in the Natural Language Processing domain, it was quickly applied as a generalised encoder in computer vision tasks [46]. Further work then expanded the architecture so that it can process images directly [117, 118], allowing it to operate as a backbone for common vision tasks. The wide range of applications the architecture can be applied to has led to it dominating much of deep learning in recent years. There has been some effort by the community to unify the attention-based approach with GNNs. Battaglia _et al._[27] propose a more generic Graph Network into which both Transformers and GNNs fall. They present a viewpoint where Transformers can be viewed as a neural architecture operating on a complete graph. Viewing GNNs and Transformers as Graph Networks shows that they share a number of similarities. Both architectures take a set of values and decide how much different values should be considered when transforming them to update the values, with GNNs ignoring nodes that are not connected and Transformers scaling the importance of an input. It is worth noting that if the graph being processed by a GNN is a complete graph, the graph network will allow all nodes to have their messages propagated to the one being updated. Therefore, it is possible to view the Transformer as a special case GNN operating on a complete graph.
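This correspondence can be made concrete in a few lines: attention-weighted message passing that masks its scores with an adjacency matrix reduces to scaled dot-product self-attention when the adjacency is that of a complete graph. The sketch is simplified, dropping the learned query/key/value projections of a full Transformer.

```python
import torch

def masked_message_passing(X, A):
    """One round of attention-weighted message passing over node features
    X (n, d); A (n, n) is the adjacency matrix. With A all ones (a complete
    graph with self-loops) this is scaled dot-product self-attention;
    a sparser A restricts messages to graph neighbours."""
    scores = X @ X.t() / X.size(-1) ** 0.5
    scores = scores.masked_fill(A == 0, float("-inf"))
    return torch.softmax(scores, dim=-1) @ X

X = torch.randn(6, 32)            # 6 nodes with 32-dim features
A_complete = torch.ones(6, 6)     # Transformer-style: attend to everything
A_sparse = torch.eye(6)           # GNN-style: only the permitted edges
```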
While GNNs use the read module to take advantage of an underlying structure, the Transformer learns one based on the task. By applying a Transformer to a task, a graph structure is being learnt from scratch. Meanwhile, there are plenty of graph structures that appear naturally within vision-language tasks. This multitude of graph types allows for different structures to be taken for the image, from the semantic structure of an image to the hierarchical structure of the image with regards to the entire training set. Graphs appear naturally in the language component of the tasks as well, with sentence dependency trees being closely aligned to semantic scene graphs (when the scene graph is made multipartite as in the case of [61]). When clear graph representations of data exist, they should be utilised rather than ignored in favour of learning a graph structure using a more general purpose architecture. Utilising existing graph structures enables a Graph Network with the appropriate inductive biases to be deployed. It also results in fewer computations, as messages are not being passed between all possible node connections. When it is possible to utilise multiple graphs, it is advantageous to do so when compared to using a single graph. As shown with image captioning (Table III), architectures that only use a single graph type perform sub-optimally compared to their multigraph counterparts. ARL [94], Sub-GC [60], and Topic [69] all use a single graph (spatial, semantic, and similarity respectively) and all three suffer in benchmarks. Whilst Topic performs well in BLEU, METEOR, and ROUGE, when evaluated using metrics designed specifically for image captioning (SPICE and CIDEr) its performance falters against comparable models. This theme of multigraph approaches performing more favourably is also found across the VQA, FVQA, and text-VQA tasks, with multigraph approaches outperforming their single graph counterparts. ### _Latent Diffusion and the Future of Image Captioning_ Currently, image captioning techniques are constrained by their training data. As popular as COCO is within the Computer Vision community for its wide ranging scenes and generalisability to the real world, it has its shortcomings. Captioning systems trained on it alone will never understand particular art styles, or objects outside of the 80 categories covered by the COCO dataset. The advent of image generation techniques such as DALLE:2 [119] presents an opportunity for image captioning systems to go well beyond an 80 category limit and start understanding various stylistic elements of images. Work in this area is in its infancy [120, 121], but previous non-generative unsupervised approaches to image captioning are very promising [18]. We speculate that latent diffusion-based captioning may be a promising avenue of research. However, for this approach to work effectively, image generation techniques will need to develop further. Currently DALLE:2 [119] and similar systems do not understand structure as deeply as would be required for them to be able to replace the training data of a captioning system. As impressive as they are, they can struggle to assemble images correctly when the prompt asks for something that is unlikely in real life. When asked to generate an image of "A monkey riding on the back of a polar bear", DALLE:2 [119] can sometimes struggle to _understand_ the requested spatial relation between the two animals, resulting in the sample result shown in Figure 5. Discovering examples of incorrect relationships in images is not just a case of dreaming up relationships between objects that are unlikely to exist in training data. Conwell and Ullman [122] conducted a participant study where they asked \(169\) people to select generated images that they felt well matched a given prompt.
They found that across the generated images in their study, only 22% matched the original prompt. The authors conclude that _"current image generation models do not yet have a grasp of even basic relations involving simple objects and agents"_[122]. Whilst latent diffusion methods may play a role in the future of image captioning, they have a long way to go in understanding structure before this is possible. In order for Graph Networks [27] to be applicable to diffusion generated training data, the structure within the image and the caption/prompt will need to be consistent. Supervised learning approaches require large amounts of very clean training data in order to work well, so Graph Networks [27] may struggle if the underlying structure in the image data is not as expected. ### _Final Notes_ Vision-language tasks such as image captioning and VQA pose significant opportunities for accessibility technology to be developed for those with sight impairment or severe sight impairment. With widespread automatic alt-text generation on websites and applications enabling queries about images shared online, research in these fields can have substantial impact. However, models trained on current datasets are prone to the biases of sighted humans. The questions asked in VQA datasets, and the captions given in image captioning datasets, do not necessarily cater to the needs of possible end users of this technology. A lot is said in the field about the technology being applied to aid those with various levels of sight impairment, but little action is actually taken. Whilst the release of trained models is promising, making these models available outside of the research community would be beneficial. Another direction the community could take towards using this research to aid those with forms of sight impairment would be to curate a dataset of images with questions posed by those we seek to help, i.e., people with sight impairment. This dataset could also include captions that focus on aspects of an image deemed to be important to those with sight impairment. The inclusion of these captions would yield models that generate captions prioritising the information required by someone with sight impairment, rather than trying to mirror the style of captions generated by sighted humans, as is the case with models trained on existing image captioning datasets such as COCO [33] or Flickr30k [34]. The state of the art (SOTA) in vision-language tasks is currently dominated by large Transformer-based models developed by industrial labs [123, 124, 125]. This makes comparing these models to those discussed in this paper difficult given the model size and compute power used for training. However, there are a few take home points. In the case of image captioning, the Transformer-based model \(\mathcal{M}2\) is outperformed by GNN-based architectures, namely Dual-GCN [70].

Fig. 5: One of the images generated by OpenAI's DALLE:2 [119] given the prompt _'A monkey riding on the back of a polar bear'_. Note the inverted relationship in the generated image. Best viewed in colour.

This leads the authors to posit that there is a strong inductive bias in using imposed graph structures rather than allowing all relationships between detected objects to be processed using self-attention.
The use of a global context graph (taking into account the whole dataset) alongside a local context graph (image level relationships) by Dual-GCN [70] is shown to work extremely well, and this dual graph approach could be the seed for future works. It could be that, given the scale of the models currently achieving SOTA, there are some emergent properties that develop in these models when they reach such a scale. Future work should consider scaling graph-based architectures, such as those discussed in this survey, to the scale of the large models being produced by industry labs. For FVQA and image retrieval, the graph-based approaches have stronger inductive biases for the reasoning stages of the tasks. Both tasks require the processing of graph data (in the case of a knowledge graph in FVQA, or some graph representation of the search space in image retrieval). It is well documented that Transformers do not perform well on sparse graphs (such as knowledge graphs) or large graphs (such as those used in image retrieval). The adoption of GNN-based image captioning techniques has proven promising. Given that this approach is relatively new, there is ample opportunity for further research to be carried out in this field. As shown in Section IV, the majority of image captioning techniques make use of either GCN or GGNN architectures. As GNNs develop and newer, more expressive techniques emerge, the community should move to adopt these over traditional message passing style networks. Models such as GAT [80] may provide advantages over the techniques being used as they incorporate self-attention mechanisms into the architecture, a technique proven to yield impressive results given the popularity of the Transformer. All the GNNs being used in the vision-language tasks discussed in the survey are built on the concept of homophily, i.e., similar nodes are connected by an edge. This is not always the case though, given that a semantic graph connects dissimilar objects that are semantically related. Some of the graphs detailed are homophilic (e.g., the image graph), but many others are not. This leads us to speculate that there are ample research opportunities for applying GNN architectures that respect the amount of homophily or heterophily of the graph being processed. Another direction of research would be investigating combinations of different graph representations (both at the image level and dataset level) to identify combinations that work well together. Using different graph representations will allow for better utilisation of both local and global features. The incorporation of outside knowledge into image captioning could provide an interesting research direction. It is often pointed out that image captioning is a useful accessibility technology for those with sight impairment. However, this assumes the user is an adult with a developed understanding of the world. Image captioning systems may struggle to be applied in a paediatric accessibility setting. Having the model explain the world in greater detail may be of use. Another potential future research direction would be the unification of the three tasks discussed in this paper. Developing a single unified model that could perform competently in all three would herald an important breakthrough. In order to do this, a model would have to have a common intermediary space through which it could map between the text and image spaces. We posit that this space would most likely be graph-based due to graphs' expressive nature.
However, a textual representation may also be performant, as Gao _et al._[105] showed that reasoning in the text space improved performance over graph-based reasoning in VQA. In summary, vision-language tasks such as those discussed in this paper are set to have a fruitful future, with many opportunities for various graph structures to be exploited. ## Acknowledgment The authors would like to thank colleagues in DERI for their thoughtful and insightful feedback as we developed some of the ideas presented in this paper.
2307.00947
A hybrid finite element/neural network solver and its application to the Poisson problem
We analyze a hybrid method that enriches coarse grid finite element solutions with fine scale fluctuations obtained from a neural network. The idea stems from the Deep Neural Network Multigrid Solver (DNN-MG), (Margenberg et al., J Comput Phys 460:110983, 2022; A neural network multigrid solver for the Navier-Stokes equations) which embeds a neural network into a multigrid hierarchy by solving coarse grid levels directly and predicting the corrections on fine grid levels locally (e.g. on small patches that consist of several cells) by a neural network. Such local designs are quite appealing, as they allow a very good generalizability. In this work, we formalize the method and describe main components of the a-priori error analysis. Moreover, we numerically investigate how the size of training set affects the solution quality.
Uladzislau Kapustsin, Utku Kaya, Thomas Richter
2023-07-03T11:43:32Z
http://arxiv.org/abs/2307.00947v1
# A hybrid finite element/neural network solver and its application to the Poisson problem ###### Abstract We analyze a hybrid method that enriches coarse grid finite element solutions with fine scale fluctuations obtained from a neural network. The idea stems from the _Deep Neural Network Multigrid Solver_ (DNN-MG) [1] which embeds a neural network into a multigrid hierarchy by solving coarse grid levels directly and predicting the corrections on fine grid levels locally (e.g. on small patches that consist of several cells) by a neural network. Such local designs are quite appealing, as they allow a very good generalizability. In this work, we formalize the method and describe the main components of the a-priori error analysis. Moreover, we numerically investigate how the size of the training set affects the solution quality. ## 1 Introduction Recent advancements in employing neural networks to approximate solutions to partial differential equations (PDEs) mostly focus on Physics Inspired Neural Networks (PINNs) [2] such as the Deep Ritz method [3]. They leverage the expressive power of neural networks while incorporating physical principles and promise substantial efficiency increases for high dimensional or parameter dependent partial differential equations. One main drawback of PINNs is that they need to be re-trained when the problem parameters change. Also, for classical problems, such as three dimensional fluid dynamics problems, highly sophisticated and well established discretization methods regarding efficiency and accuracy are available that beat neural network approaches by far. The method of this paper was introduced as the main component of DNN-MG [1] for the instationary Navier-Stokes equations. At each time step, a coarse solution is obtained by a classical finite element solver and corrections on finer grids are predicted locally via neural networks. Here, we focus on a simpler linear problem and aim to understand the mechanism of such hybrid approaches by discussing their a-priori errors and via numerical experiments. Let \(\Omega\subset\mathbb{R}^{d},\ d\in\{2,3\}\) be a domain with polygonal boundary. We are interested in the weak solution of Poisson's equation \[-\Delta u=f,\quad u|_{\partial\Omega}=0, \tag{1}\] with a given force term \(f\in H^{-1}(\Omega)\). For a subdomain \(\omega\subseteq\Omega\), let \(\mathcal{T}_{h}(\omega)=\{T_{i}\}_{i=1}^{M}\) be a non-overlapping admissible decomposition of \(\omega\) into convex polyhedral elements \(T_{i}\) such that \(\overline{\omega}=\cup_{i=1}^{M}\overline{T}_{i}\). The diameter of element \(T\) is denoted by \(h_{T}\) and \(h=\max_{T\in\mathcal{T}_{h}(\Omega)}h_{T}\). With \(\|\cdot\|_{2}\) we denote the Euclidean norm and for \(v\in C(\overline{\omega})\) we define \[\|v\|_{l^{2}(\omega)}:=\Big{(}\sum_{x\text{ is node of }\mathcal{T}_{h}( \omega)}v(x)^{2}\Big{)}^{\frac{1}{2}}.\] Moreover, let \(V_{h}\) be the space of piecewise polynomials of degree \(r\geq 1\) satisfying the homogeneous Dirichlet condition on the boundary \(\partial\Omega\), i.e. \[V_{h}:=\left\{\phi\in C(\overline{\Omega})\text{ s.t. }\phi|_{T}\in P^{(r)}(T)\;\forall T\in\mathcal{T}_{h}(\Omega),\ \phi|_{\partial\Omega}=0\right\},\] where \(P^{(r)}(T)\) is the space of polynomials of degree \(r\) on a cell \(T\in\mathcal{T}_{h}(\Omega)\).
We assume that there is a hierarchy of meshes \[\mathcal{T}_{H}(\Omega):=\mathcal{T}_{0}\preccurlyeq\mathcal{T}_{1}\preccurlyeq\cdots\preccurlyeq\mathcal{T}_{L}:=\mathcal{T}_{h}(\Omega),\] where we denote by \(\mathcal{T}_{l-1}\preccurlyeq\mathcal{T}_{l}\) that each element of the fine mesh \(T\in\mathcal{T}_{l}\) originates from the uniform refinement of a coarse element \(T^{\prime}\in\mathcal{T}_{l-1}\), for instance, uniform splitting of a quadrilateral or triangular element into four and of a hexahedral or tetrahedral element into eight smaller ones, respectively. Accordingly we have the nesting \(V_{h}^{(l-1)}\subset V_{h}^{(l)},\quad l=1,\dots,L,\) where \(V_{h}^{(l)}\) is the space defined on mesh level \(l\). With a patch \(\mathcal{P}\) we refer to a polyhedral subdomain of \(\Omega\); for simplicity we assume that each patch corresponds to a cell of \(\mathcal{T}_{H}(\Omega)\). By \(V_{h}(\mathcal{P})\) we denote the local finite element subspace \[V_{h}(\mathcal{P}):=\mathrm{span}\left\{\phi_{h}|_{\mathcal{P}},\ \phi_{h}\in V_{h}\right\}\] and \(R_{\mathcal{P}}:V_{h}\to V_{h}(\mathcal{P})\) denotes the restriction to the local patch space, defined via \[R_{\mathcal{P}}(u_{h})(x_{i})=u_{h}(x_{i})\quad\text{for each node }x_{i}\in\mathcal{T}_{h}(\mathcal{P}).\] The prolongation \(P_{\mathcal{P}}:V_{h}(\mathcal{P})\to V_{h}\) is defined by \[P_{\mathcal{P}}(v)(x)=\begin{cases}\frac{1}{n}v(x)&x\text{ is a node of }\mathcal{T}_{h}(\mathcal{P}),\ n\in\mathbb{N}\text{ being the number of patches containing the node }x,\\ 0&\text{otherwise.}\end{cases} \tag{2}\] The classical continuous Galerkin finite element solution of problem (1) is \(u_{h}\in V_{h}\) s.t. \[(\nabla u_{h},\nabla\phi)=(f,\phi)\quad\forall\phi\in V_{h}, \tag{3}\] with the \(L^{2}\) inner product \((\cdot,\cdot)\). We are interested in the scenario where one prefers not to solve (3) on the finest level \(V_{h}\), due to lacking hardware resources or too long computational times, but in \(V_{H}\) with \(H\gg h\). This is the so-called coarse solution \(u_{H}\in V_{H}\) and it fulfills \((\nabla u_{H},\nabla\phi)=(f,\phi)\quad\forall\phi\in V_{H}\). The key idea of our method is to obtain the fine mesh fluctuations \(u_{h}-u_{H}\) in the form of neural network updates \(w_{\mathcal{N}}\) corresponding to the inputs \(u_{H}\) and \(f\). Hence, the neural network updated solution has the form \(u_{\mathcal{N}}:=u_{H}+w_{\mathcal{N}}\) in the case where the network operates globally on the whole domain. A more appealing setting is where these updates are obtained locally, such that the network acts on the data not on the whole domain at once, but on small patches \(\mathcal{P}\). In this case, while the training is performed in a global manner, the updates are patch-wise and the network updated solution has the form \(u_{\mathcal{N}}:=u_{H}+\sum_{\mathcal{P}}P_{\mathcal{P}}w_{\mathcal{N}}^{\mathcal{P}}\).
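A small sketch of the patch-local prolongation (2), assuming nodal values are stored in plain arrays and that `multiplicity[node]` counts the patches containing each global node (both assumptions of this sketch): summing the prolongated patch updates then averages values at shared interface nodes.

```python
import numpy as np

def prolongate(patch_values, patch_nodes, n_global, multiplicity):
    """Prolongation (2): scatter patch-local nodal values into the global
    coefficient vector, dividing each value by the number of patches that
    contain the node; nodes outside the patch stay zero."""
    w = np.zeros(n_global)
    for val, node in zip(patch_values, patch_nodes):
        w[node] = val / multiplicity[node]
    return w
```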
## 2 Hybrid finite element neural network discretization ### Neural network In this section we introduce the neural network we use and formalize the definition of the finite element/neural network solution. **Definition 1** (_Multilayer perceptron_).: _Let \(L\in\mathbb{N}\) be the number of layers and let \(N_{i}\) be the number of neurons on layer \(i\in\{1,\dots,L\}\). Each layer \(i\in\{1,\dots,L-1\}\) is associated with a nonlinear function \(l_{i}(x):\mathbb{R}^{N_{i-1}}\rightarrow\mathbb{R}^{N_{i}}\) with_ \[l_{i}(x)=\sigma(W_{i}x+b_{i}) \tag{4}\] _and an activation function \(\sigma:\mathbb{R}\rightarrow\mathbb{R}\) applied component-wise. The multilayer perceptron (MLP) \(\mathcal{N}:\mathbb{R}^{N_{0}}\rightarrow\mathbb{R}^{N_{L}}\) is defined via_ \[\mathcal{N}(x)=W_{L}(l_{L-1}\circ\cdots\circ l_{1})(x)+b_{L},\] _where \(W_{i}\in\mathbb{R}^{N_{i}\times N_{i-1}}\) denote the weights and \(b_{i}\in\mathbb{R}^{N_{i}}\) the biases._ ### Hybrid solution On a patch \(\mathcal{P}\), the network receives a tuple \((R_{\mathcal{P}}u_{H},R_{\mathcal{P}}f)\), the restrictions of the coarse solution and of the source term, and it returns an approximation \(w_{\mathcal{N}}^{\mathcal{P}}\approx(u_{h}-u_{H})|_{\mathcal{P}}\in V_{h}(\mathcal{P})\) to the fine-scale update. In order to obtain a globally continuous function, the prolongation (2) is employed. **Definition 2** (_Hybrid solution_).: _The hybrid solution is defined as_ \[u_{\mathcal{N}}:=u_{H}+\sum_{\mathcal{P}}P_{\mathcal{P}}w_{\mathcal{N}}^{\mathcal{P}}, \tag{5}\] _where \(w_{\mathcal{N}}^{\mathcal{P}}=\sum_{i=1}^{N}W_{i}^{\mathcal{P}}\phi_{i}\), \(W_{i}^{\mathcal{P}}\) is the \(i\)-th output of \(\mathcal{N}(y)\) and \(\phi_{i}\) are the basis functions of \(V_{h}(\mathcal{P})\). Here, \(y=\left(U_{H}^{\mathcal{P}},F_{h}^{\mathcal{P}}\right)^{T}\) is the input vector, where \(U_{H}^{\mathcal{P}}\) and \(F_{h}^{\mathcal{P}}\) are the nodal values of \(u_{H}\) on the coarse mesh \(\mathcal{T}_{H}(\Omega)\) and of \(f\) on the mesh \(\mathcal{T}_{h}(\mathcal{P})\), respectively._ For simplicity we will mostly use the notation \[u_{\mathcal{N}}=u_{H}+\mathcal{N}(f)\] in place of (5). Since each function \(u_{H}\in V_{H}\) also belongs to \(V_{h}\), it has the form \(u_{H}=\sum\limits_{i=1}^{N_{dof}}U_{Hh}^{i}\phi_{h}^{i}\) with \(\{\phi_{h}^{i}\}_{i=1}^{N_{dof}}\) being the basis of the fine finite element space \(V_{h}\) and \(U_{Hh}\) being the coefficient vector of the interpolation of \(u_{H}\) into \(V_{h}\). As we update the coarse solution \(u_{H}\) on fine mesh nodes, this procedure can be considered as a simple update of the coefficients \(U_{Hh}^{i}\), i.e. \[u_{\mathcal{N}}=\sum_{i=1}^{N_{dof}}(U_{Hh}^{i}+W_{\mathcal{N}}^{i})\phi_{h}^{i}\in V_{h},\] or simply \(U_{\mathcal{N}}:=U_{Hh}+W_{\mathcal{N}}\) being the coefficient vector of \(u_{\mathcal{N}}\). ### Training The neural network is trained using fine finite element solutions obtained on the mesh \(\mathcal{T}_{h}(\Omega)\) and with the loss function \[\mathcal{L}(u_{h},u_{H};w_{\mathcal{N}}):=\frac{1}{N_{tr}N_{P}}\sum_{i=1}^{N_{tr}}\sum_{\mathcal{P}}\|(u_{h}^{f_{i}}-u_{H}^{f_{i}})-w_{\mathcal{N}}^{f_{i}}\|_{l^{2}(\mathcal{P})}^{2}, \tag{6}\] where \(N_{tr}\) is the size of the training set and \(N_{P}\) is the number of patches. Here, \(w_{\mathcal{N}}^{f_{i}}\) stands for the finite element function defined by the network update \(\mathcal{N}(f_{i})\) on the patch \(\mathcal{P}\). The training set \(\mathcal{F}=\{f_{1},\dots,f_{N_{tr}}\}\) consists of \(N_{tr}\in\mathbb{N}\) source terms \(f_{i}\) together with the corresponding coarse and fine mesh finite element solutions \(u_{H}^{f_{i}}\) and \(u_{h}^{f_{i}}\), respectively.
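A minimal PyTorch sketch of Definition 1 and the loss (6); the layer sizes and the patch data layout are placeholders, not the configuration used in the paper's experiments.

```python
import torch
import torch.nn as nn

def mlp(sizes, act=nn.Tanh):
    """MLP of Definition 1: affine layers, activation on all but the last."""
    layers = []
    for i in range(1, len(sizes)):
        layers.append(nn.Linear(sizes[i - 1], sizes[i]))
        if i < len(sizes) - 1:
            layers.append(act())
    return nn.Sequential(*layers)

net = mlp([18, 512, 512, 25])   # hypothetical per-patch input/output sizes

def loss_fn(batch):
    """Loss (6): mean squared nodal misfit between the true fine-scale update
    u_h - u_H and the predicted update, averaged over samples and patches.
    batch: list of (y, u_h_nodal, u_H_nodal) per-patch tensors."""
    total = 0.0
    for y, u_h, u_H in batch:
        w = net(y)                          # predicted update on the patch
        total += ((u_h - u_H) - w).pow(2).sum()
    return total / len(batch)
```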
## 3 On the a-priori error analysis The difference between the exact solution \(u\in H_{0}^{1}(\Omega)\) of (1) and the hybrid solution \(u_{\mathcal{N}}\) from (5) can be split as \[\|u-u_{\mathcal{N}}\|\leq\min_{f_{i}\in\mathcal{F}}\Big{(}\|u-u_{h}\|+\|u_{h}-u_{h}^{f_{i}}\|+\|u_{h}^{f_{i}}-u_{\mathcal{N}}^{f_{i}}\|+\|u_{\mathcal{N}}^{f_{i}}-u_{\mathcal{N}}\|\Big{)}, \tag{7}\] with \(u_{h}^{f_{i}},u_{\mathcal{N}}^{f_{i}}\in V_{h}\) being the finite element solution and the neural network updated solution corresponding to the source term \(f_{i}\), respectively. Let us discuss the individual terms in (7). * \(u-u_{h}\) is the fine mesh finite element error. Estimates of this error are well-known in the literature and are of \(\mathcal{O}(h^{r})\) in the \(H^{1}\) semi-norm. * \((u_{h}-u_{h}^{f_{i}})\) is a _data approximation error_ and in the \(H^{1}\) semi-norm it can be bounded by \(\|f-f_{i}\|_{-1}\) due to the stability of the finite element method. * \((u_{h}^{f_{i}}-u_{\mathcal{N}}^{f_{i}})\) is a _network approximation error_ and is introduced by the approximation properties of the network architecture. This is bounded by a tolerance \(\epsilon\) which depends on the accuracy with which the minimization problem (6) is solved. * \(u_{\mathcal{N}}^{f_{i}}-u_{\mathcal{N}}=(u_{H}-u_{H}^{f_{i}})+(\mathcal{N}(f)-\mathcal{N}(f_{i}))\) consists of a _generalization error_ of the network and a further error term depending on the richness of the data set. While the term \(u_{H}^{f_{i}}-u_{H}\) can be handled via the stability of the finite element method, the remaining term requires a stability estimate of the neural network. Overall, an estimate of \[\|\nabla(u-u_{\mathcal{N}})\|\leq c\Big{(}h^{r}\|f\|_{r+1}+\epsilon+\min_{f_{i}\in\mathcal{F}}\big{\{}\|f-f_{i}\|_{-1}+\|\nabla(\mathcal{N}(f)-\mathcal{N}(f_{i}))\|\big{\}}\Big{)} \tag{8}\] can be obtained for sufficiently smooth source term \(f\) and domain \(\Omega\). Improvements of this estimate with the consideration of patch-wise updates are part of ongoing work. ### Stability of the neural network The network dependent term of (8) is linked with the stability of the network. For a study of the importance of Lipschitz regularity in generalization bounds we refer to [4]. **Lemma 1**.: _Let \(\mathcal{N}\) be a multilayer perceptron (Def. 1) and let \(\sigma:\mathbb{R}\to\mathbb{R}\) satisfy \(|\sigma(y)-\sigma(y_{i})|\leq c_{0}|y-y_{i}|\) with \(c_{0}>0\). Then, on each patch \(\mathcal{P}\), for the inputs \(y\) and \(y^{f_{i}}\) and the corresponding FE functions \(\mathcal{N}(f)\) and \(\mathcal{N}(f_{i})\) (uniquely defined by the network updates) it holds that_ \[\|\mathcal{N}(f)-\mathcal{N}(f_{i})\|_{\mathcal{P}}\leq c\cdot c_{0}^{N_{L}}\cdot c_{W}\cdot h^{d}\|y-y^{f_{i}}\|_{2}, \tag{9}\] _where_ \[c_{W}:=\prod_{j=1}^{N_{L}}\|W^{j}\|_{2}.\] Proof.: The definition of the network gives \[\|\mathcal{N}(f)-\mathcal{N}(f_{i})\|_{l^{2}(\mathcal{P})}=\|W^{N_{L}}(z_{N_{L}-1}(y)-z_{N_{L}-1}(y^{f_{i}}))\|_{2}\leq\|W^{N_{L}}\|_{2}\cdot\|z_{N_{L}-1}(y)-z_{N_{L}-1}(y^{f_{i}})\|_{2}, \tag{10}\] where \(z_{i}=l_{i}\circ\cdots\circ l_{1}\) and the \(l_{i}\) are as defined in (4).
By using the definition of \(z_{j}\) and the Lipschitz constant of \(\sigma(\cdot)\) we obtain for an arbitrary layer \(j\) \[\begin{split}\|z_{j}(y)-z_{j}(y^{f_{i}})\|_{2}&=\|\sigma(W^{j}z_{j-1}(y))-\sigma(W^{j}z_{j-1}(y^{f_{i}}))\|_{2}\leq c_{0}\|W^{j}\left(z_{j-1}(y)-z_{j-1}(y^{f_{i}})\right)\|_{2}\\ &\leq c_{0}\|W^{j}\|_{2}\cdot\|z_{j-1}(y)-z_{j-1}(y^{f_{i}})\|_{2}.\end{split} \tag{11}\] Then, by applying (11) recursively from the second to the last layer we obtain \[\|z_{N_{L}-1}(y)-z_{N_{L}-1}(y^{f_{i}})\|_{2}\leq c_{0}^{N_{L}-1}\prod_{j=1}^{N_{L}-1}\|W^{j}\|_{2}\cdot\|y-y^{f_{i}}\|_{2}.\] Hence, by applying this to (10) and using the inequality \[\|v\|_{\mathcal{P}}^{2}\leq ch^{2d}\|v\|_{l^{2}(\mathcal{P})}^{2}\quad\forall v\in V_{h}(\mathcal{P})\] we arrive at the claim. **Corollary 1**.: _Lemma 1 leads to_ \[\|\nabla(\mathcal{N}(f)-\mathcal{N}(f_{i}))\|\leq c_{inv}c_{1}\Big{(}c_{\Omega}h^{-1}\|f-f_{i}\|_{-1}+h^{d}\sum_{\mathcal{P}}\|f-f_{i}\|_{l^{2}(\mathcal{P})}\Big{)}\] _with the constant \(c_{1}=c\cdot c_{0}^{N_{L}}\cdot c_{W}\) arising from the Lemma above and \(c_{inv}\) and \(c_{\Omega}\) arising from inverse and Poincaré estimates, respectively._ Proof.: The definition of the inputs together with the triangle inequality and the inequality \[\|v\|_{l^{2}(\mathcal{P})}^{2}\leq h^{-2d}\|v\|_{\mathcal{P}}^{2}\quad\forall v\in V_{h}(\mathcal{P})\] provides \[\|y-y^{f_{i}}\|_{2}\leq\|u_{H}-u_{H}^{f_{i}}\|_{l^{2}(\mathcal{P})}+\|f-f_{i}\|_{l^{2}(\mathcal{P})}\leq h^{-d}\|u_{H}-u_{H}^{f_{i}}\|_{\mathcal{P}}+\|f-f_{i}\|_{l^{2}(\mathcal{P})}\] for each patch \(\mathcal{P}\). In the whole domain this, with Poincaré's inequality, leads to \[\|\mathcal{N}(f)-\mathcal{N}(f_{i})\|\leq c_{1}\big{(}c_{\Omega}h^{-1}\|\nabla(u_{H}-u_{H}^{f_{i}})\|+h^{d}\sum_{\mathcal{P}}\|f-f_{i}\|_{l^{2}(\mathcal{P})}\big{)}.\] The stability of the coarse discrete solution and the inverse estimate show the claim.
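The constant \(c_{W}\) from Lemma 1 is directly computable for a given dense network: a short sketch that multiplies the spectral norms of the weight matrices. For the \(\tanh\) activation used in the experiments below, \(c_{0}=1\).

```python
import torch

def lipschitz_constant_bound(net, c0=1.0):
    """Upper bound c0^(#layers) * prod_j ||W_j||_2 as in Lemma 1; the spectral
    norm of each weight matrix is its largest singular value."""
    c_w, n_layers = 1.0, 0
    for p in net.parameters():
        if p.dim() == 2:                 # weight matrices (skip bias vectors)
            c_w *= torch.linalg.matrix_norm(p, ord=2).item()
            n_layers += 1
    return c0 ** n_layers * c_w
```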
_The embedding into the multilayer perceptron is usually performed with the \(\mathrm{reshape}_{N}\) (\(\mathbb{R}^{N^{2}}\to\mathbb{R}^{N\times N}\)) and \(\mathrm{flatten}_{N}\) (\(\mathbb{R}^{N\times N}\to\mathbb{R}^{N^{2}}\)) operators, so that the dimensions of the convolutional layers match those of the dense layers._

**Remark 2**.: _In a scenario where a dense layer \(j\) of the MLP is replaced with a convolutional layer, equation (11) must be modified as_

\[\begin{split}\|z_{j}(y)-z_{j}(y^{f_{i}})\|_{F}&=\|\sigma(W^{j}*z_{j-1}(y))-\sigma(W^{j}*z_{j-1}(y^{f_{i}}))\|_{F}\\ &\leq c_{0}\|W^{j}*(z_{j-1}(y)-z_{j-1}(y^{f_{i}}))\|_{F}\leq c_{0}\|W^{j}\|_{F}\cdot\|z_{j-1}(y)-z_{j-1}(y^{f_{i}})\|_{F}.\end{split}\]

_Hence, for a neural network with an index set of dense layers \(S_{d}\) and convolutional layers \(S_{c}\), the result (9) holds with the modified constant_

\[c_{W}=\prod_{j\in S_{d}}\|W^{j}\|_{2}\prod_{j\in S_{c}}\|W^{j}\|_{F},\]

_taking into account that \(\|\operatorname{reshape}(\cdot)\|_{F}=\|\cdot\|_{2}\) and \(\|\operatorname{flatten}(\cdot)\|_{2}=\|\cdot\|_{F}\)._

## 4 Numerical experiments

We consider the two-dimensional Poisson equation on the unit square \(\Omega=(0,1)^{2}\) with homogeneous Dirichlet boundary conditions. The training data is picked randomly from the set of source terms

\[\mathcal{F}:=\Big\{f(x,y)=\sum_{i=1}^{4}\alpha_{i}\sin\big(\beta_{i}\pi(x+C_{i})\big),\ C_{1},C_{2}\in[0,1],\ C_{3},C_{4}\in[0,\tfrac{1}{2}],\\
\alpha_{1}=\alpha_{2}=\tfrac{1}{2},\ \alpha_{3}=\alpha_{4}=\tfrac{1}{10},\ \beta_{1}=\beta_{2}=2,\ \beta_{3}=\beta_{4}=4\Big\} \tag{12}\]

together with the corresponding coarse and fine finite element solutions \(u_{H}\) and \(u_{h}\), respectively. We employ a multilayer perceptron as described in Definition 1 with 4 hidden layers, each with 512 neurons, and \(\sigma(\cdot)=\tanh(\cdot)\) as activation function. We train it using the Adam optimizer [5] and the loss function \(\mathcal{L}\) from (6).

Figure 2 shows the mean error of the proposed method with respect to a reference solution, which is one level finer than the target one. Here we consider the error on training and testing datasets of different sizes. We also consider different refinement levels, i.e., \(h=H/2,H/4\) and \(H/8\). The \(x\)-axis corresponds to the fine step size \(h\) and the \(y\)-axis to the mean error. The two topmost lines (blue) show the error of the coarse solution, which is used as input to the neural network. The two bottom-most lines (green) show the error of the fine solution, used for the computation of the loss. The remaining lines depict the errors of the proposed method for training data of different sizes. We observe that, given enough data, one is able to get arbitrarily close to the fine solutions used for training.

Figure 2 also shows an example of how the loss function behaves during training. Here we have trained a network for 400 epochs and have used learning rate decay with a factor of 0.5 every 100 epochs. Due to this, one can observe significant drops in the value of the loss function at 100, 200 and 300 epochs. Figure 3 shows an example of the coarse, fine and network solutions for a particular right hand side from the test data. Here we observe that the quality of the network solution is significantly better than the quality of the original coarse solution.

## Acknowledgements

The authors acknowledge the support of the GRK 2297 MathCoRe, funded by the Deutsche Forschungsgemeinschaft, Grant Number 314838170.
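As a closing illustration, the network-dependent stability constant \(c_{W}=\prod_{j}\|W^{j}\|_{2}\) of Lemma 1 can be evaluated directly from the weight matrices of a trained multilayer perceptron. The following minimal Python sketch is our own illustration, not the authors' implementation; the layer sizes are assumptions. With \(\sigma=\tanh\) one has \(c_{0}=1\), so \(c_{W}\) dominates the Lipschitz bound.

```python
import numpy as np

def stability_constant(weights):
    """c_W of Lemma 1: product of the spectral norms of the weight matrices."""
    return float(np.prod([np.linalg.norm(W, ord=2) for W in weights]))

# Hypothetical architecture: 4 hidden layers of width 512, scalar output.
rng = np.random.default_rng(0)
sizes = [128, 512, 512, 512, 512, 1]
weights = [rng.normal(scale=1.0 / np.sqrt(n_in), size=(n_out, n_in))
           for n_in, n_out in zip(sizes[:-1], sizes[1:])]

# With tanh, |tanh(a) - tanh(b)| <= |a - b|, i.e. c_0 = 1, so Lemma 1 reads
# ||N(f) - N(f_i)||_P <= c * c_W * h^d * ||y - y^{f_i}||_2 on each patch P.
print(f"c_W = {stability_constant(weights):.3e}")
```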
2310.05566
Aggregated f-average Neural Network applied to Few-Shot Class Incremental Learning
Ensemble learning leverages multiple models (i.e., weak learners) on a common machine learning task to enhance prediction performance. Basic ensembling approaches average the weak learners' outputs, while more sophisticated ones stack a machine learning model in between the weak learners' outputs and the final prediction. This work fuses both aforementioned frameworks. We introduce an aggregated f-average (AFA) shallow neural network which models and combines different types of averages to perform an optimal aggregation of the weak learners' predictions. We emphasise its interpretable architecture and simple training strategy, and illustrate its good performance on the problem of few-shot class incremental learning.
Mathieu Vu, Emilie Chouzenoux, Ismail Ben Ayed, Jean-Christophe Pesquet
2023-10-09T09:43:08Z
http://arxiv.org/abs/2310.05566v3
# Aggregated \(f\)-average Neural Network for Interpretable Ensembling

###### Abstract

Ensemble learning leverages multiple models (i.e., weak learners) on a common machine learning task to enhance prediction performance. Basic ensembling approaches average the weak learners' outputs, while more sophisticated ones stack a machine learning model in between the weak learners' outputs and the final prediction. This work fuses both aforementioned frameworks. We introduce an _aggregated_ \(f\)-_average_ (AFA) shallow neural network which models and combines different types of averages to perform an optimal aggregation of the weak learners' predictions. We emphasise its interpretable architecture and simple training strategy, and illustrate its good performance on the problem of few-shot class incremental learning.

ensemble learning, estimator aggregation, weakly supervised learning, few-shot learning, incremental learning.

## I Introduction

### _Ensemble learning_

Ensemble learning (or ensembling) is a set of methods which leverage an ensemble of models (also called weak learners), instead of relying on a single learner, to perform a given machine learning task (e.g., classification). While ensembling is obviously more demanding in terms of computing resources, it can achieve better accuracy and generalisation, improve overall stability, and reduce prediction variance and bias. Two main phases are identified in the process of building an ensemble model, namely the training of the weak learners and the fusion of their outputs [1]. The former focuses on producing an ensemble of diverse models, which is a crucial step in ensemble learning [2, 3, 4, 5]. For example, bootstrap aggregating (or bagging) [6] trains each model on a different subset of the training data to produce diverse weak learners. Output fusion gathers the outputs from each weak learner of the ensemble and combines them to produce the final prediction [7, 8, 9]. One can distinguish two categories of methods for output fusion in ensemble learning [1]. The most basic one is to average the weak learners' outputs or, in the case of classification, to use a majority voting scheme [10]. Different types of averages can be used (e.g., arithmetic, geometric, harmonic), and weights can be included to further refine the results. Those weights can be set using various kinds of criteria, for example based on the weak learners' isolated performance [11]. The second category of methods uses meta-learners. They consist in plugging in an additional model, responsible for taking advantage of the weak learners. Mixture of experts is a popular variant of meta-learners, where a gating network selects the weak learner that is most suited to produce the correct prediction given a certain input [12, 13]. A more straightforward output fusion based on meta-learners is the stacking of an additional learning model. Taking the weak learners' outputs as input, it learns the best combination to assemble the unified prediction [14, 15].

### _Contribution_

Our contribution, in this work, is to introduce _aggregated_ \(f\)-_average_ (AFA) neural networks (NNs), based on a novel architecture for the output fusion phase of ensemble learning. It consists of a shallow neural network modelling different types of averages (arithmetic, geometric, harmonic, etc.) and is able, through supervised learning, to combine and/or select them optimally. Thanks to a specific architecture including original nonlinear activations and constrained weights, it is easily interpretable.
To illustrate the performance of AFA neural networks against the state of the art, we describe their application and implementation in the currently popular setting of few-shot class incremental learning (FSCIL). The paper is organised as follows. First, in section II, we present the architecture of our AFA model along with its training process. We then introduce, in section III, the FSCIL problem and describe our ensembling approach in this context. Experiments on several datasets highlight the benefits of our model when compared with other ensemble output fusion methods.

## II Methodology

### _Ensembling through averaging_

Let \(K\) machine learning models, trained for a common task (e.g., classification), produce \(K\) outputs \((x_{k})_{1\leq k\leq K}\), assumed to be vectors in \(\mathbb{R}^{N}\). In ensemble learning, those \(K\) outputs are combined during an output fusion phase in order to produce a single, expectedly better, prediction for the task at hand. A naive method is to average the outputs. We summarize in Table I common expressions for weighted averages, with \((\omega_{k})_{1\leq k\leq K}\) nonnegative reals such that \(\sum_{k=1}^{K}\omega_{k}=1\). These classical averages are special cases of the weighted \(f\)-average \(f^{-1}\big(\sum_{k=1}^{K}\omega_{k}f(x_{k})\big)\) (applied componentwise), obtained for suitable choices of the function \(f\) listed in Table II. Equations (8)-(9) express this \(f\)-average in matrix form: for each \(j\in\{1,\ldots,J\}\), an estimate \(\tilde{x}_{j}\in\mathbb{R}^{N}\) is obtained by applying \(f_{j}\) componentwise, mixing with a weight matrix \(W_{j}\in[0,+\infty)^{N\times NK}\) whose rows lie on the unit simplex, and inverting through \(f_{j}^{-1}\), where \(\mathbf{x}\in\mathbb{R}^{NK}\) stacks columnwise the inputs \((x_{k})_{1\leq k\leq K}\). The resulting joint aggregate estimate of the \(J\) outputs \((\widetilde{x}_{j})_{1\leq j\leq J}\) is defined as

\[\widetilde{x}=\sum_{j=1}^{J}A_{j}\tilde{x}_{j}=A\begin{bmatrix}\tilde{x}_{1}\\ \vdots\\ \tilde{x}_{J}\end{bmatrix}, \tag{10}\]

with, for every \(j\in\{1,\ldots,J\}\), \(A_{j}\in[0,+\infty)^{N\times N}\), and \(A\in[0,+\infty)^{N\times NJ}\) the rowwise stacking of the \((A_{j})_{1\leq j\leq J}\) matrices. Operations (8)-(10) are equivalent to feeding \(\mathbf{x}\) into a neural network with \(J\) sub-networks of the form presented in Figure 1, operating in parallel, followed by a linear layer involving the weight matrix \(A\). We further add a final activation function, \(g\colon\mathbb{R}^{N}\to\mathbb{R}^{N}\), to control the domain of the output. For instance, a softmax activation can be used to get nonnegative outputs summing to one, in a classification context. The resulting network, called _aggregated \(f\)-average_, is displayed in Figure 2. It has a limited number \(JN^{2}(K+1)\) of linear parameters, namely the entries of the matrices \(W_{1},\ldots,W_{J}\), and \(A\). The training of these parameters can follow a classical supervised learning approach.
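Before turning to training, a small numerical sketch may help. The following code is our illustration, not the authors' implementation; the \(\pm\epsilon\) safeguards mirror the \(\epsilon=10^{-4}\) used with the activation functions in Section III, and the exact activation forms of Table II may differ. It shows how the weighted \(f\)-average \(f^{-1}\big(\sum_{k}\omega_{k}f(x_{k})\big)\) recovers the four classical means used later.

```python
import numpy as np

EPS = 1e-4  # same role as the epsilon used with the activations in Sec. III

# (f, f_inverse) pairs defining each quasi-arithmetic ("f-") average.
F_PAIRS = {
    "arithmetic": (lambda x: x, lambda y: y),
    "geometric": (lambda x: np.log(x + EPS), lambda y: np.exp(y) - EPS),
    "harmonic": (lambda x: 1.0 / (x + EPS), lambda y: 1.0 / y - EPS),
    "quadratic": (lambda x: x ** 2, lambda y: np.sqrt(y)),
}

def f_average(xs, weights, kind):
    """Weighted f-average f^{-1}(sum_k w_k f(x_k)), computed componentwise."""
    f, f_inv = F_PAIRS[kind]
    xs, weights = np.asarray(xs, float), np.asarray(weights, float)
    assert np.isclose(weights.sum(), 1.0) and (weights >= 0).all()
    return f_inv(np.tensordot(weights, f(xs), axes=1))

# K = 3 weak learner outputs, each a probability vector in R^N with N = 4.
outputs = np.array([[0.7, 0.1, 0.1, 0.1],
                    [0.4, 0.3, 0.2, 0.1],
                    [0.5, 0.2, 0.2, 0.1]])
w = np.array([0.5, 0.25, 0.25])
for kind in F_PAIRS:
    print(kind, f_average(outputs, w, kind))
```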
Given a sample and its ground truth, the task model loss is computed (e.g., a cross-entropy loss for classification), before updating the weights of all layers using a backpropagation algorithm (e.g., the Adam optimizer [17]). Constraints on the weight matrices \((W_{j})_{1\leq j\leq J}\) can simply be imposed through a projection step of each row of these matrices onto the convex unit simplex set [18], performed after each backpropagation update. Let us remark that this model maintains the interpretability properties of the \(J\) \(f\)-average sub-models it contains. Furthermore, the weights of matrix \(A\) can be viewed as the contribution level of each type of average model.

## III Aggregated \(f\)-average for few-shot incremental learning

### _FSCIL framework and datasets_

We now illustrate the potential of the proposed AFA framework by applying it to the problem of few-shot class incremental learning. FSCIL, recently introduced in [19], focuses on designing machine learning methods that can deal with both incremental [20, 21] and few-shot situations [22, 23, 24, 25]. Specifically, FSCIL aims at including an increasing number of categories in a classification problem, with the extra constraint that only a small number of training samples are available for upcoming classes. This is motivated by the frequent practical situation in computer vision where a model, built to classify certain categories seen during the training phase, must be adjusted to classify images belonging to new classes with only (very) few annotations. The main difficulty revolves around the ability to learn new classes while preventing _catastrophic forgetting_ of classes previously learned, which yields poor performance of standard incremental learning methods. The FSCIL setting considered here consists of \(K\) successive sessions that incrementally provide new categories of images to be classified. The first session (\(k=1\)), also called the base session, is a standard classification problem with a large number \(n_{\text{train}}\) of training samples for each of the \(n_{\text{class\_base}}\) base classes. In subsequent sessions (\(k\in\{2,\ldots,K\}\)), only a small additional number \(n_{\text{shots}}\) of training samples is provided for a limited number \(n_{\text{way}}\) of novel classes. At each session \(k\), the number of classes to predict is

\[N_{k}=n_{\text{class\_base}}+(k-1)n_{\text{way}}. \tag{11}\]

Our experiments focus on four typical FSCIL datasets, whose characteristics are summarized in Table III.

Fig. 2: Structure of the proposed aggregated \(f\)-average neural network. It aggregates \(J\) \(f\)-averages for ensembling, with \(A\in[0,+\infty)^{N\times NJ}\). The activation function \(g:\mathbb{R}^{N}\to\mathbb{R}^{N}\) is selected according to the task (e.g., softmax for classification, linear for regression).

### _Application of aggregated \(f\)-average_

We address FSCIL as the ensembling of successive few-shot learning problems [30]. For each session, we train a few-shot learning classifier that serves as a weak classifier (i.e., weak learner) specialised on its own session in our ensemble model. During the base session, the weak classifier is a standard classifier with a ResNet-18 architecture [31], trained on the larger training set. Then, for the remaining sessions (\(k\geq 2\)), its backbone (i.e., all layers except the last fully connected layer) is frozen and used as a feature extractor. For these sessions, that last fully connected layer (specialised on the base session) is replaced by a few-shot learning method.
We use a state-of-the-art nearest-neighbour classifier that determines the estimated class of a test image by retrieving the closest mean centroid in the feature space [32]. It is trained to classify the \(n_{\text{way}}\) new classes provided by the current session. As described in Figure 5, for a given \(K\), our aggregated \(f\)-average model takes as input the outputs from sessions \(k\in\{1,\ldots,K\}\), of size \(N=N_{K}\) (i.e., the total number of classes, defined in (11)), to form a vector \(\boldsymbol{x}\in\mathbb{R}^{KN_{K}}\). As the weak classifiers from earlier sessions are trained to predict a smaller number of classes, their prediction vectors are zero-padded in order to reach the output size \(N_{K}\). Our AFA model is trained for \(100\) epochs, minimising a cross-entropy loss with the Adam algorithm [17], using an initial learning rate of \(10^{-1}\) decreased by a factor \(10\) at epochs \(40\) and \(70\). The training uses a _prototype rehearsal_ strategy [33], which consists of collecting predictions from all previous weak classifiers on the training sets from all previous sessions to form the training set of the ensemble learning model. It is then fed to the model as mini-batches of size 64. We set \(J=4\), with \((f_{j})_{1\leq j\leq J}\) chosen so as to aggregate four different types of means, namely arithmetic, geometric, harmonic, and quadratic (power-2), using the activation functions presented in Table II with \(\epsilon=10^{-4}\). The weights of the matrices \((W_{j})_{1\leq j\leq J}\) and of the matrix \(A\) are initialised with entries sampled randomly from a uniform distribution between \(0\) and \(\frac{1}{KN_{K}}\), before being projected onto the simplex to comply with the sum-to-one constraint. Our model and all experiments were implemented using Tensorflow [34].

### _Results_

We compare the performance of our proposed model, for increasing values of \(K\), against classic ensembling methods for classification performing output fusion. We try out different types of averages (namely, arithmetic, geometric, harmonic), as well as a majority vote scheme. Performance was also compared against three different types of ensembling neural networks: 1) a shallow neural network with a similar number of parameters and layers as our AFA model, 2) a deeper neural network with five fully connected layers, and 3) a neural network specifically designed for ensembling, including a weighted average layer followed by a fully connected layer for the output [35]. All neural network models, including the AFA model, were trained with the same process, with only slight adjustments of the learning rate parameters to adapt to each model architecture. The results, in terms of accuracy and \(F_{1}\) scores (the larger, the better), are summarized in Table IV and Figure 3. Note that the mean accuracy over all classes might hide performance discrepancies between base and new classes [36], as can be seen in Table IV.

Fig. 3: Comparison of ensemble learning output fusion methods, on FSCIL datasets, in terms of averaged \(F_{1}\) score over all classes, for various sessions \(k\). The closer the \(F_{1}\) score is to 1, the better. As sessions are incrementally added, the classification task becomes increasingly more difficult because of the additional categories to be handled.
Typically, a model trained on the base session only (called _No Fine-tuning_ in Table IV) reaches a decent mean accuracy thanks to its performance on the base session classes (\(k=1\)), while it yields very poor performance on new classes (\(k>1\)). In contrast, our approach AFA performs effective incremental learning, with good accuracy on both base and new classes. Classic averaging shows inconsistent results (see Figure 3), as the optimal type of average depends on the session number and on the dataset. The majority vote scheme produces worse results in early sessions (low \(K\)), when few voters are available, and gains performance with an increasing number of voters. By design, the proposed ensemble method models those different types of average and learns their optimal combination, thus producing significantly better performance in every session. The stacked neural network baselines also show inconsistent results; in particular, the deep neural network's performance drops catastrophically in later sessions (higher \(K\)), revealing training issues. Indeed, these later sessions drastically increase its architecture size, due to the higher number \(K\) of models to ensemble and the higher number of classes \(N_{K}\) to predict, while not providing a significantly larger training set because of the few-shot constraint of the task at hand. Figure 4 illustrates these behaviours. It presents the training losses of the different stacked neural network models, including the proposed AFA, on a specific example. The plots demonstrate our model's stability and fast convergence to a lower loss value than the other methods, which exhibit a plateau at a higher value. The deep NN loss behaviour shows the inability to train a large number of parameters on such a limited amount of data, with a diverging validation loss.

## IV Conclusion

In this article, we presented a novel output fusion method for ensemble learning. Inspired by basic methods, namely averaging and model stacking, our method relies on an original neural network architecture that models multiple types of averages. It is trained to learn an optimal and interpretable selection and/or combination of the multiple weak learners' predictions. We illustrated its operability and efficiency on the problem of few-shot class incremental learning, where it significantly outperforms standard ensemble learning output fusion methods.
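The simplex constraint on the rows of \((W_{j})_{1\leq j\leq J}\) and \(A\), enforced above by a projection after initialisation and after each backpropagation update, can be realised with a standard sort-based Euclidean projection. The following sketch is our own illustration (function names are ours), in the spirit of, but not necessarily identical to, the routine cited as [18].

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto {w : w >= 0, sum(w) = 1}."""
    u = np.sort(v)[::-1]                                 # sort descending
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]   # last feasible index
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

# Applied rowwise to a weight matrix W_j after each gradient update.
rng = np.random.default_rng(1)
W = rng.normal(size=(4, 6))
W = np.apply_along_axis(project_to_simplex, 1, W)
assert np.allclose(W.sum(axis=1), 1.0) and (W >= 0).all()
```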
2305.02901
Single Node Injection Label Specificity Attack on Graph Neural Networks via Reinforcement Learning
Graph neural networks (GNNs) have achieved remarkable success in various real-world applications. However, recent studies highlight the vulnerability of GNNs to malicious perturbations. Previous adversaries primarily focus on graph modifications or node injections to existing graphs, yielding promising results but with notable limitations. Graph modification attack~(GMA) requires manipulation of the original graph, which is often impractical, while graph injection attack~(GIA) necessitates training a surrogate model in the black-box setting, leading to significant performance degradation due to divergence between the surrogate architecture and the actual victim model. Furthermore, most methods concentrate on a single attack goal and lack a generalizable adversary to develop distinct attack strategies for diverse goals, thus limiting precise control over victim model behavior in real-world scenarios. To address these issues, we present a gradient-free generalizable adversary that injects a single malicious node to manipulate the classification result of a target node in the black-box evasion setting. We propose Gradient-free Generalizable Single Node Injection Attack, namely G$^2$-SNIA, a reinforcement learning framework employing Proximal Policy Optimization. By directly querying the victim model, G$^2$-SNIA learns patterns from exploration to achieve diverse attack goals with extremely limited attack budgets. Through comprehensive experiments over three acknowledged benchmark datasets and four prominent GNNs in the most challenging and realistic scenario, we demonstrate the superior performance of our proposed G$^2$-SNIA over the existing state-of-the-art baselines. Moreover, by comparing G$^2$-SNIA with multiple white-box evasion baselines, we confirm its capacity to generate solutions comparable to those of the best adversaries.
Dayuan Chen, Jian Zhang, Yuqian Lv, Jinhuan Wang, Hongjie Ni, Shanqing Yu, Zhen Wang, Qi Xuan
2023-05-04T15:10:41Z
http://arxiv.org/abs/2305.02901v1
# Single Node Injection Label Specificity Attack on Graph Neural Networks via Reinforcement Learning

###### Abstract

Graph neural networks (GNNs) have achieved remarkable success in various real-world applications. However, recent studies highlight the vulnerability of GNNs to malicious perturbations. Previous adversaries primarily focus on graph modifications or node injections to existing graphs, yielding promising results but with notable limitations. Graph modification attack (GMA) requires manipulation of the original graph, which is often impractical, while graph injection attack (GIA) necessitates training a surrogate model in the black-box setting, leading to significant performance degradation due to divergence between the surrogate architecture and the actual victim model. Furthermore, most methods concentrate on a single attack goal and lack a generalizable adversary to develop distinct attack strategies for diverse goals, thus limiting precise control over victim model behavior in real-world scenarios. To address these issues, we present a gradient-free generalizable adversary that injects a single malicious node to manipulate the classification result of a target node in the black-box evasion setting. Specifically, we model the single node injection label specificity attack as a Markov Decision Process (MDP) and propose _Gradient-free Generalizable Single Node Injection Attack_, namely G\({}^{2}\)-SNIA, a reinforcement learning framework employing Proximal Policy Optimization (PPO). By directly querying the victim model, G\({}^{2}\)-SNIA learns patterns from exploration to achieve diverse attack goals with extremely limited attack budgets. Through comprehensive experiments over three acknowledged benchmark datasets and four prominent GNNs in the most challenging and realistic scenario, we demonstrate the superior performance of our proposed G\({}^{2}\)-SNIA over the existing state-of-the-art baselines. Moreover, by comparing G\({}^{2}\)-SNIA with multiple white-box evasion baselines, we confirm its capacity to generate solutions comparable to those of the best adversaries.

Graph Neural Networks, Graph Injection Attack, Label Specificity Attack, Reinforcement Learning

## I Introduction

Graph Neural Networks (GNNs), a subfield of deep learning methods, have garnered significant attention among scholars for their ability to model structured and relational data. The utilization of GNNs has demonstrated a remarkable impact in various applications of graph data mining, including node classification [1, 2, 3, 4], link prediction [5, 6, 7, 8], community detection [9, 10, 11, 12], and graph classification [13, 14, 15, 16]. Despite their widespread adoption, recent studies have demonstrated the vulnerability of GNNs to adversarial attacks [17, 18]. Imperceptible but intentionally designed perturbations on graphs can effectively mislead GNNs into making incorrect predictions. Pioneering attack methods against GNNs typically follow the setting of graph modification attack (GMA) [19, 20, 21, 22, 23, 24, 25, 26, 27, 28], where adversaries can directly modify the relationships and features of existing nodes. However, these attack strategies have limited practical meaning, as they require a high level of access authority over the graph, which is impractical under most circumstances. In addition to GMA, another emerging trend in adversarial research focuses on graph injection attack (GIA) [29, 30, 31, 32, 33, 34, 35, 36].
GIA explores a more practical setting where adversaries inject new nodes into the original graph to propagate malicious perturbations, which has proven to be more effective than GMA due to its high flexibility [37]. For instance, in a social network, adversaries do not have permission to alter the existing relationships between users, such as adding or removing friendships. However, adversaries could easily register a fake account, establish a relationship with the target user, and manipulate the behavior of the fake user to dominate the prediction results of GNNs on the target user. Therefore, in this paper, we focus on GIA when performing attacks against GNNs.

The key to GIA lies in generating appropriate malicious injected nodes, as well as their features, that can propagate perturbations along the graph structure to the nodes in the original graph. Several studies, such as [29, 38], have proposed methods for generating injection nodes using statistical or random sampling techniques and influencing existing nodes during the training phase. However, they have been found to perform poorly when directly applied during the inference phase. Additionally, some studies [39, 30, 31, 32] leverage the gradient of the victim model or of a surrogate model to generate injection nodes and implement an evasion attack during the inference phase. These methods perform excellently in the white-box evasion setting, but their performance decreases in the black-box evasion setting due to the divergence between the surrogate architecture and the actual victim model. This performance degradation is especially pronounced in graphs with discrete node features, because the features of the injected node are usually designed as a binary vector to ensure its imperceptibility, resulting in limited disturbance capability. Therefore, we focus on how to select the most effective features of injection nodes within the constraints of a limited attack budget in a discrete feature space. Due to the weakness of gradient-based methods, some novel gradient-free methods have emerged. Ju et al. [40] create a node generator through the Advantage Actor-Critic (A2C) algorithm [41] in the black-box evasion setting, but it focuses on global attack and cannot generate efficient vicious nodes for different tasks (i.e., different target nodes and targeted labels). Therefore, when conducting GIA in a discrete feature space, adversaries must consider the following: (1) **Effectiveness.** How to generate malicious features for the injected node to implement an attack? (2) **Efficiency.** The resulting combinatorial optimization problem is NP-hard, so how can one efficiently search for a good suboptimal solution? (3) **Generalizability.** How to design a generalizable attack algorithm for misleading the victim model into assigning specific labels to different target nodes? Besides the above three considerations, in this work we concentrate on the most challenging and realistic scenario, where the adversary is limited to injecting only one malicious node to control the classification result of a single target node in the black-box evasion setting, namely the single node injection label specificity attack. In this scenario, the adversary only has access to the connection relationships between nodes and the node features, with the permission of querying the victim model in the inference phase.
To this end, we propose the _Gradient-free Generalizable Single Node Injection Attack_, namely G\({}^{2}\)-SNIA, to handle a multitude of diverse attack goals (i.e., varying target nodes and targeted labels) in the black-box evasion setting. Our approach adopts a direct attack strategy that affects the target node through the injected node, exploiting the aggregation process of GNNs. We represent the sequential addition of features to the injected node as a Markov Decision Process (MDP) and map the process of adding a feature to a set of discrete actions. To solve this NP-hard problem, we use the Proximal Policy Optimization (PPO) [42] algorithm, which improves the performance of the deep reinforcement learning (DRL) agent by leveraging the reward function instead of surrogate gradients. Our experiments show that the trained DRL agent performs effectively even in the presence of a large action space. The key contributions of the paper are as follows:

* This study is the pioneering work that investigates graph injection attack in the black-box evasion setting for a diverse range of attack goals, without relying on surrogate gradient information.
* We meticulously formulate the black-box single node injection label specificity attack as an MDP. To dominate the predictions of GNNs trained on graphs with discrete node features, we propose G\({}^{2}\)-SNIA, a novel gradient-free generalizable attack algorithm based on a reinforcement learning framework, to generate effective yet imperceptible perturbations for various attack goals.
* With comprehensive experiments over three acknowledged benchmark datasets and four renowned GNNs in the black-box evasion setting, we demonstrate that G\({}^{2}\)-SNIA outperforms the current state-of-the-art attack methods in terms of attack effectiveness. Specifically, we achieve an average improvement of approximately 5% over the best baselines in terms of attack success rate.
* In addition, we compare the performance of G\({}^{2}\)-SNIA with several baselines that operate within the white-box evasion setting. Our experimental results indicate that even under black-box conditions, G\({}^{2}\)-SNIA is capable of producing solutions comparable to those generated by these baselines. This highlights the efficacy of G\({}^{2}\)-SNIA in achieving successful attacks despite the absence of knowledge about the victim GNNs.

The remainder of the paper is organized as follows: In Section II, we review relevant literature on adversarial attacks on GNNs and graph injection attacks on GNNs. Section III provides a formal definition of the single node injection label specificity attack problem. Our proposed solution, G\({}^{2}\)-SNIA, is presented in Section IV. Section V describes our experimental results. Finally, Section VI concludes with a summary and an outline of promising directions for future work.

## II Related Work

### _Adversarial Attacks on GNNs_

In most existing studies, attacks are launched by modifying edges and features of the original graph [19, 20, 21, 22, 26, 28, 43], significantly degrading the performance of GNN models. Nettack [19] modifies node features and the graph structure guided by the surrogate gradient. RL-S2V [20] uses reinforcement learning to flip edges. MGA (Momentum Gradient Attack) [22] incorporates the momentum gradient algorithm to achieve better attack effects.
DSEM [43] assigns a specific label to a target node, in the black-box evasion setting, by connecting the target node to the highest-degree node among all nodes of the targeted label in the entire graph. However, these pioneering graph modification attack (GMA) methods require high access authority, making them infeasible to implement in real-world scenarios.

### _Graph Injection Attack on GNNs_

To address this dilemma, research on graph injection attack (GIA), a more realistic attack type, has surged. GIA involves the injection of vicious nodes rather than the modification of the original graph. NIPA [29] generates injected node features by adding Gaussian noise to the mean of the node features and executes the attack during the training phase. GANI [38] computes the frequency of feature occurrences among nodes sharing the same label and selects the most frequent features as the injected node features to poison nodes in the original graph. GB-FGSM [39] is a gradient-based greedy method that calculates the gradient of the victim model in the white-box evasion setting, and the gradient of a surrogate model in the black-box evasion setting, to generate malicious injected nodes. G-NIA [31] employs a neural network to model the attack process in order to preserve learned patterns, and implements attacks in the inference phase. In the work most closely related to ours, G\({}^{2}\)A2C [40] leverages a reinforcement learning algorithm to generate injected node features in the black-box evasion setting. However, there are several key differences between G\({}^{2}\)A2C and our proposed method, G\({}^{2}\)-SNIA: (1) G\({}^{2}\)A2C focuses on global attack, whereas G\({}^{2}\)-SNIA is a generalizable adversary that aims to mislead the victim model into assigning specific labels to different target nodes. (2) G\({}^{2}\)A2C conducts attacks by injecting multiple nodes, while G\({}^{2}\)-SNIA allows for the injection of only one extra node and one edge connected to the target node, in order to preserve the predictions of the victim model for the other nodes in the original graph. (3) There are major differences in the reward functions used by G\({}^{2}\)A2C and G\({}^{2}\)-SNIA. (4) G\({}^{2}\)A2C employs an off-policy reinforcement learning algorithm, while we adopt an on-policy method, which significantly reduces the memory overhead.

## III Preliminaries

### _GNNs for Node Classification Task_

**Node Classification**. Following the standard notations in the literature [17], we define \(G=(V,E,\mathbf{X})\) as an attributed graph with \(N\) nodes, where \(V=\{v_{i}\mid i=1,2,\ldots,N\}\) represents the set of nodes, \(E=\{e_{ij}=(v_{i},v_{j})\mid v_{i},v_{j}\in V\}\) represents the set of edges between nodes, and \(\mathbf{X}=[\mathbf{x}_{1},\mathbf{x}_{2},\ldots,\mathbf{x}_{N}]^{T}\in\{0,1\}^{N\times F}\) is the feature matrix, where \(\mathbf{x}_{i}\) denotes the feature vector of node \(v_{i}\) and \(F\) is the feature dimension. The adjacency matrix \(\mathbf{A}\in\{0,1\}^{N\times N}\) contains the information about the node connections, where each component \(\mathbf{A}_{ij}\) indicates whether the edge \(e_{ij}\) exists in the graph. For simplicity, we use \(G=(\mathbf{A},\mathbf{X})\) to refer to an unweighted and undirected attributed graph in this paper. We assign a ground truth label \(y_{i}\in\mathcal{Y}=\{1,2,\ldots,Y\}\) to each node \(v_{i}\) in the graph, where \(Y\) denotes the total number of labels. In many real-world scenarios, label information is only available for a limited number of nodes.
Therefore, we partition the nodes in the graph into training and test sets. The training set \(V_{L}\subset V\) consists of labeled nodes, where each node \(v_{i}\in V_{L}\) is associated with \(y_{i}\in\mathcal{Y}\). The test set \(V_{U}\subset V\) consists of unlabeled nodes whose labels need to be predicted, and \(V_{L}\cap V_{U}=\emptyset\). We also define a set of target nodes \(V_{tar}\subset V_{U}\), each node of which will be attacked. The goal of node classification is to assign labels to each unlabeled node \(v\in V_{U}\) by a classifier \(\mathbf{Z}^{(G)}=f_{\theta}(G)\), where \(\mathbf{Z}^{(G)}\in\mathbb{R}^{N\times Y}\) represents the probability distribution matrix, and \(\theta=\{\mathbf{W}^{(1)},\mathbf{W}^{(2)},\ldots\}\) represents the parameter set of the classifier.

**Graph Neural Networks**. A typical GNN layer applied to a target node \(v_{t}\) can be expressed as an aggregation process:

\[\mathbf{h}_{v_{t}}^{l+1}=\phi(\alpha_{v_{t},v_{t}}^{l}\mathbf{h}_{v_{t}}^{l}+\sum_{v_{u}\in V\backslash v_{t}}\alpha_{v_{t},v_{u}}^{l}\mathbf{h}_{v_{u}}^{l}), \tag{1}\]

where \(\phi(\cdot):\mathbb{R}^{F_{in}}\rightarrow\mathbb{R}^{F_{out}}\) is a vector-valued function and \(\alpha_{v_{i},v_{j}}^{l}\) represents the weight assigned to the feature \(\mathbf{h}_{v_{j}}^{l}\) of node \(v_{j}\) during the aggregation process on node \(v_{i}\). Initially, \(\mathbf{h}_{v_{i}}^{0}=\mathbf{x}_{i}\). Taking GCN as an example:

\[\mathbf{h}_{v_{t}}^{l+1}=\phi(\frac{1}{\tilde{d}_{v_{t}}}\mathbf{h}_{v_{t}}^{l}+\sum_{v_{u}\in\mathcal{N}_{1}(v_{t})}\frac{1}{\sqrt{\tilde{d}_{v_{t}}\tilde{d}_{v_{u}}}}\mathbf{h}_{v_{u}}^{l}), \tag{2}\]

where \(\tilde{d}_{v_{i}}=\tilde{\mathbf{D}}_{ii}\), \(\tilde{\mathbf{D}}\in\mathbb{R}^{N\times N}\) is the degree matrix of \(\tilde{\mathbf{A}}\in\mathbb{R}^{N\times N}\), \(\tilde{\mathbf{A}}=\mathbf{A}+\mathbf{I}\) is the adjacency matrix with self-loops, and \(\mathcal{N}_{1}(v_{t})\) is the set of one-hop neighbors of node \(v_{t}\). Here, \(\phi(\mathbf{h})=\sigma(\mathbf{h}\mathbf{W})\), where \(\mathbf{h}\in\mathbb{R}^{F_{in}}\) is an input vector, \(\sigma(\cdot)\) is the activation function, and \(\mathbf{W}\in\mathbb{R}^{F_{in}\times F_{out}}\) is a weight matrix for transformation.

### _Problem Definition_

The goal of the single node injection label specificity attack is to control the classification result of a single target node in an extremely limited setting, while minimizing the impact on other nodes in the graph. We define the adversarial graph \(\hat{G}=(\hat{\mathbf{A}},\hat{\mathbf{X}})\) as the original graph \(G\) after undergoing small perturbations. The new graph can be expressed as:

\[\hat{\mathbf{A}}=\begin{bmatrix}\mathbf{A}&\hat{\mathbf{e}}\\ \hat{\mathbf{e}}^{T}&0\end{bmatrix},\hat{\mathbf{X}}=\begin{bmatrix}\mathbf{X}\\ \hat{\mathbf{x}}^{T}\end{bmatrix}. \tag{3}\]

Here \(\hat{\mathbf{e}}\in\{0,1\}^{N}\) represents the relationship between the original nodes and the injected node, where \(\hat{\mathbf{e}}_{i}=1\) if the injected node \(v_{inj}\) is connected to the original node \(v_{i}\in V\). Moreover, \(\hat{\mathbf{x}}\in\{0,1\}^{F}\) denotes the feature vector of the injected node \(v_{inj}\).
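As an illustration of (3), the single-node injection with one edge to the target node can be assembled as follows. This is a sketch of ours, not the authors' code; the function name and toy graph are hypothetical.

```python
import numpy as np

def inject_node(A, X, target, x_inj):
    """Build (A_hat, X_hat) as in Eq. (3): one new node linked only to `target`."""
    N = A.shape[0]
    e = np.zeros(N, dtype=A.dtype)
    e[target] = 1                       # single edge to the target node
    A_hat = np.block([[A, e[:, None]],
                      [e[None, :], np.zeros((1, 1), dtype=A.dtype)]])
    X_hat = np.vstack([X, x_inj[None, :]])
    return A_hat, X_hat

# Toy graph: N = 4 nodes, F = 5 binary features, attacking node 2.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
X = (np.arange(20).reshape(4, 5) % 2)
x_inj = np.zeros(5, dtype=X.dtype)      # features are chosen by the attacker later
A_hat, X_hat = inject_node(A, X, target=2, x_inj=x_inj)
assert A_hat.shape == (5, 5) and X_hat.shape == (5, 5)
```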
We formalize the objective function as:

\[\begin{split}\min_{\hat{G}}&\ \mathcal{L}_{atk}(v_{t},y_{t},\hat{G})=-\ln\hat{\mathbf{Z}}_{v_{t},y_{t}}^{(\hat{G})}\\ s.t.&\ \theta^{*}=\operatorname*{arg\,min}_{\theta}\sum_{v_{i}\in V_{L}}-\ln\mathbf{Z}_{v_{i},y_{i}}^{(G)}\\ &\ \|\hat{\mathbf{e}}\|_{0}\leq\Delta_{e}\\ &\ \|\hat{\mathbf{x}}\|_{0}\leq\Delta_{f},\end{split} \tag{4}\]

where \(v_{t}\in V\) is the target node to be attacked, \(y_{t}\) is the targeted label, \(\hat{G}\) is the perturbed graph, \(\hat{\mathbf{Z}}^{(\hat{G})}=f_{\theta^{*}}(\hat{G})\) is the probability distribution matrix after the perturbation, \(\|\cdot\|_{0}\) is the \(L_{0}\) norm, \(\theta^{*}\) is the parameter set of the classifier optimized on the original graph, and \(\mathcal{L}_{atk}\) is the negative cross-entropy loss with respect to the targeted label \(y_{t}\) for the target node \(v_{t}\). This means that the objective of the adversary is to maximize the confidence level of the targeted label on the target node. In order to achieve the attack goal, the injected node is only allowed to connect to the target node, i.e., \(\hat{\mathbf{e}}_{i}=1\) if \(v_{i}=v_{t}\), and \(\Delta_{e}=1\). Thus, the perturbation is limited to the generation of the malicious features of the injected node within the feature budget \(\Delta_{f}\). For the convenience of the reader, the notations used in the paper are summarized in Table I.

## IV Proposed Method

We present the _Gradient-free Generalizable Single Node Injection Attack_ (G\({}^{2}\)-SNIA), a novel approach for performing single node injection label specificity attacks in the black-box evasion setting. Our proposed method combines generalization capabilities with high attack performance. The overall framework of G\({}^{2}\)-SNIA is illustrated in Fig. 1. The key idea behind our proposed framework is to use a DRL agent to iteratively perform actions to fool the victim model. More specifically, given a graph \(G\), a target node \(v_{t}\) with targeted label \(y_{t}\), and an initial injected node \(v_{inj}\) connected to \(v_{t}\) with initial feature vector \(\hat{\mathbf{x}}\in\{0\}^{F}\), the DRL agent adds a feature (i.e., a word from the bag-of-words) to the injected node in a step-by-step manner. In the following sections, we describe the DRL environment and the PPO algorithm used to train the DRL agent.

### _Environment_

We model the proposed single node injection label specificity attack as a model-free Markov Decision Process (MDP) \(\langle\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R},\gamma\rangle\). Here, \(\mathcal{S}\) refers to the set of states, \(\mathcal{A}\) denotes the set of actions, \(\mathcal{R}\) is the reward function, and \(\gamma<1\) is the discount factor that represents the relative importance of immediate rewards versus future rewards. In model-free methods, the state transition probability function \(\mathcal{P}\) is considered unknown, and the objective of the DRL agent is to learn the optimal policy directly, without necessarily learning a complete model of the environment.

**State**. The state \(s_{t}\) contains the intermediate adversarial graph \(\hat{G}_{t}\) at time step \(t\), a target node \(v_{t}\), and a targeted label \(y_{t}\). To capture information about the target node \(v_{t}\) in the non-Euclidean structure of the adversarial graph \(\hat{G}_{t}\), we extract the 2-hop subgraph \((c_{v_{t}},\hat{G}_{t})\) of the target node \(v_{t}\), which combines both topological and feature information to represent the node information.
Then we group all nodes in the original graph based on the classification results of the classifier and pool the node information within each group to obtain the targeted label information \(\mathbf{l}_{y_{t}}\).

**Action**. In our attack scenario, we constrain the injected node \(v_{inj}\) to connect exclusively with the target node \(v_{t}\), thereby enabling us to focus on generating the feature vector of the injected node \(v_{inj}\). To this end, we employ an iterative strategy whereby the action taken at time step \(t\), denoted by \(a_{t}\), adds a feature to the injected node \(v_{inj}\) (i.e., \(\hat{\mathbf{x}}_{i}=1\) if the DRL agent selects feature \(i\)). In addition, by employing an invalid-action-masking technique, we define a mask \(\mathbf{m}_{t}\) at time step \(t\) to prevent the repeated selection of actions within an episode. The trajectory of our proposed model-free MDP is given by \((s_{0},a_{0},r_{0},s_{1},a_{1},\ldots,s_{T-1},a_{T-1},r_{T-1},s_{T})\), where \(s_{T}\) denotes the terminal state and \(r_{t}\) represents the intermediate reward that depends on the current state \(s_{t}\) and the decision \(a_{t}\) made at time step \(t\).

**Reward**. A well-designed reward function is essential to ensure that a deep reinforcement learning algorithm converges. Given the prolonged learning trajectory of the DRL agent in its environment, providing diverse intermediate rewards at the various states along the trajectory can be more advantageous than furnishing a sparse reward at the terminal state or handing out uninformative rewards such as \(\pm 1\) throughout the trajectory. Diverse intermediate rewards provide valuable guidance to the DRL agent as it explores the environment, leading to more efficient decisions and ultimately improving its overall performance.

Fig. 1: The overall framework of G\({}^{2}\)-SNIA.

The proposed guiding reward, denoted as \(r_{t}\) at time step \(t\), is defined as follows:

\[\begin{split}r_{t}&=\mathcal{L}_{atk}(v_{t},y_{t},\hat{G}_{t})-\mathcal{L}_{atk}(v_{t},y_{t},\hat{G}_{t+1})\\ &=-\ln\hat{\mathbf{Z}}_{v_{t},y_{t}}^{(\hat{G}_{t})}+\ln\hat{\mathbf{Z}}_{v_{t},y_{t}}^{(\hat{G}_{t+1})}.\end{split} \tag{5}\]

Here, we leverage the discrepancy in entropy as the reward function, computed by the classifier on the adversarial graph across consecutive time steps. This design guides the DRL agent towards actions that maximize the reduction of entropy in each step along its trajectory.

**Terminal**. The attack budget \(\Delta_{f}\) is imposed as a constraint on the number of malicious features allowed to be added, to ensure the imperceptibility of the injected node \(v_{inj}\). Thus, when the DRL agent has added the maximum number of features (\(\Delta_{f}\)) to the injected node \(v_{inj}\), it ceases to take any further actions. At the terminal state \(s_{T}\), the adversarial graph \(\hat{G}_{T}\) comprises only one additional injected node and one extra edge connecting the injected node to the target node, in addition to those already present in the original graph \(G\).

### _Single Node Injection Label Specificity Attack via PPO_

**Embedding of State**. In the aforementioned context, the state \(s_{t}\) encompasses the intermediate adversarial graph \(\hat{G}_{t}\) at time step \(t\), along with the target node \(v_{t}\) and the targeted label \(y_{t}\).
To extract the target node information, G\({}^{2}\)-SNIA utilizes graph convolutional aggregation on the 2-hop subgraph \((c_{v_{t}},\hat{G}_{t})\) of the target node \(v_{t}\), which integrates both topological and feature information. The representation of the target node \(v_{t}\), denoted as \(\mathbf{n}_{v_{t}}\in\mathbb{R}^{F}\), is defined as follows:

\[\mathbf{n}_{v_{t}}=\frac{1}{\hat{d}_{v_{t}}}\mathbf{X}_{v_{t}}+\sum_{v_{u}\in\hat{\mathcal{N}}_{1}(v_{t})}\frac{1}{\sqrt{\hat{d}_{v_{t}}\hat{d}_{v_{u}}}}\mathbf{X}_{v_{u}}, \tag{6}\]

where \(\hat{d}_{v_{i}}\) is the degree of node \(v_{i}\) in the subgraph \((c_{v_{t}},\hat{G}_{t})\) with self-loops, and \(\hat{\mathcal{N}}_{1}(v_{t})\) refers to the set of one-hop neighbors of the target node \(v_{t}\) in the same 2-hop subgraph. To improve the representation of the targeted label \(y_{t}\) and to unify the dimensions of the label representation and the node representation, we represent labels by pooled node representations instead of one-hot encodings. This involves grouping all nodes in the original graph \(G\) based on their classification results obtained from the classifier, and pooling the node information within each group. The representation of the targeted label \(y_{t}\), denoted as \(\mathbf{l}_{y_{t}}\in\mathbb{R}^{F}\), is defined as follows:

\[\mathbf{l}_{y_{t}}=\mathrm{Pool}(\{\mathbf{H}_{v_{i}}\mid v_{i}\in L_{y_{t}}\}). \tag{7}\]

Here, \(L_{y_{t}}=\{v_{i}\mid v_{i}\in V,\underset{y\in\mathcal{Y}}{\arg\max}\ \mathbf{Z}_{v_{i},y}^{(G)}=y_{t}\}\) denotes the set of nodes whose label is assigned to \(y_{t}\) by the classifier on the original graph \(G\). The matrix \(\mathbf{H}=\tilde{\mathbf{D}}^{-\frac{1}{2}}\tilde{\mathbf{A}}\tilde{\mathbf{D}}^{-\frac{1}{2}}\mathbf{X}\in\mathbb{R}^{N\times F}\) represents the nodes in the original graph \(G\) using graph convolutional aggregation. \(\mathrm{Pool}\) is a pooling function, and we adopt mean pooling in this work. In summary, the embedding of the state at time step \(t\) is represented by \(\mathcal{E}_{t}\in\mathbb{R}^{2F}\), which can be expressed as follows:

\[\mathcal{E}_{t}=\mathrm{Concat}(\mathbf{n}_{v_{t}},\mathbf{l}_{y_{t}}), \tag{8}\]

where \(\mathrm{Concat}\) denotes the concatenation operation of vectors, \(\mathbf{n}_{v_{t}}\) is the representation of the target node \(v_{t}\), and \(\mathbf{l}_{y_{t}}\) is the representation of the targeted label \(y_{t}\).

**Policy Network**. We leverage a multilayer perceptron (MLP) as the policy network to generate an unbounded vector. It is not appropriate to utilize this vector directly as a probability distribution over actions, even after applying the \(Softmax\) function, due to the potential for repeated action selection within an episode. To address this issue, the output of the policy network undergoes additional processing to ensure that it represents a valid probability distribution.
Consequently, the resulting policy, denoted as \(\pi_{\theta}(\cdot|s_{t})\in\mathbb{R}^{F}\), is defined as follows:

\[\pi_{\theta}(\cdot|s_{t})=\mathrm{Softmax}(\mathrm{MLP}(\mathcal{E}_{t})\odot\mathbf{m}_{t}+\mathcal{I}\odot(1-\mathbf{m}_{t})), \tag{9}\]

where \(\theta\) represents the parameter set of the policy network, \(\mathbf{m}_{t}\) is the mask at time step \(t\), which prevents the selection of actions that have already been taken in the current episode, \(\odot\) represents the Hadamard product, and \(\mathcal{I}\in\{-\infty\}^{F}\) is a constant vector with each element set to negative infinity. The \(Softmax\) function applied to the processed output of the policy network yields the probability distribution \(\pi_{\theta}(a|s_{t})\) of the unmasked actions \(a\) given state \(s_{t}\), with the probabilities of the masked actions set to zero to prevent repeated selection. Notably, action selection differs between the training and inference phases: during training, we use the Gumbel-Max trick [44] to sample an action from the probability distribution, whereas during inference we simply select the action with the highest probability.

**Value Network**. Using another MLP as the value network, the value function estimates the expected return of a given state; it is employed to evaluate the desirability of actions in the present state and thereby enables the DRL agent to improve its strategy. The value function, denoted as \(V_{\theta}(s_{t})\in\mathbb{R}\), can be formalized as:

\[V_{\theta}(s_{t})=\mathrm{MLP}(\mathcal{E}_{t}), \tag{10}\]

where \(\theta\) represents the parameter set of the value network.

**Training Algorithm**. The optimization of the G\({}^{2}\)-SNIA model requires an iterative procedure that alternates between experience collection and parameter updates until optimal performance is achieved. In the experience collection phase, the interactions between the DRL agent and the environment are recorded as \((s_{t},r_{t},a_{t},\pi(a_{t}|s_{t}))\) in a replay buffer \(\mathcal{M}\). To reduce memory and computational demands, we store the state embedding \(\mathcal{E}\) instead of the full state, i.e., the adversarial graph, the target node, and the targeted label. During the parameter update phase, we leverage the generalized advantage estimation (GAE) technique [45] to calculate the advantage function from the collected experiences. The advantage function \(A_{t}^{\pi_{\theta}}\) for a given trajectory segment \([t_{a},t_{b}]\) at time step index \(t\) is defined as follows:

\[A_{t}^{\pi_{\theta}}=\sum_{l=0}^{t_{b}-t}(\gamma\lambda)^{l}\delta_{t+l}, \tag{11}\]

where \(\delta_{t}=r_{t}+\gamma V_{\theta}(s_{t+1})-V_{\theta}(s_{t})\) is the Temporal-Difference (TD) error and \(\lambda\) is a hyper-parameter. Thus, the estimated return \(G_{t}\) can be expressed as:

\[G_{t}=A_{t}^{\pi_{\theta}}+V_{\theta}(s_{t}). \tag{12}\]

Moreover, the parameter update phase of the G\({}^{2}\)-SNIA model involves three key losses: the policy loss \(\mathcal{L}_{p}\), the entropy loss \(\mathcal{L}_{e}\), and the value loss \(\mathcal{L}_{v}\).
To optimize the parameters of the policy \(\pi_{\theta}\), we start by computing the probability ratio \(r_{t}(\theta)\), defined as the ratio of the current policy \(\pi_{\theta}(a_{t}|s_{t})\) to the policy used during the experience collection phase \(\pi_{\theta_{old}}(a_{t}|s_{t})\), i.e., \(r_{t}(\theta)=\frac{\pi_{\theta}(a_{t}|s_{t})}{\pi_{\theta_{old}}(a_{t}|s_{t})}\). In particular, \(r_{t}(\theta_{old})=1\) at the first optimization step of each parameter update phase. To perform conservative policy iteration, we define the clipped probability ratio \(\hat{r}_{t}(\theta)\) as follows:

\[\hat{r}_{t}(\theta)=Clip(r_{t}(\theta),1-\epsilon,1+\epsilon), \tag{13}\]

where \(\epsilon>0\) is the clip coefficient that controls the clip range, and \(Clip(\cdot)\) is the clip function that limits the range of its input:

\[Clip(x,min,max)=\begin{cases}min,&x\leq min\\ x,&min<x<max\\ max,&x\geq max\end{cases}. \tag{14}\]

Therefore, given a tuple \((s_{t},r_{t},a_{t},\pi(a_{t}|s_{t}))\in\mathcal{M}\), the policy loss \(\mathcal{L}_{p}\) can be expressed as:

\[\mathcal{L}_{p}=-\min(r_{t}(\theta)A_{t}^{\pi_{old}},\hat{r}_{t}(\theta)A_{t}^{\pi_{old}}), \tag{15}\]

where \(A_{t}^{\pi_{old}}\) is the advantage function computed using the policy \(\pi_{\theta_{old}}\) and the value function \(V_{\theta_{old}}\) during the experience collection phase. The policy loss \(\mathcal{L}_{p}\) is designed to enhance the action selection behavior of the DRL agent by increasing the probabilities assigned to better actions. In contrast, the entropy loss \(\mathcal{L}_{e}\) promotes exploration (i.e., the degree of unpredictability in the agent's action selection) and is defined as follows:

\[\mathcal{L}_{e}=\sum_{a\in\mathcal{A}}\pi_{\theta}(a|s_{t})\ln\pi_{\theta}(a|s_{t}), \tag{16}\]

where \(\mathcal{A}\) is the action space. Lastly, the value loss \(\mathcal{L}_{v}\) is aimed at minimizing the discrepancy between the estimated and actual return, which enables the value function to effectively update its predictions of future rewards, and is calculated as:

\[\mathcal{L}_{v}=\begin{cases}\dfrac{1}{2}(V_{\theta}(s_{t})-G_{t})^{2},&|V_{\theta}(s_{t})-G_{t}|<1\\ |V_{\theta}(s_{t})-G_{t}|,&otherwise\end{cases}. \tag{17}\]

Therefore, given a minibatch \(\mathcal{B}\subset\mathcal{M}\), the final loss for G\({}^{2}\)-SNIA is formulated as:

\[\mathcal{L}=\frac{1}{|\mathcal{B}|}\sum_{\mathcal{B}}\big(\mathcal{L}_{p}+\mathcal{L}_{v}+\beta\mathcal{L}_{e}\big), \tag{18}\]

where \(\beta\) is the entropy coefficient that controls the exploration behavior of the DRL agent. The overall training procedure is summarized in Algorithm 1.
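Before stating the full procedure in Algorithm 1, the following sketch (ours; the shapes, toy values, and function names are assumptions) illustrates how the masked policy of Eq. (9) and the per-sample losses of Eqs. (15)-(17) can be evaluated for a single transition.

```python
import numpy as np

def masked_policy(logits, mask):
    """Eq. (9): softmax over unmasked actions; masked ones get probability 0."""
    z = np.where(mask == 1, logits, -np.inf)
    p = np.exp(z - z.max())
    return p / p.sum()

def ppo_losses(pi_new, pi_old, a, advantage, value, ret, eps=0.1):
    """Per-sample losses of Eqs. (15)-(17), with the clip of Eqs. (13)-(14)."""
    ratio = pi_new[a] / pi_old[a]
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps)
    L_p = -min(ratio * advantage, clipped * advantage)          # Eq. (15)
    L_e = float(np.sum(pi_new * np.log(np.clip(pi_new, 1e-12, None))))  # Eq. (16)
    d = abs(value - ret)
    L_v = 0.5 * d ** 2 if d < 1 else d                          # Eq. (17)
    return L_p, L_v, L_e

# Toy step: F = 6 candidate features, one already chosen (masked out).
rng = np.random.default_rng(2)
mask = np.array([1, 1, 0, 1, 1, 1])
pi_old = masked_policy(rng.normal(size=6), mask)
pi_new = masked_policy(rng.normal(size=6), mask)
print(ppo_losses(pi_new, pi_old, a=0, advantage=0.8, value=0.2, ret=0.5))
```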
```
Input: clean graph \(G=(\mathbf{A},\mathbf{X})\), target node set \(V_{tar}\), label set \(\mathcal{Y}\),
       target node \(v_{t}\in V_{tar}\), targeted label \(y_{t}\in\mathcal{Y}\),
       victim model \(f_{\theta^{*}}(\cdot)\), attack budget \(\Delta_{f}\), training iterations \(K\),
       experience collection steps \(S\), parameter update steps \(P\)
Output: the general attack agent \(\pi_{\theta}\)
Initialize the parameters \(\theta_{old}\) of the policy \(\pi_{\theta_{old}}\) and the value function \(V_{\theta_{old}}\);
Initialize the replay buffer \(\mathcal{M}\);
Initialize the state \(s\) with a target node \(v_{t}\) and a targeted label \(y_{t}\) by random sampling;
Initialize the adversarial graph \(\hat{G}\);
while \(epoch<K\) do
    Empty the replay buffer \(\mathcal{M}\);
    while \(step<S\) do
        Compute the state embedding according to Eq. (6), Eq. (7) and Eq. (8);
        Compute the probability distribution of actions according to Eq. (9);
        Sample \(a\) from \(\pi_{\theta_{old}}(\cdot|s)\) by the Gumbel-Max trick;
        Compute the reward \(r\) according to Eq. (5);
        Store \((s,r,a,\pi_{\theta_{old}}(a|s))\) in \(\mathcal{M}\);
        if \(s_{next}\) is a terminal state then
            Initialize \(s_{next}\) with a new target node and targeted label by random sampling;
        end if
        \(s\leftarrow s_{next}\);
    end while
    Compute the advantage function and the estimated return according to Eq. (11) and Eq. (12);
    while \(update\ times<P\) do
        Sample a minibatch \(\mathcal{B}\) randomly from \(\mathcal{M}\);
        Compute the loss function according to Eq. (18) and update the parameter \(\theta\);
        \(\theta_{old}\leftarrow\theta\);
    end while
end while
return \(\pi_{\theta_{old}}\);
```
**Algorithm 1** The training algorithm of the G\({}^{2}\)-SNIA framework

## V Experiments

In this section, we conduct experiments that compare G\({}^{2}\)-SNIA with several baseline methods for label specificity attacks in different attack settings. Our experiments are designed to answer the following research questions:

* **(RQ1)** Compared with several baselines, can G\({}^{2}\)-SNIA effectively perform label specificity attacks against well-trained GNNs?
* **(RQ2)** Can G\({}^{2}\)-SNIA efficiently generate suboptimal malicious node features?
* **(RQ3)** How does G\({}^{2}\)-SNIA perform in terms of attack effectiveness under different budgets without retraining?

### _Experimental Settings_

**Dataset**. In this work, we perform experiments on three well-known public datasets: Cora [46], Citeseer [47], and DBLP [48]. These three datasets are citation networks where nodes correspond to documents and edges represent citation links. The discrete node features are binary vectors representing selected keywords of the corresponding documents. A comprehensive summary of these datasets is provided in Table II. Following the same experimental setup as in [30, 31, 39], we focus only on the largest connected component for convenience. The datasets are randomly partitioned into training (10%), validation (10%), and test (80%) sets. Additionally, we randomly select 1000 nodes from the test set to create a target node set, denoted as \(V_{tar}\), to evaluate the attack performance. To ensure consistency and eliminate performance variations due to different dataset splits in the black-box setting, the training, validation, and test sets remain the same when training both the victim and surrogate models. It is important to note that this implies the baseline methods, which employ the surrogate model, have complete information about the datasets, which deviates from the typical black-box setting.

**Victim GNNs**.
For all datasets, we implement four well-known GNNs as victim models, including GCN [2] and its variants SGCN [49], TAGCN [50] and GCNII [51]. It is noteworthy that the surrogate model proposed in Nettack [19] is typically used to generate perturbations in the gray-box setting, so we chose the same model as in [19, 30] as the surrogate model. The accuracies of all models on the test sets are shown in Table III. **Baseline Methods**. As single node injection attacks represent an emerging type of attack, first proposed in [30], only a few studies have focused on this topic, such as AFGSM [30], G-NIA [31], and GB-FGSM [39]. To demonstrate the effectiveness of our proposed method G\({}^{2}\)-SNIA, we design a random method and a greedy-based method called MostAttr as our baselines. We also compare G\({}^{2}\)-SNIA with the two state-of-the-art single node injection attack methods. * Random. We randomly sample a node from the set of nodes labeled as the targeted label by the classifier on the original graph and take its feature vector as the features of the injected node, leveraging the aggregation process to attack the target node. * MostAttr. From the perspective of the node features, we employ a greedy-based method to generate the feature vector of the malicious node. Given a targeted label, we first count the non-zero occurrences of each of the \(F\) feature dimensions over all nodes whose classification results, as determined by the victim model on the original graph, belong to the targeted label. Subsequently, we choose the \(\Delta_{f}\) feature indices with the highest counts as the non-zero indices of the injected node (a minimal sketch of this procedure follows the list). * AFGSM [30]. AFGSM is a targeted node injection poison attack that calculates the approximation of the optimal closed-form solutions to generate malicious features for the injected nodes. It sets to 1 the \(\Delta_{f}\) discrete features with the largest approximate gradients. In our attack scenario, we do not impose the constraints of co-occurrence pairs of features, and we improve the loss function to \(-\hat{\mathbf{Z}}_{v_{t},y_{t}}\) to maximize its attack ability as much as possible in the black-box evasion setting. It is worth noting that AFGSM has already outperformed existing attack methods (such as Nettack and Metattack) and exhibits performance very close to G-NIA on datasets with discrete features. Therefore, we only compare our proposed G\({}^{2}\)-SNIA with the state-of-the-art AFGSM. * GB-FGSM [39]. Inspired by the Fast Gradient Sign Method (FGSM) in computer vision, GB-FGSM is a single node injection backdoor attack that uses greedy step-by-step rather than one-step optimization in the graph domain to modify the classification result of the target node to the targeted label without retraining. At each step, it selects only the most promising feature based on the maximum gradient among all features, and the iteration continues until the budget \(\Delta_{f}\) is exhausted.
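As referenced in the MostAttr description above, the following is a minimal sketch of this greedy feature-counting baseline; the function name and tensor shapes are illustrative assumptions.

```python
import torch

def most_attr_features(X, preds, y_t, budget):
    # X: (N, F) binary node feature matrix; preds: victim model's predicted
    # labels on the clean graph; y_t: targeted label; budget: Delta_f.
    counts = X[preds == y_t].sum(dim=0)            # non-zero counts per feature
    top_idx = torch.topk(counts, k=budget).indices # highest-count feature indices
    x_inj = torch.zeros(X.shape[1])                # injected node's feature vector
    x_inj[top_idx] = 1.0
    return x_inj
```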
**Parameter Settings**. To ensure a low attack cost, we strictly limit the attack budget \(\Delta_{f}\) to the maximum \(L_{0}\) norm of the feature vectors of the original nodes. We set the hyper-parameters in G\({}^{2}\)-SNIA as follows: the policy network is a 6-layer MLP and the value network is a 4-layer MLP, both with 512 hidden dimensions and the \(Tanh\) activation function. The discount factor \(\gamma\) is set to 0.99, the clip coefficient \(\epsilon\) to 0.1, the hyper-parameter \(\lambda\) to 0.95, and the entropy coefficient \(\beta\) to 0.02; the target steps \(S\) are set equal to the exploration steps of a single DRL agent in an environment multiplied by the number of parallel environments. The batch size \(|\mathcal{B}|\) is set to 512. We utilize the Adam optimizer with a learning rate of \(2\times 10^{-4}\) and linear decay. Additionally, we adopt early stopping with a patience of 20 evaluation epochs, with an evaluation epoch conducted after every 400 training epochs. All experiments are run on a server equipped with an Intel Xeon Gold 5218R CPU, a Tesla A100, and 384GB RAM, running Linux CentOS 7.1. ### _Effectiveness Comparison_ To address **RQ1**, we first evaluate the performance of our proposed method, G\({}^{2}\)-SNIA, in comparison with four baseline methods in the black-box evasion setting. The classification results on the original graph, denoted as Clean, are also included as a lower bound. Table IV presents the attack success rates for different targeted labels. As the results in the table show, all methods, including Random, are capable of performing label specificity attacks against the four victim GNNs. Random has a modest attack effect, indicating that nodes belonging to the targeted label can spread perturbations to the target node via the aggregation process of GNNs. MostAttr performs better than Random, which indicates that an injected node generated solely from feature statistics attacks more effectively than a node sampled from the original graph. Regarding the state-of-the-art attack baselines, both AFGSM and GB-FGSM demonstrate the best performance among all baselines, showing that gradient-based intentional perturbations can effectively mislead the classification results made by victim models for target nodes. Our proposed method either outperforms or performs on par with all baselines across attacks against all victim models. Across all datasets and models, G\({}^{2}\)-SNIA surpasses the best baseline by roughly 5% on average; in some cases, it even reaches an improvement of about 20%. These results indicate that although AFGSM and GB-FGSM rely on the maximum gradient strategy to generate perturbations, their solutions might represent a good local optimum for surrogate models. However, when transferred to victim models, these strategies are insufficient. Without requiring gradient information of victim models, G\({}^{2}\)-SNIA accurately identifies the perturbations with the most significant impact on the results, thereby surpassing the attack performance of transfer attacks that leverage surrogate gradients. We also observe that different victim models exhibit varying sensitivities to perturbations. Generally, GCN and SGCN are more susceptible to perturbations, while GCNII exhibits greater robustness. For example, on the Cora dataset, G\({}^{2}\)-SNIA achieves a 75.99% average success rate on the GCN model but only 54.77% on GCNII.

Fig. 2: Changes in average targeted label confidence on all datasets and victim models. Darker color suggests larger performance promotion.

Meanwhile, we note that some labels are more challenging to execute label specificity attacks on under the same conditions.
For instance, in the Cora dataset and GCNII model, G\({}^{2}\)-SNIA achieves an 83.5% success rate for \(y_{t}=2\) but only 16.8% for \(y_{t}=5\). This discrepancy may be attributed to the distribution of node labels in the training set and the architectures of the classification GNN models. Besides the effectiveness comparison in the black-box setting, we also compare the solutions obtained by our method with the local optimal solutions generated by gradient-based baseline methods in the white-box setting. The results are presented in Table V. We observe that the baselines in the white-box setting exhibit similar performance and significantly outperform the baselines in the black-box setting in most cases, which shows that the divergence between the surrogate architecture and the actual victim model degrades attack performance. G\({}^{2}\)-SNIA outperforms AFGSM (white-box) in most cases and is only slightly inferior to GB-FGSM (white-box), demonstrating that despite having limited information about the victim model, the solutions generated by our proposed method are comparable to the local optimal solutions obtained by gradient-based greedy methods in the white-box setting, i.e., we are still able to obtain a good approximate solution for this NP-hard problem. To further verify the performance of G\({}^{2}\)-SNIA, we compare the attack performance of GB-FGSM (white-box), GB-FGSM (black-box) and G\({}^{2}\)-SNIA by measuring the change in the probability of target nodes being classified as the targeted label by victim models before and after the attack, defined as \(\Delta\hat{\mathbf{Z}}_{v_{t},y_{t}}=\hat{\mathbf{Z}}_{v_{t},y_{t}}-\mathbf{Z}_{v_{t},y_{t}}\). As shown in Fig. 2, the numbers on the x-axis represent the original classification labels of target nodes, i.e., the classification results of target nodes by victim models on the original graph, and the red numbers on the y-axis represent the targeted labels. The value in each cell is \(\frac{1}{|M||L_{i}|}\sum_{M}\sum_{v\in L_{i}}\Delta\hat{\mathbf{Z}}_{v,j}\), where \(i\) is the original label, \(j\) is the targeted label, \(M\) is the set of victim models, and \(L_{i}\) is the set of target nodes originally labeled \(i\). The results in the figure show that G\({}^{2}\)-SNIA is almost as effective as the state-of-the-art white-box attack method and outperforms the baseline in the black-box setting. Interestingly, the data on the diagonal in the figure suggest that single node injection label specificity attacks can also be utilized as a method to enhance node classification.

Fig. 3: Change of the average attack success rate with the amount of exploration.

Fig. 4: The trend of the average probability of attacked nodes belonging to the targeted label under all datasets and victim models.

### _Efficiency Evaluation_ To answer **RQ2**, we investigate the relationship between the exploration quantity and the average attack success rates during the training process of the DRL agents. As shown in Fig. 3, each unit of exploration quantity on the x-axis is \(400\times\frac{S}{\Delta_{f}}\), with \(S\) representing the target steps of experience collection in each epoch. Furthermore, the minimal exploration quantity that G\({}^{2}\)-SNIA requires to surpass the best baseline is also indicated in the figure.
For instance, as shown in Fig. 3(b), the red curve that represents the attack on the GCN model by G\({}^{2}\)-SNIA on the Citeseer dataset exceeds the best baseline (the point where the dotted lines intersect in the figure) after 4 evaluation epochs, where we set \(S\) to 4096, \(\Delta_{f}\) to 54, \(|V_{tar}|\) to 1000, and \(|\mathcal{Y}|\) to 6. It can be estimated that the average exploration quantity for attacking each node and each label is only \(\frac{4\times 400}{1000\times 6}\times\frac{4096}{54}\approx 21\), which is minimal compared to the entire search space of \(\binom{F}{\Delta_{f}}=\binom{3327}{54}\). Additionally, we find that among these three datasets, GCN consistently requires the most exploration quantity to surpass the baseline method compared to the other models. This observation implies that the transfer-based baselines already achieve satisfactory outcomes on the GCN model, so more exploration is needed to surpass them; in contrast, on the other three models G\({}^{2}\)-SNIA manages to match the performance of the baseline method with a smaller search effort. ### _Budget Analysis_ To answer **RQ3**, we document the average probability change of target nodes being classified into targeted labels by victim models under varying attack budgets, as depicted in Fig. 4. The x-axis represents the different attack budgets, while the y-axis denotes the average probability, and the red dotted line signifies the attack budget established in our previous experiments. We observe that the curves in the graph are almost monotonically increasing, implying that as the attack budget grows, the likelihood of target nodes being classified as targeted labels by victim models increases. We also find that the slope of the curves tends to decrease as the attack budget increases, indicating a gradual reduction in the average probability change. This phenomenon indicates that our method initially selects features with a greater impact, allowing the confidence of targeted labels to rise rapidly during the early stages of the attack process. Additionally, even though we set fixed attack budgets while training the DRL agents, the results in the graph show that all curves remain monotonic even when the spent budget exceeds the one fixed during training, without requiring retraining, which implies that the DRL agents can still ensure the quality of their decisions. In most datasets and models, the curves of different targeted labels are relatively concentrated. However, the various curves exhibit distinct growth trends in Fig. 4(d), 4(h) and 4(k), warranting further investigation into the cause of these differences in our future work. ## VI Conclusion In this work, we investigate a gradient-free single node injection label specificity attack for graphs in the black-box evasion setting. We propose G\({}^{2}\)-SNIA, a gradient-free, generalizable adversary that eliminates the risk of error propagation due to inaccurate approximations of the victim model. In contrast to other node injectors that require gradients from the surrogate model, G\({}^{2}\)-SNIA operates without any assumptions about victim models. We formulate the single node injection label specificity attack as an MDP and solve it using a reinforcement learning framework.
Through extensive experiments on three widely recognized datasets and four diverse GNNs, we demonstrate the promising performance of G\({}^{2}\)-SNIA in comparison to state-of-the-art baselines in the black-box setting. To further showcase the effectiveness of G\({}^{2}\)-SNIA, we also compare its attack performance with baselines in the white-box evasion setting, demonstrating that even under black-box conditions, our proposed method can still find solutions that are on par with the baselines. Although G\({}^{2}\)-SNIA demonstrates effectiveness and efficiency, there remain several challenges to address in future work. From the perspective of adversaries, as multi-agent reinforcement learning continues to advance, it is crucial to investigate how cooperation and competition between multiple injected nodes can be employed to implement label specificity attacks on numerous nodes, ultimately yielding more sophisticated and covert offensive measures. Conversely, from a defensive standpoint, it is imperative to ascertain the extent of G\({}^{2}\)-SNIA's effectiveness when applied to robust GNNs, as well as to identify the salient architectural components of GNNs that can effectively withstand both GMA and GIA attacks. We will explore these issues in future research. ## Acknowledgments This work was supported in part by the Key R&D Program of Zhejiang under Grants 2022C01018 and 2021C01117, by the National Natural Science Foundation of China under Grants 61973273, 62103374 and U21B2001, by the National Key R&D Program of China under Grant 2020YFB1006104, and by the Major Key Project of PCL under Grants PCL2022A03, PCL2021A02, and PCL2021A09.
2301.00439
A plug-in graph neural network to boost temporal sensitivity in fMRI analysis
Learning-based methods have recently enabled performance leaps in analysis of high-dimensional functional MRI (fMRI) time series. Deep learning models that receive as input functional connectivity (FC) features among brain regions have been commonly adopted in the literature. However, many models focus on temporally static FC features across a scan, reducing sensitivity to dynamic features of brain activity. Here, we describe a plug-in graph neural network that can be flexibly integrated into a main learning-based fMRI model to boost its temporal sensitivity. Receiving brain regions as nodes and blood-oxygen-level-dependent (BOLD) signals as node inputs, the proposed GraphCorr method leverages a node embedder module based on a transformer encoder to capture temporally-windowed latent representations of BOLD signals. GraphCorr also leverages a lag filter module to account for delayed interactions across nodes by computing cross-correlation of windowed BOLD signals across a range of time lags. Information captured by the two modules is fused via a message passing algorithm executed on the graph, and enhanced node features are then computed at the output. These enhanced features are used to drive a subsequent learning-based model to analyze fMRI time series with elevated sensitivity. Comprehensive demonstrations on two public datasets indicate improved classification performance and interpretability for several state-of-the-art graphical and convolutional methods that employ GraphCorr-derived feature representations of fMRI time series as their input.
Irmak Sivgin, Hasan A. Bedel, Şaban Öztürk, Tolga Çukur
2023-01-01T16:38:12Z
http://arxiv.org/abs/2301.00439v1
# A plug-in graph neural network to boost temporal sensitivity in fMRI analysis ###### Abstract Learning-based methods have recently enabled performance leaps in analysis of high-dimensional functional MRI (fMRI) time series. Deep learning models that receive as input functional connectivity (FC) features among brain regions have been commonly adopted in the literature. However, many models focus on temporally static FC features across a scan, reducing sensitivity to dynamic features of brain activity. Here, we describe a plug-in graph neural network that can be flexibly integrated into a main learning-based fMRI model to boost its temporal sensitivity. Receiving brain regions as nodes and blood-oxygen-level-dependent (BOLD) signals as node inputs, the proposed GraphCorr method leverages a node embedder module based on a transformer encoder to capture temporally-windowed latent representations of BOLD signals. GraphCorr also leverages a lag filter module to account for delayed interactions across nodes by computing cross-correlation of windowed BOLD signals across a range of time lags. Information captured by the two modules is fused via a message passing algorithm executed on the graph, and enhanced node features are then computed at the output. These enhanced features are used to drive a subsequent learning-based model to analyze fMRI time series with elevated sensitivity. Comprehensive demonstrations on two public datasets indicate improved classification performance and interpretability for several state-of-the-art graphical and convolutional methods that employ GraphCorr-derived feature representations of fMRI time series as their input. functional MRI, time series, neural network, graph, classification, connectivity ## 1 Introduction The human brain comprises networks of regions that interactively process information during cognitive processing [1]. In turn, correlated activity within individual functional networks has been associated with unique mental states [2; 3]. Functional MRI (fMRI) is a powerful modality to examine functional networks as it can non-invasively measure whole-brain blood-oxygen-level-dependent (BOLD) signals consequent to neural activity at high spatio-temporal resolution [4; 5]. In fMRI studies, functional connectivity (FC) measures are used to assess similarity of BOLD signals among brain regions [6; 7; 8; 9; 10]. The traditional approach to map FC measures onto mental states is then based on conventional methods such as logistic regression and support vector machines (SVM) [11; 12; 13; 14]. Unfortunately, conventional methods are often insufficiently sensitive to the intricate information patterns in whole-brain fMRI time series [15]. In recent years, the success of deep learning (DL) models at exploring features in high-dimensional datasets has motivated their adoption for fMRI analysis as an alternative to conventional methods [16; 17; 18; 19; 20]. Earlier attempts in this domain have proposed shallow multi-layer perceptron (MLP) [21; 22] and Boltzmann machine (BM) models [23; 16]. Later studies have adopted deeper architectures based on convolutional neural network (CNN) [24; 17; 25], graph neural network (GNN) [26; 18; 27; 28; 29; 30], and transformer [31; 32; 33; 34; 35] models for improved performance. 
Typically, these models start by constructing a set of nodes corresponding to brain regions defined based on an anatomical or functional atlas [36; 37; 38], and receive input features at these nodes based on the FC strength among brain regions [39; 18]. A common approach has been to employ static FC features derived from aggregate correlation measures across the entire duration of fMRI time series [40; 18]. Yet, this approach is insufficiently sensitive to the dynamic inter-regional interactions in the human brain during resting-state or cognitive tasks [41]. While alternative strategies have recently been proposed to assess the temporal variability in FC features, these methods commonly consider instantaneous signal correlations across local time windows within the time series [42; 43; 44]. As such, they do not possess explicit mechanisms to capture delayed correlations between brain regions that can be present in fMRI time series due to hierarchical cognitive processing in the brain or hemodynamic lags in BOLD measurements [45]. In this study, we introduce a plug-in graphical neural network, GraphCorr, that provides enhanced input features to learning-based fMRI models so as to boost their sensitivity to dynamic, lagged inter-regional interactions. To capture dynamic changes in interactions, GraphCorr leverages a novel node embedder module based on a transformer encoder that computes hierarchical embeddings of windowed BOLD signals across the time series. To capture lagged interactions between brain regions, GraphCorr employs a novel lag filter module that computes nonlinear features of cross-correlation between pairs of nodes across a range of time delays. The graph model is initialized with node features taken as embeddings from the node embedder module, and with edge weights taken as lag features from the lag filter module. Afterwards, a message passing algorithm is used to compute enhanced node embeddings that account for dynamic, lagged inter-regional interactions. Here, we demonstrate GraphCorr for gender classification from fMRI scans in two public datasets: the Human Connectome Project (HCP) dataset [46] and the ID1000 dataset from the Amsterdam Open MRI Collection (AOMIC) [47]. GraphCorr is coupled as a plug-in to state-of-the-art baseline models for fMRI analysis including SAGE [48], BrainGNN [18], BrainNetCNN [17] and GCN [29]. Significantly enhanced performance is obtained from each baseline model when coupled with GraphCorr. We devise an explanatory analysis approach for GraphCorr to interpret the time frames and brain regions that most significantly contribute to classification decisions. We show that GraphCorr improves explainability of baseline models, resulting in interpretations that are more closely aligned with prominent neuroscientific findings from the literature. We also demonstrate the benefits of GraphCorr-derived features against features extracted via plug-in recurrent neural networks (RNN) and dynamic FC features computed directly from BOLD signals. ## 2 Related Work Cognitive processes elicit broadly distributed response patterns across the human brain [49]. In turn, process-related information such as stimulus or task variables can be decoded by analyzing resultant multi-variate BOLD signals [50; 51]. Initial studies in this domain employed relatively simple, traditional machine learning (ML) methods for fMRI analysis [41; 52].
These traditional methods rely heavily on feature selection procedures to cope with the intrinsically high dimensionality of fMRI data [53]. Arguably, FC features among brain regions have been most commonly used to capture discriminative information about cognitive processes [21; 17; 39; 18]. Many studies have reported that external variables or disease states can be detected given FC features of individual subjects under resting state [54; 55], cognitive tasks [11; 18], or both [14]. Given their earlier success in fMRI analysis, FC features have also been pervasively adopted in recent DL methods that leverage more complex models to enhance performance [56; 57]. A pervasive approach in DL-based fMRI analysis relies on static FC features as model inputs, where FC between a pair of regions is taken as the aggregate correlation of their BOLD signals across the entire scan. To extract hierarchical latent representations of these features, earlier studies have proposed either relatively compact fully-connected architectures including BM and MLP models [23; 16; 21; 22], or computation-efficient deep architectures including CNN models [25; 17]. Later studies have considered GNN models given their natural fit to analyzing fMRI data that follows an intrinsic connectivity structure [58; 59; 60; 61; 18]. These DL methods have all enabled substantial performance improvements in fMRI analysis over traditional methods. Yet, analyses rooted in static FC features can still yield suboptimal sensitivity to fine-grained temporal information across the fMRI time series [12; 25]. To improve temporal sensitivity, several alternative strategies have been proposed that incorporate time-varying features into DL models for fMRI analysis. A first group of methods pre-compute FC features over moving windows across the time series based on standard correlation measures, and concatenate them across windows to form a higher-dimensional input [62; 63; 28; 43; 44]. While these dynamic FC features promise enhanced temporal sensitivity, they result in elevated complexity due to their high intrinsic dimensionality, which can degrade model performance. A second group of methods instead provide voxel-level BOLD signals spatially encoded via a CNN module. Spatially-encoded BOLD signals are then processed with RNN or transformer models to extract the time-varying information [32; 42]. Yet, CNN modules based on voxel-level inputs can be difficult to train from scratch under limited data regimes. A third group of methods retain static FC features as their input, albeit augmenting them with dynamic features captured by RNN modules that directly encode BOLD signals [42]. Besides elevated model complexity, these methods can suffer from intrinsic limitations of RNNs in terms of vanishing/exploding gradients over the extensive number of time steps in typical fMRI scans [64; 65]. Importantly, a common attribute of these previous approaches is that they primarily consider the temporal variation in instantaneous correlations among brain regions. However, they can elicit suboptimal sensitivity as they lack explicit mechanisms to capture delayed inter-regional interactions that can occur due to hierarchical processing or hemodynamic delays [45]. Here, we propose to improve the temporal sensitivity of downstream fMRI analysis models by integrating a novel plug-in GNN, GraphCorr.
The proposed GraphCorr method uses a novel node embedder module to find contextual embeddings of dynamic FC features based on instantaneous correlations of windowed BOLD signals; and it uses a novel lag filter module to compute embeddings of cross-correlation features of windowed BOLD signals across various time delays. Following a message passing algorithm across the graph, GraphCorr provides enhanced input features that preserve dynamic, delayed correlations among brain regions to a downstream analysis model so as to improve its performance. Unlike methods based on static FC features [17; 18], GraphCorr leverages dynamic FC features to capture the variability in connectivity among brain regions. Unlike methods that receive multiple sets of dynamic FC features across separate time windows [66; 42; 28], GraphCorr fuses its node features across time windows to lower model complexity without sacrificing performance. Unlike methods that employ recurrent architectures that involve sequential processing [42], GraphCorr leverages a transformer encoder on dynamic FC features that enables efficient parallel processing. Unlike methods that solely focus on instantaneous signal correlations [17], GraphCorr adopts an explicit lag filter mechanism to learn delayed cross-correlations among brain regions. ## 3 GraphCorr Analysis procedures for fMRI time series typically start by defining a collection of \(R\) regions of interest (ROI) across the brain based on an anatomical atlas [18; 42]. Voxel-level BOLD signals within each ROI are then averaged to derive ROI-level signals, resulting in \(\mathbf{B}\in\mathbb{R}^{R\times T}\) as the matrix of BOLD signals, where \(T\) denotes the number of time frames. Static FC features are conventionally computed based on Pearson's correlation coefficient of these BOLD signals across ROIs: \(\mathbf{sFC}_{i,j}=\text{Corr}(\mathbf{B}_{i,\cdot},\mathbf{B}_{j,\cdot})\), where \(\mathbf{sFC}\in\mathbb{R}^{R\times R}\) and \(i,j\) are ROI indices. Many previous traditional and learning-based methods use downstream classification models on static FC features, which results in suboptimal temporal sensitivity. Instead, here we propose to extract dynamic FC features of BOLD signals based on a novel GNN plug-in, and to use these enhanced features to improve the performance of downstream classification models. The proposed GraphCorr method forms a graph structure to represent brain connectivity features, leverages node embedder and lag filter modules to capture dynamic, lagged correlations in BOLD signals, and finally performs message passing on the graph to compute enhanced features (Fig. 1). The methodological components and procedures in GraphCorr are described below. ### Graph formation As a learning substrate, GraphCorr first forms a graph \(G(N,E)\) with \(N\) and \(E\) denoting nodes and edges, respectively. The node set \(N=\{r_{i}\,|\,i=1,...,R\}\) includes ROIs defined according to the atlas, whereas the binary edge set is given as \(E=\{e_{i,j}=1\,|\,i=1,...,R;j\in\mathcal{N}(i)\}\), where \(\mathcal{N}(i)\) is the neighborhood of the \(i\)-th node. Edges are defined by thresholding to retain the strongest \(z\%\) of correlation coefficients in \(\mathbf{sFC}\) while excluding self connections, resulting in \(E_{n}\) edges. The node features \(\mathbf{F}=\{f_{i}\,|\,i=1,...,R\}\) are initialized as the time-windowed BOLD signals at each corresponding node to capture local dynamics in the fMRI time series.
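As a concrete illustration of this graph formation step, below is a minimal sketch of computing the static FC matrix and thresholding it to retain the strongest \(z\%\) of correlations; the function name and the exact thresholding details are illustrative assumptions.

```python
import torch

def form_graph(B, z=0.02):
    # B: (R, T) ROI-averaged BOLD signals. Returns the static FC matrix
    # sFC and a binary adjacency E that keeps the strongest z fraction of
    # correlation coefficients, excluding self connections.
    R = B.shape[0]
    sFC = torch.corrcoef(B)                     # Pearson correlations, (R, R)
    off_diag = ~torch.eye(R, dtype=torch.bool)  # mask of non-self pairs
    thresh = torch.quantile(sFC[off_diag], 1.0 - z)
    E = (sFC >= thresh) & off_diag              # binary edge set
    return sFC, E
```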
For this purpose, the scan containing \(T\) time frames is split into \(W\) windows of size \(T_{w}\) and stride value \(s\): \[W=\lfloor\frac{T-T_{w}}{s}\rfloor, \tag{1}\] resulting in a feature tensor \(\mathbf{F}\in\mathbb{R}^{R\times T_{w}\times W}\). ### Architecture To sensitively extract dynamic connectivity features, GraphCorr utilizes a novel node embedder module. To also capture delayed connectivity features, GraphCorr utilizes a novel lag filter module. The two modules are detailed below. **Node embedder module:** Receiving as input time-windowed BOLD signals, this module computes latent representations of dynamic FC features (Fig. 2). First, dynamic FC features are extracted from the time-windowed BOLD signals as: \(\mathbf{FC}_{i,j,w}=\text{Corr}(\mathbf{F}_{i,\cdot,w},\mathbf{F}_{j,\cdot,w})\), where \(w\in\{1,...,W\}\) indicates the window index and \(i\in\{1,...,R\}\), \(j\in\{1,...,R\}\) denote node indices. These FC features are then processed with a transformer encoder where windows across the fMRI time series correspond to the sequence of transformer tokens. Attention calculations are performed on window-specific keys \(K_{w}\in\mathbb{R}^{R\times d}\), queries \(Q_{w}\in\mathbb{R}^{R\times d}\) and values \(V_{w}\in\mathbb{R}^{R\times d}\) derived via learnable linear projections \(U_{q}\), \(U_{k}\) and \(U_{v}\): \[Q_{w}=U_{q}([\mathbf{FC}_{1,\cdot,w},\mathbf{FC}_{2,\cdot,w},...,\mathbf{FC}_{R,\cdot,w}]),\] \[K_{w}=U_{k}([\mathbf{FC}_{1,\cdot,w},\mathbf{FC}_{2,\cdot,w},...,\mathbf{FC}_{R,\cdot,w}]),\] \[V_{w}=U_{v}([\mathbf{FC}_{1,\cdot,w},\mathbf{FC}_{2,\cdot,w},...,\mathbf{FC}_{R,\cdot,w}]), \tag{2}\] where \(d\) is the dimensionality of each attention head. The above computations can be performed separately for \(H\) attention heads. The window-specific attention matrix \(\mathbf{A}_{w}\in\mathbb{R}^{R\times R}\) is then derived as [67]: \[\mathbf{A}_{w}=\text{Att}(Q_{w},K_{w},V_{w})=\text{Softmax}(\frac{Q_{w}K_{w}^{\intercal}}{\sqrt{d}})V_{w}. \tag{3}\] Attention matrices are concatenated across attention heads and propagated to an MLP block following layer normalization: \[\mathbf{EMB}_{w}=\text{MLP}(\mathbf{A}_{w})=\text{GELU}(\mathbf{A}_{w}\mathbf{M}_{1})\mathbf{M}_{2} \tag{4}\] where \(\mathbf{M}_{1}\in\mathbb{R}^{R\times D}\) and \(\mathbf{M}_{2}\in\mathbb{R}^{D\times D}\) denote MLP model parameters, GELU is the Gaussian error linear unit, \(\mathbf{EMB}\in\mathbb{R}^{R\times D\times W}\) are window-specific node embeddings, and \(D\) is the embedding dimensionality with \(D<R\).
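A minimal sketch of the node embedder (Eqs. 2-4) could look as follows; treating each window's \(R\) FC rows as tokens matches the equation shapes above, while the single-head, single-layer configuration and module names are illustrative assumptions.

```python
import torch.nn as nn

class NodeEmbedder(nn.Module):
    # Maps windowed FC matrices (W, R, R) to node embeddings (W, R, D).
    def __init__(self, R, D, heads=1):
        super().__init__()
        self.attn = nn.MultiheadAttention(R, heads, batch_first=True)
        self.norm = nn.LayerNorm(R)
        self.mlp = nn.Sequential(nn.Linear(R, D), nn.GELU(), nn.Linear(D, D))

    def forward(self, FC):
        A, _ = self.attn(FC, FC, FC)   # window-specific attention, Eq. (3)
        return self.mlp(self.norm(A))  # layer norm + MLP block, Eq. (4)
```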
**Lag filter module:** Receiving as input time-windowed BOLD signals, this module computes cross-correlation features across a range of temporal delays (Fig. 3). For this purpose, initial node features from the graph formation stage are zero-padded across the time dimension: \[\mathbf{X}_{i,w}=[\mathbf{0}_{(1\times m)},\mathbf{F}_{i,\cdot,w},\mathbf{0}_{(1\times m)}], \tag{5}\] where \(\mathbf{X}\in\mathbb{R}^{R\times(T_{w}+2m)\times W}\), and \(m\) defines the range of delays \(\tau\in\{-m,-m+1,...,m-1,m\}\) that will be considered in the module. First, cross-correlations are computed between pairs of nodes connected by \(e_{i,j}\) at each lag value separately: \[\rho_{i,j,w,\tau}=\text{Corr}(\mathbf{X}_{i,\cdot,w},\mathbf{X}_{j,\cdot,w},\tau) \tag{6}\] Afterwards, learnable lag filters \(\mathbf{P}_{LF}\in\mathbb{R}^{(2m+1)\times k}\), with \(k\) denoting the number of filters, are used to map cross-correlations onto lag activations: \[\mathbf{LAG}_{\cdot,\cdot,w}=\text{GELU}(\rho_{\cdot,\cdot,w,\cdot}\,\mathbf{P}_{LF}) \tag{7}\] where \(\mathbf{LAG}\in\mathbb{R}^{E_{n}\times W\times k}\) are window-specific lag activations. ### Graph learning The node embedder produces time-windowed node embeddings, **EMB**, that reflect instantaneous inter-regional correlations. The lag filter produces time-windowed lag activations, **LAG**, that reflect delayed inter-regional correlations. To consolidate these feature sets on the graph, node embeddings are taken as node features and lag activations are taken as edge weights (Fig. 1). A message passing algorithm is then run on the graph to compute enhanced FC features. To do this, a message tensor \(\textbf{MES}\in\mathbb{R}^{E_{n}\times(D\times k)\times W}\) is computed between pairs of connected nodes \((r_{i},r_{j})\) as: \[\textbf{MES}_{i,j,w}=\textbf{EMB}_{j,\cdot,w}\textbf{LAG}_{i,j,w}^{\intercal} \tag{8}\] Messages are first averaged across windows, and then propagated to a target node \(r_{i}\) that sums all messages from its one-hop vicinity nodes \(j\in\mathcal{N}(i)\): \[\textbf{AGG}_{i}=\sum_{j\in\mathcal{N}(i)}\frac{1}{W}\sum_{w=1}^{W}\textbf{MES}_{i,j,w}. \tag{9}\]

Figure 1: Overview of GraphCorr. **A.** GraphCorr utilizes two parallel modules to extract dynamic, lagged features of inter-regional correlations across the brain. The node embedder module receives as input time-windowed BOLD signals, and uses a transformer encoder to compute node embeddings of dynamic FC features \(\textbf{EMB}\in\mathbb{R}^{R\times D\times W}\). The lag filter module also receives as input time-windowed BOLD signals, and it computes lag activations due to cross-correlation across a range of lag values \(\textbf{LAG}\in\mathbb{R}^{E_{n}\times W\times k}\). Cross-correlation is calculated only for connected node pairs \((e_{i,j}=1)\). **B.** To consolidate the extracted feature sets on a graph, node embeddings are taken as node features and lag activations are taken as edge weights. A message passing algorithm is then run on the graph to produce enhanced FC features in an output feature matrix, \(\textbf{OUT}\in\mathbb{R}^{R\times D\times(k+1)}\).

This aggregate message is then concatenated with the window-averaged node embedding at \(r_{i}\): \[\mathbf{OUT}_{i}=[\frac{1}{W}\sum_{w=1}^{W}\mathbf{EMB}_{i,\cdot,w},\mathbf{AGG}_{i}], \tag{10}\] where \(\mathbf{OUT}_{i}\in\mathbb{R}^{D\times(k+1)}\) denotes the enhanced features for \(r_{i}\).
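As a minimal sketch of Eqs. (8)-(10), the following computes one round of message passing given the two modules' outputs; the dictionary-based edge storage, loop-based aggregation, and names are illustrative assumptions rather than the authors' implementation.

```python
import torch

def message_passing(EMB, LAG, edges):
    # EMB: (R, D, W) node embeddings; LAG: dict {(i, j): (W, k)} lag
    # activations per connected pair; edges: list of (i, j) with e_ij = 1.
    R, D, W = EMB.shape
    k = next(iter(LAG.values())).shape[1]
    AGG = torch.zeros(R, D, k)
    for i, j in edges:
        # Window-averaged message EMB_{j,.,w} LAG_{i,j,w}^T, Eqs. (8)-(9).
        AGG[i] += torch.einsum('dw,wk->dk', EMB[j], LAG[(i, j)]) / W
    # Concatenate window-averaged embeddings with aggregated messages, Eq. (10).
    emb_mean = EMB.mean(dim=2, keepdim=True)  # (R, D, 1)
    return torch.cat([emb_mean, AGG], dim=2)  # OUT: (R, D, k + 1)
```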
## 4 Methods ### Experimental procedures Demonstrations were performed on fMRI data from the HCP S1200 release1[46] and the ID1000 dataset from the Amsterdam Open MRI Collection (AOMIC)2[47]. In the HCP dataset, preprocessed data from resting-state fMRI scans were analyzed. The first resting-state scan among four sessions was selected for each subject, excluding short scans with \(T<1200\). This resulted in a total of 1093 healthy subjects (594 female and 499 male). In the ID1000 dataset, preprocessed data from task-based fMRI scans recorded during movie watching were analyzed. All scans had a fixed duration of \(T=240\). A total of 881 healthy subjects were examined (458 female and 423 male). For both datasets, two alternative ROI definitions were considered, based on either the Schaefer atlas [68] or the AAL atlas [36]. The Schaefer atlas includes \(R=400\) ROIs within 7 intrinsic networks, whereas the AAL atlas defines \(R=116\) ROIs. Footnote 1: [https://db.humanconnectome.org](https://db.humanconnectome.org) Experiments were conducted on a single NVIDIA Titan Xp GPU using the PyTorch framework. A nested cross-validation procedure was performed with 5 outer and 1 inner folds. Domain adaptation procedures can be employed to improve reliability [69]. Data were three-way split into a training set (70%), a validation set (10%) and a test set (20%) with no subject overlap between the sets. For fair comparison, all models were trained, validated and tested on identical data splits. All models were trained based on cross-entropy loss. For each model, hyperparameters were selected to maximize the average performance across the validation sets. A common set of hyperparameters that were observed to yield near-optimal performance was used across datasets and atlases [70]. Details regarding model implementations are discussed in Section 4.2. ### Comparative analysis GraphCorr was demonstrated on several learning-based methods taken as downstream classification models, including SAGE [48], GCN [29], BrainGNN [18], and BrainNetCNN [17]. For each method, a vanilla downstream model was trained by providing static FC features as model input, and an augmented downstream model was separately trained where GraphCorr was employed as a plug-in to provide the model input. Vanilla and augmented models were obtained with identical training procedures. In all graph models, ROIs in a given brain atlas were taken as nodes, and edge selection was then performed based on correlations of BOLD signals. Edges whose correlation coefficients were in the top \(z=2\%\) were retained, while the remaining edges were discarded. The implementation details of the downstream models and GraphCorr are discussed below. **SAGE:** A GNN model was built based on a module with graph convolution, pooling, and fully-connected layers [48]. SAGE comprised a cascade of two graphical modules with a hidden dimension of 250 and a dropout rate of 0.5. Cross-validated hyperparameters were a learning rate of \(3\times 10^{-3}\), 20 epochs, and a batch size of 12. **GCN:** GCN is a GNN model based on graph convolution, pooling and fully-connected layers [29]. GCN comprised a cascade of two graphical modules with a hidden dimension of 100 and a dropout rate of 0.5. Cross-validated hyperparameters were a learning rate of \(5\times 10^{-3}\), 30 epochs, and a batch size of 12. **BrainGNN:** BrainGNN is a GNN model based on ROI-aware graph convolution, pooling and fully-connected layers [18]. A single graphical module with a hidden dimension of 100 and a dropout rate of 0.5 was used. Cross-validated hyperparameters were a learning rate of \(8\times 10^{-4}\), 80 epochs, and a batch size of 16. **BrainNetCNN:** BrainNetCNN is a CNN model based on convolutional layers with edge-to-edge and edge-to-node filters [17]. The convolutional layers had a hidden dimension of 32 and a dropout rate of 0.1. Vanilla BrainNetCNN expects a 2D input of size \(R\times R\) taken as the static FC matrix. When it was augmented with GraphCorr, its input dimensionality was modified as \(R\times D(k+1)\) for compatibility. Cross-validated hyperparameters were a learning rate of \(2\times 10^{-4}\), 20 epochs, and a batch size of 16.
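To illustrate how GraphCorr serves as a plug-in, below is a minimal sketch of the augmented pipeline, in which GraphCorr-derived features replace the static FC input of a downstream model; the module interfaces are illustrative assumptions.

```python
import torch.nn as nn

class AugmentedModel(nn.Module):
    # graphcorr: maps (R, T) BOLD signals to OUT features of shape
    # (R, D * (k + 1)); downstream: e.g. SAGE, GCN, or BrainNetCNN.
    def __init__(self, graphcorr, downstream):
        super().__init__()
        self.graphcorr = graphcorr
        self.downstream = downstream

    def forward(self, bold):
        feats = self.graphcorr(bold)   # enhanced FC features
        return self.downstream(feats)  # e.g. gender logits
```

Both modules are trained end-to-end with the cross-entropy loss described above, yielding the augmented models reported in Tables 1 and 2.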
Figure 2: The node embedder module. The time-windowed FC features \(\mathbf{FC}_{\cdot,\cdot,w}\in\mathbb{R}^{R\times R}\) at window \(w\) are processed with a transformer encoder with multi-head self-attention (MHSA), layer normalization, and multi-layer perceptron (MLP) layers. The output is a node embedding matrix \(\mathbf{EMB}_{\cdot,\cdot,w}\in\mathbb{R}^{R\times D}\), where \(D<R\) denotes the embedding dimensionality.

**GraphCorr:** The node embedder module was built with a single-layer transformer encoder. Because the scan durations differed across HCP and ID1000, dataset-specific \(T_{w}\) (window size) and \(s\) (stride) were selected, while common \(m\) (maximum lag) and \(k\) (filter count) were used. Accordingly, cross-validated parameters were (\(T_{w}\)=50, \(s\)=30, \(m\)=5, \(k\)=3) for HCP and (\(T_{w}\)=40, \(s\)=15, \(m\)=5, \(k\)=3) for ID1000. ### Explanatory analysis To assess the influence of GraphCorr on interpretability, the vanilla and augmented versions of the trained downstream models were examined. An explanation procedure was devised to identify the brain regions within the fMRI time series that most saliently contribute to the model decisions. First, a gradient-based approach was used to compute a saliency tensor summarizing inter-regional interactions [71; 39]. For vanilla models, gradients were computed with respect to static FC features **sFC**: \[\mathbf{SAL}_{i,j}^{\text{van}}=|\nabla_{\mathbf{sFC}_{i,j}}y_{\text{van}}| \tag{11}\] where \(\mathbf{SAL}^{\text{van}}\in\mathbb{R}^{R\times R}\) and \(y_{\text{van}}\) denotes the model prediction. For augmented models, gradients were computed with respect to time-windowed FC features \(\mathbf{FC}_{i,j,w}\): \[\mathbf{SAL}_{i,j,w}^{\text{aug}}=|\nabla_{\mathbf{FC}_{i,j,w}}y_{\text{aug}}| \tag{12}\] where \(\mathbf{SAL}^{\text{aug}}\in\mathbb{R}^{R\times R\times W}\) and \(y_{\text{aug}}\) denotes the model prediction. Afterwards, an ROI-specific saliency score was computed by aggregating values across the window and interacting-ROI dimensions of the saliency tensor: \[\mathbf{rSAL}^{\text{van}}=\sum_{j=1}^{R}\mathbf{SAL}_{\cdot,j}^{\text{van}} \tag{13}\] \[\mathbf{rSAL}^{\text{aug}}=\sum_{j=1}^{R}(\frac{1}{W}\sum_{w=1}^{W}\mathbf{SAL}_{\cdot,j,w}^{\text{aug}}) \tag{14}\] where \(\mathbf{rSAL}^{\text{van}},\mathbf{rSAL}^{\text{aug}}\in\mathbb{R}^{R}\). For saliency assessment at the level of functional brain networks, the seven intrinsic brain networks defined within the Schaefer atlas were used. For each network, ROI-specific saliency scores were averaged across the regions within the network to obtain a network saliency score per hemisphere. While unsigned ROI-specific saliency scores reflect the relative importance of each region for the model decision, they do not indicate whether the model output is driven by an increase or decrease in BOLD signals within the ROI. To address this question, a post-hoc logistic regression analysis was conducted. First, important windows in the fMRI time series were determined by aggregating values in the saliency tensor across the ROI dimensions: \[\mathbf{wSAL}^{\text{aug}}=\sum_{i=1}^{R}\sum_{j=1}^{R}\mathbf{SAL}_{i,j,\cdot}^{\text{aug}} \tag{15}\] \[w^{*}=\underset{w}{\arg\max}\ \mathbf{wSAL}^{\text{aug}} \tag{16}\] Here, \(\mathbf{wSAL}^{\text{aug}}\in\mathbb{R}^{W}\) denotes the window-specific saliency score used for important window selection. BOLD signals within the most important window were extracted, and thresholded according to intensity to select the top 5 time frames [72; 73; 35]. A logistic regression model was then fit to map the BOLD signal vector across ROIs onto the output class, i.e., performing the same task as the downstream model. The logistic model returns a weight for each ROI: a positive weight indicates that an increase, whereas a negative weight indicates that a decrease, in the ROI's BOLD signal elicits the downstream model's decision.
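A minimal sketch of the gradient-based saliency computation for an augmented model (Eqs. 12, 14-16) could look as follows; the scalar-prediction interface and autograd usage are illustrative assumptions.

```python
import torch

def augmented_saliency(model, FC):
    # FC: (R, R, W) windowed FC features; model returns a scalar prediction.
    FC = FC.clone().requires_grad_(True)
    model(FC).backward()
    SAL = FC.grad.abs()                # |d y_aug / d FC_{i,j,w}|, Eq. (12)
    rSAL = SAL.mean(dim=2).sum(dim=1)  # window-averaged, summed over j, Eq. (14)
    wSAL = SAL.sum(dim=(0, 1))         # summed over ROI pairs, Eq. (15)
    w_star = int(wSAL.argmax())        # most important window w*, Eq. (16)
    return rSAL, w_star
```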
Table 1: Performance of downstream models on the HCP and ID1000 datasets with the Schaefer atlas. Results are listed as mean±std across test folds for vanilla and GraphCorr-augmented versions. Boldface indicates the better performing version of each model.

| Model | Version | HCP Acc (%) | HCP ROC (%) | ID1000 Acc (%) | ID1000 ROC (%) |
| --- | --- | --- | --- | --- | --- |
| SAGE | Vanilla | 75.20 ± 2.84 | 85.29 ± 1.67 | 62.39 ± 2.17 | 68.59 ± 3.76 |
| SAGE | Augmented | **89.57 ± 0.68** | **94.27 ± 1.98** | **81.70 ± 2.67** | **87.02 ± 2.10** |
| GCN | Vanilla | 79.14 ± 2.93 | 86.00 ± 1.47 | 67.84 ± 2.95 | 71.78 ± 3.88 |
| GCN | Augmented | **89.94 ± 2.18** | **94.52 ± 1.61** | **80.80 ± 0.98** | **87.90 ± 1.75** |
| BrainGNN | Vanilla | 72.83 ± 1.98 | 78.85 ± 2.45 | 62.50 ± 1.80 | 65.63 ± 2.81 |
| BrainGNN | Augmented | **84.72 ± 1.33** | **92.97 ± 0.98** | **79.32 ± 1.96** | **88.10 ± 2.19** |
| BrainNetCNN | Vanilla | 82.52 ± 2.80 | 91.23 ± 1.21 | 75.45 ± 2.01 | 83.65 ± 2.05 |
| BrainNetCNN | Augmented | **88.47 ± 2.63** | **94.71 ± 1.86** | **82.73 ± 1.63** | **89.85 ± 2.24** |

Figure 3: The lag filter module. Cross-correlations of time-windowed BOLD signals at window \(w\) are computed for delays \(\tau\in\{-m,-m+1,...,m-1,m\}\), where \(m\) defines the range. This computation is only performed for pairs of connected nodes (\(e_{i,j}=1\)). Afterwards, cross-correlation values \(\rho_{\cdot,\cdot,w,\cdot}\in\mathbb{R}^{E_{n}\times(2m+1)}\) are linearly transformed with a learnable filter \(\mathbf{P}_{LF}\in\mathbb{R}^{(2m+1)\times k}\) onto window-specific lag activations \(\mathbf{LAG}_{\cdot,w,\cdot}\in\mathbb{R}^{E_{n}\times k}\).

## 5 Results ### Comparative analysis GraphCorr was demonstrated on downstream classification models based on SAGE [48], GCN [29], BrainGNN [18], and BrainNetCNN [17]. A gender detection task was performed given fMRI scans of individual subjects. Performances of the vanilla and augmented versions of the downstream models on the HCP and ID1000 datasets are listed in Table 1 for the Schaefer atlas, and in Table 2 for the AAL atlas. In all examined cases, augmentation with GraphCorr significantly enhances the performance of downstream models (p\(<\)0.05, Wilcoxon signed-rank test). When ROIs are defined via the Schaefer atlas, GraphCorr enables (accuracy, ROC)% improvements of (14.37, 8.98)% for SAGE, (10.80, 8.52)% for GCN, (11.89, 14.12)% for BrainGNN, and (5.95, 3.48)% for BrainNetCNN on HCP; and it enables improvements of (19.31, 18.43)% for SAGE, (13.32, 16.12)% for GCN, (16.82, 22.47)% for BrainGNN, and (7.28, 6.20)% for BrainNetCNN on ID1000.
When ROIs are defined via the AAL atlas, GraphCorr enables improvements of (17.19, 15.54)% for SAGE, (14.46, 13.14)% for GCN, (14.91, 17.53)% for BrainGNN, and (15.83, 16.49)% for BrainNetCNN on HCP; and it enables improvements of (14.66, 17.69)% for SAGE, (13.98, 14.47)% for GCN, (12.84, 16.72)% for BrainGNN, and (3.98, 4.72)% for BrainNetCNN on ID1000. We observe that the vanilla versions of the relatively simple GNN models perform poorly against the more complex BrainNetCNN model. However, the GraphCorr-augmented versions of these GNN models start outperforming the augmented BrainNetCNN. Thus, our results suggest that the feature extraction capabilities of vanilla GNN models might be suboptimal in comparison to CNN-based architectures, albeit a powerful feature extractor on the input side can mitigate this deficit in favor of GNN models. ### Explanatory analysis To assess the influence of GraphCorr on interpretability, an explanatory analysis was conducted separately on the trained vanilla and augmented downstream models. For this analysis, the HCP dataset and the Schaefer atlas were selected, which have been broadly studied in the literature for intrinsic brain networks during resting state [74]. First, network saliency scores obtained in each hemisphere were compared between the vanilla and augmented versions of SAGE, which generally maintains the highest performance after GraphCorr augmentation. Literature reports that BOLD signals across the sensorimotor network (SMN), the default mode network (DMN) and the visual network bilaterally across the two hemispheres carry discriminative information on subject gender [75; 39]. Accordingly, we reasoned that a successful downstream classification model for gender detection should focus on these networks. Vanilla SAGE shows somewhat heterogeneous results with significant salience in the DMN in the left hemisphere (LH); the attention network, the SMN, and the visual network in the right hemisphere (RH); but unexpectedly it also yields strong salience in the limbic network in both hemispheres (\(p<0.05\), Wilcoxon signed-rank test), although this network is not considered to carry information on gender. In contrast, augmented SAGE shows significant salience across the SMN, the DMN and the attention network in the RH, and the visual network in both hemispheres (\(p<0.05\)), without any salience in the limbic network. These results imply that GraphCorr helps improve the interpretability of the downstream classification model by allowing it to focus on brain regions that carry task-relevant information. Significant saliency in a network indicates that BOLD signals across that network carry information about subject gender. Yet, it does not explain whether an increase or decrease in BOLD signals is evoked for individual genders. To address this question, a logistic regression analysis was conducted on BOLD signals extracted from the most important time window determined according to saliency scores. Specifically, a logistic regression model was fit to detect subject gender given important BOLD signals. Fig. 4 illustrates the ROI weights in the logistic model, where a positive weight indicates that elevated BOLD signals in the ROI are associated with female subjects, and a negative weight indicates that elevated BOLD signals in the ROI are associated with male subjects. Accordingly, ROI weights were inspected for the logistic regression analyses based on the augmented SAGE model.
In females, elevated BOLD signals are identified in RH parietal DMN areas including the posterior cingulate cortex (PCC), RH prefrontal DMN areas, LH prefrontal control areas, and LH-RH SMN areas. In males, elevated BOLD signals are identified in LH prefrontal DMN areas, LH-RH dorsal attention areas (DAN), LH parietal and prefrontal control areas, and RH parietal and extrastriate visual areas. These findings are consistent with evidence that females have relatively higher activations across DMN areas including PCC, and that males have relatively higher activations in visual and attentional areas [76; 77]. Our results are also consistent with recent studies suggesting that SMN and prefrontal regions show discriminative activation patterns across the two genders [39].

Table 2: Performance of downstream models on the HCP and ID1000 datasets with the AAL atlas. Results are listed as mean±std across test folds for vanilla and GraphCorr-augmented versions. Boldface indicates the better performing version of each model.

| Model | Version | HCP Acc (%) | HCP ROC (%) | ID1000 Acc (%) | ID1000 ROC (%) |
| --- | --- | --- | --- | --- | --- |
| SAGE | Vanilla | 68.26 ± 3.31 | 75.65 ± 1.29 | 62.84 ± 1.81 | 67.23 ± 2.71 |
| SAGE | Augmented | **85.45 ± 3.57** | **91.19 ± 2.63** | **77.50 ± 3.68** | **84.92 ± 1.92** |
| GCN | Vanilla | 69.90 ± 1.35 | 75.85 ± 1.06 | 65.45 ± 1.36 | 70.96 ± 1.23 |
| GCN | Augmented | **84.36 ± 3.17** | **88.99 ± 2.83** | **79.43 ± 3.45** | **85.43 ± 2.49** |
| BrainGNN | Vanilla | 65.69 ± 3.00 | 71.79 ± 2.97 | 62.27 ± 3.44 | 66.42 ± 4.24 |
| BrainGNN | Augmented | **80.60 ± 2.67** | **89.32 ± 2.67** | **75.11 ± 0.56** | **83.14 ± 1.41** |
| BrainNetCNN | Vanilla | 68.16 ± 3.53 | 74.76 ± 2.08 | 75.00 ± 2.19 | 81.44 ± 3.27 |
| BrainNetCNN | Augmented | **83.99 ± 2.92** | **91.25 ± 2.65** | **78.98 ± 1.83** | **86.16 ± 0.84** |

### Ablation studies Ablation studies were performed to assess the contribution of the individual design elements in GraphCorr to model performance. These analyses were conducted based on the SAGE model using the HCP dataset and the Schaefer atlas, i.e., the setting that yields the highest overall performance for gender detection. First, we assessed the contributions of the node embedder module, the lag filter module, and time windowing in GraphCorr. To ablate the node embedder module, node embeddings prior to message passing were initialized with the unlearned time-windowed FC matrix derived via conventional correlation measures on BOLD signals. To ablate the lag filter module, a single filter at zero lag was used within the module to consider only instantaneous correlations. To ablate time windowing, the entire fMRI time series was provided to GraphCorr with a single window of size equal to the scan duration. Table 3 lists performance metrics for the ablated variants of GraphCorr. We find that the node embedder module, the lag filter module and time windowing enable (accuracy, ROC)% improvements of \((5.31,3.2)\)%, \((0.65,0.17)\)%, and \((8.41,5.61)\)%, respectively.
Next, we assessed the benefits of GraphCorr over alternative plug-in approaches to improve the temporal sensitivity of the downstream model. In particular, we considered providing the downstream model pre-computed dynamic FC features across time windows via conventional correlation measures [66], an RNN model based on LSTM layers [78], and an RNN model based on GRU layers [79]. The feature dimensionality at the output of all plug-in models was identical to that of GraphCorr. Table 4 lists performance metrics for the different plug-in methods. GraphCorr outperforms all other plug-in methods, with (5.77,2.15)% higher performance than the top-contending GRU method. ## 6 Discussion Here we reported a novel plug-in GNN method, GraphCorr, to improve the performance of downstream classification models in fMRI analysis by capturing dynamic, lagged FC features of BOLD signals. Demonstrations were provided on two large-scale fMRI datasets, where substantially improved performance was achieved following model augmentation with GraphCorr. The proposed method can be trivially combined with classification models to detect other categorical variables related to cognitive task or disease [18; 17]. Alternatively, it can be employed as a plug-in to downstream regression models to boost sensitivity in predicting continuous variables related to stimulus or task features [1]. A mainstream approach in neuroimaging studies rests on prediction of experimental variables, typically related to stimulus or task, from BOLD signals [41; 51]. Here we adopted this approach to build decoding models that predict subject gender from fMRI scans. An alternative procedure to examine cortical function rests on encoding models that instead predict BOLD signals from experimental variables [45; 80; 81]. It may be possible to adopt GraphCorr to improve the sensitivity of such downstream encoding models. In this case, GraphCorr would receive as input the time course of experimental variables during an fMRI scan. In turn, it would learn dynamic, lagged correlations among experimental variables to better account for their distribution. The learned correlations might help improve the performance of downstream regression models that aim to predict measured BOLD signals. Future work is warranted to investigate the potential of GraphCorr in building encoding models for fMRI. In conjunction with downstream models, GraphCorr was directly trained end-to-end on the HCP or ID1000 datasets that contained data from several hundred subjects. While the lag filter module has low complexity, the node embedder module uses a transformer encoder with a relatively large number of parameters. To improve learning on limited datasets, transfer learning can be performed where the encoder is initialized with pre-trained weights [82]. Data augmentation procedures that can produce a large variety of realistic samples from a learned distribution might further facilitate learning [83; 84].
GraphCorr forms an initial graph where edges are retained in a single-hop neighborhood based on static FC values between corresponding nodes. This structure is kept fixed during subsequent training procedures. To improve performance, an adaptive structure can be used instead where the edge weights are taken as learnable parameters. \begin{table} \begin{tabular}{c c c} \hline \hline Plug-in & Accuracy (\%) & ROC (\%) \\ \hline Dynamic FC & \(81.06\pm 2.94\) & \(89.04\pm 1.53\) \\ LSTM & \(83.26\pm 2.05\) & \(90.29\pm 1.40\) \\ GRU & \(83.80\pm 2.29\) & \(92.12\pm 2.01\) \\ GraphCorr & \(\textbf{89.57}\pm 0.68\) & \(\textbf{94.27}\pm 1.98\) \\ \hline \hline \end{tabular} \end{table} Table 4: Performance of competing plug-in methods on the HCP dataset with the Schaefer atlas. Results are listed as mean\(\pm\)std across test folds for the downstream SAGE model. Boldface indicates top-performing plug-in. Figure 4: Salient ROIs for gender detection assessed via the logistic regression analysis. Results are shown for the GraphCorr-augmented SAGE model on the HCP dataset with the Schaefer atlas. ROIs with the top 2% saliency scores are marked. Red color indicates ROIs whose BOLD signals are elevated in female subjects, whereas blue color indicates ROIs whose BOLD signals are elevated in male subjects. \begin{table} \begin{tabular}{c c c c c} \hline \hline Node Embedder & Lag Filter & Windowing & Accuracy (\%) & ROC (\%) \\ \hline ✗ & ✗ & ✗ & \(75.20\pm 2.83\) & \(85.29\pm 1.67\) \\ ✓ & ✗ & ✗ & \(80.51\pm 1.92\) & \(88.49\pm 1.54\) \\ ✓ & ✓ & ✗ & \(81.16\pm 1.69\) & \(88.66\pm 2.07\) \\ ✓ & ✓ & ✓ & \(\textbf{89.57}\pm 0.68\) & \(\textbf{94.27}\pm 1.98\) \\ \hline \hline \end{tabular} \end{table} Table 3: Performance for ablated variants of GraphCorr on the HCP dataset with the Schaefer atlas. Results are listed as mean\(\pm\)std across test folds for the downstream SAGE model. Boldface indicates top-performing variant. Here each individual subject's fMRI scans were aligned to an anatomical template, and brain regions were then defined with guidance from a brain atlas. The mean BOLD signals in each ROI were then processed in downstream models. Benefits of this approach include computational efficiency due to relatively lower model complexity, and consistency in region definitions across subjects [85]. Meanwhile, information losses naturally occur during registration of individual-subject fMRI data onto a standardized template. To alleviate these losses, ROI definitions in the template space could instead be backprojected onto the brain spaces of individual subjects. This way ROI definitions can be performed while leaving fMRI data in its original space [51]. ## 7 Conclusion In this study, we introduced a novel plug-in graph neural network to improve the performance of downstream models for fMRI classification. The proposed GraphCorr method employs node embedder and lag filter modules to sensitively extract dynamic and lagged functional connectivity features from whole-brain fMRI time series. As such, it transforms raw BOLD signals into a graph representation where neighboring nodes are taken as brain regions with correlated signals and node features are extracted via message passing on connectivity features from the two modules. This procedure restores the fine-grained temporal information that can otherwise be diminished in conventional functional connectivity features.
As augmenting downstream classification models with GraphCorr significantly improves their performance and interpretability, GraphCorr holds great promise for analysis of fMRI time series. ## Acknowledgments This study was supported in part by a TUBITAK BIDEB scholarship awarded to H.A. Bedel, by TUBA GEBIP 2015 fellowship, and TUBITAK 121N029 grant awarded to T. Cukur.
2306.01958
A Survey on Explainability of Graph Neural Networks
Graph neural networks (GNNs) are powerful graph-based deep-learning models that have gained significant attention and demonstrated remarkable performance in various domains, including natural language processing, drug discovery, and recommendation systems. However, combining feature information and combinatorial graph structures has led to complex non-linear GNN models. Consequently, this has increased the challenges of understanding the workings of GNNs and the underlying reasons behind their predictions. To address this, numerous explainability methods have been proposed to shed light on the inner mechanism of the GNNs. Explainable GNNs improve their security and enhance trust in their recommendations. This survey aims to provide a comprehensive overview of the existing explainability techniques for GNNs. We create a novel taxonomy and hierarchy to categorize these methods based on their objective and methodology. We also discuss the strengths, limitations, and application scenarios of each category. Furthermore, we highlight the key evaluation metrics and datasets commonly used to assess the explainability of GNNs. This survey aims to assist researchers and practitioners in understanding the existing landscape of explainability methods, identifying gaps, and fostering further advancements in interpretable graph-based machine learning.
Jaykumar Kakkad, Jaspal Jannu, Kartik Sharma, Charu Aggarwal, Sourav Medya
2023-06-02T23:36:49Z
http://arxiv.org/abs/2306.01958v1
# A Survey on Explainability of Graph Neural Networks ###### Abstract Graph neural networks (GNNs) are powerful graph-based deep-learning models that have gained significant attention and demonstrated remarkable performance in various domains, including natural language processing, drug discovery, and recommendation systems. However, combining feature information and combinatorial graph structures has led to complex non-linear GNN models. Consequently, this has increased the challenges of understanding the workings of GNNs and the underlying reasons behind their predictions. To address this, numerous explainability methods have been proposed to shed light on the inner mechanism of the GNNs. Explainable GNNs improve their security and enhance trust in their recommendations. This survey aims to provide a comprehensive overview of the existing explainability techniques for GNNs. We create a novel taxonomy and hierarchy to categorize these methods based on their objective and methodology. We also discuss the strengths, limitations, and application scenarios of each category. Furthermore, we highlight the key evaluation metrics and datasets commonly used to assess the explainability of GNNs. This survey aims to assist researchers and practitioners in understanding the existing landscape of explainability methods, identifying gaps, and fostering further advancements in interpretable graph-based machine learning. ## 1 Introduction Recent years have seen a tremendous rise in the use of Graph Neural Networks (GNNs) for real-world applications, ranging from healthcare [129; 130] and drug design [111; 52; 91] to recommender systems [9] and fraud detection [79]. Predictions made in these domains have a substantial impact and therefore need to be highly trustworthy. In the realm of deep learning, one effective approach to enhance trust in these predictions is to provide an explanation supporting them [87]. These explanations elucidate the model's predictions for human understanding and can be generated through various methods. For instance, they may involve identifying important substructures within the input data [56; 82; 122], providing additional examples from the training data [12], or constructing counterfactual examples by perturbing the input to produce a different prediction outcome [55; 93; 9]. The interpretability of deep learning models is influenced by the characteristics of the input domain, as the content and the complexity of explanations can vary depending on the inputs. When it comes to explaining predictions made by graph neural networks (GNNs), several challenges arise. First, since graphs are combinatorial data structures, finding important substructures by evaluating different combinations that maximize a certain prediction becomes difficult. Second, attributed graphs contain both node attributes and edge connectivity, which can influence the predictions, and they should be considered together in explanations. Third, explanations must be adaptable to different existing GNN architectures. Lastly, explanations for local tasks (e.g., node or edge level) may differ from those for global tasks (e.g., graph level). Due to these challenges, explaining graph neural networks is non-trivial and a large variety of methods have been proposed in the literature to tackle it [115; 69; 107; 77; 5; 97; 56; 119; 93; 4; 73].
With the increasing use of GNNs in critical applications such as healthcare and recommender systems, and the consequent rise in their explainability methods, we provide an updated survey of the explainability of GNNs. Additionally, we propose a novel taxonomy that categorizes the explainability methods for GNNs, providing a comprehensive overview of the field. Existing surveys on GNN explainability predominantly focus on either factual methods [120; 13; 47] or counterfactual methods [26] but not both. These surveys thus lack a comprehensive overview of the different methods in the literature, either limiting the discussion to specific methods [120; 47] or only discussing them under the broad umbrella of trustworthy GNNs [13; 104]. Our survey aims to bridge this gap by providing a comprehensive and detailed summary of existing explainability methods for GNNs. We include both factual as well as counterfactual methods of explainability in GNNs. To enhance clarity and organization, we introduce a novel taxonomy to categorize these methods for a more systematic understanding of their nuances and characteristics. ### Graph Neural Networks Graph neural networks (GNNs) have been used to learn powerful representations of graphs [42; 96; 28]. Consider a graph \(G\) given by \(G=(V,E)\), where \(V\) denotes the set of \(n\) nodes and \(E\) denotes the set of \(m\) edges. We can create an adjacency matrix \(\mathbf{A}\in[0,1]^{n\times n}\) such that \(A_{ij}=1\) if \((i,j)\in E\) and \(0\) otherwise. Each node may have attributes, given by the matrix \(\mathbf{X}\in\mathbb{R}^{n\times F}\) such that each row \(i\) stores the \(F\)-dimensional attribute vector for node \(i\). A GNN model \(\mathcal{M}\) embeds each node \(v\in V\) into a low-dimensional space \(\mathbf{Z}\in\mathbb{R}^{n\times d}\) by following this message passing rule for \(k\) steps as \[\mathbf{Z}_{v}^{(k+1)}=\textsc{Update}_{\Phi}(\mathbf{Z}_{v}^{(k)},\textsc{ Agg}(\{\textsc{Msg}_{\Theta}(\mathbf{Z}_{v}^{(k)},\mathbf{Z}_{u}^{(k)}):(u,v)\in E \})), \tag{1}\] such that \(\mathbf{Z}^{(0)}=\mathbf{X}\) and \(\mathbf{Z}:=\mathbf{Z}^{(k)}\). Different instances of the update \(\textsc{Update}_{\Phi}\), aggregation \(\textsc{Agg}\) and message generator \(\textsc{Msg}_{\Theta}\) functions give rise to different GNN architectures. For example, GCN [42] has an identity message, a mean aggregation, and weighted update functions, while GAT [96] learns an attention-based message generation instead. These embeddings are trained for a specific task \(\mathcal{T}\) that can be either supervised (e.g., node classification, graph classification, etc.) or unsupervised (e.g., self-supervised link prediction, clustering, etc.). ### Explainability in ML **Problem 1** (Explainability [8]): _Consider a supervised task \(\mathcal{T}\) with the aim of learning a mapping from \(\mathcal{X}\) to \(\mathcal{Y}\), and a model \(\mathcal{M}\) trained for this task. Given a set of \((\mathbf{x},y)\) pairs \(\subseteq(\mathcal{X},\mathcal{Y})\) and the model \(\mathcal{M}\), generate an explanation \(\mathbf{e}\) from a given set \(\mathcal{D}_{E}\) such that \(\mathbf{e}\) "explains" the prediction \(\hat{y}=\mathcal{M}(\mathbf{x})\)._ These explanations can be either _local_ to a single test input \((\mathbf{x},y)\) or _global_ when they explain predictions over a specific dataset \(\mathcal{D}^{\prime}\subseteq(\mathcal{X},\mathcal{Y})\).
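To make Eq. (1) concrete, below is a minimal sketch of one message-passing step with an identity message, mean aggregation, and a linear-plus-ReLU update; these specific choices and all shapes are illustrative assumptions (GCN-like), not a particular architecture from the survey.

```python
# Hedged sketch of the message-passing rule in Eq. (1).
import torch

def message_passing_step(Z, edges, W):
    """Z: (n, d) node embeddings Z^(k); edges: list of (u, v) pairs; W: (2d, d)."""
    n, d = Z.shape
    agg = torch.zeros_like(Z)
    deg = torch.zeros(n, 1)
    for u, v in edges:                 # Msg: identity message from u to v
        agg[v] += Z[u]
        deg[v] += 1
    agg = agg / deg.clamp(min=1)       # Agg: mean over incoming neighbors
    return torch.relu(torch.cat([Z, agg], dim=1) @ W)  # Update

Z = torch.randn(4, 8)                  # Z^(0) = X for a toy 4-node graph
W = torch.randn(16, 8)
Z1 = message_passing_step(Z, [(0, 1), (1, 2), (2, 3)], W)
```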
Further, the explanation can be generated either _post-hoc_ (_i.e._, after the model training) or _ante-hoc_ where the model itself is _self-interpretable_, _i.e._, it explains its predictions. With some exceptions, post-hoc explanations usually assume black-box access to the model while self-interpretable methods update the model architecture and/or training itself. We can further differentiate the explanation methods based on their content, _i.e._, the explanation set \(\mathcal{D}_{E}\). _Local explanations_ only consider the local neighborhood of the given data instance while _global explanations_ are concerned about the model's overall behavior and thus search for patterns in the model's predictions. On the other hand, explanations can also be _counterfactual_, where the aim is to explain a prediction by providing a contrasting example that changes it. ## 2 Overview With the widespread adoption of GNNs across various applications, the demand for explaining their predictions has grown substantially. Moreover, the GNN-based models are becoming more complex [62]. Recently, the community has witnessed a surge in efforts dedicated to the explainability of GNNs. These methods exhibit variations in terms of explanation types, utilization of model information, and training procedures, among other factors. We organize and categorize these methods to develop a deeper understanding of the existing works and provide a broad picture of their applicability in different scenarios. **Main Schema: Factual and Counterfactual Methods.** Figure 1 provides an overview of the broad categorization of the existing works. Based on the type of explanations, we first make two broad categories: (1) Factual and (2) Counterfactual. Factual methods aim to find an explanation in the form of input features with the maximum influence over the prediction. These explanations can be a set of either node features or a substructure (set of nodes/edges) or both. On the other hand, counterfactual methods provide an explanation by finding the smallest change in the input graph that changes the model's prediction. Hence, counterfactual explanations can be used to find a set of similar features that can alter the prediction of the model. **Organization.** In the following sections, we describe each category in detail and provide a summary of various explainability methods in each category. In Sec. 3, we describe the factual approaches which are further classified into self-interpretable and post-hoc categories. In Sec. 4, the counterfactual methods are categorized into perturbation-based, neural network-based and search-based methods. Sec. 5 presents three special categories of explainers: temporal, global, and causality-based. In Sec. 6, we overview the explainer methods that are relevant for specific applications in different domains such as social networks, biology, and computer security. Lastly, we review widely used datasets in Sec. 7 and evaluation metrics in Sec. 8. ## 3 Factual We classify the factual explainer methods broadly into two categories based on the nature of the integration of the explainability architecture with the main model as follows. Figure 1: **Overview of the Schema.** (1) Factual.
Information constraints: GIB [118], VGIB [116], GSAT [69], LRI [70]; Structural constraints: DIR [107], ProtGNN [125], SEGNN [12], KER-GNN [21]; Decomposition: CAM [77], Excitation-BP [77], DEGREE [22], GNN-LRP [84]; Gradient-based: SA [5], Guided-BP [5], Grad-CAM [77]; Surrogate: PGM-Explainer [97], GraphLime [34], GraphSVX [17], RelEx [124], DnX [75]; Perturbation-based: GNNExplainer [115], GraphMask [82], PGExplainer [56], ReFine [100], ZORRO [23], SubgraphX [122], GStarX [123]; Generation: XGNN [119], RG-Explainer [85], GNNInterpreter [101], GFlowExplainer [46], GEM [49]; **(2) Counterfactual.** Search-based: MMACE [102], MEG [73]; Neural network-based: RCExplainer [4], CLEAR [57]; Perturbation-based: GREASE [9], CF2 [93], CF-GNNExplainer [55] * **Post-hoc:** Post-hoc methods do not have the explainable architecture inbuilt into the model to attribute a model's prediction to the input. As seen in Fig. 2(a), the explainability architecture (EA) is separated from the model, which is pre-trained with fixed weights. For any instance \(G\), post-hoc methods generate an explanation using the model's input \(\mathcal{D}\), output \(Y\) and sometimes even internal parameters of the model. Note that different EAs use different inputs \(\mathcal{D}\) that are fed to the model. Post-hoc methods might not always be accurate as they may end up extracting features that are spuriously correlated with the task [125; 81; 69; 45]. * **Self-interpretable:** Contrary to post-hoc methods, self-interpretable methods design the explainability architecture directly inside the model. As seen in Fig. 2(a), these methods usually have two modules. The subgraph extraction module (the function \(g\)) uses constraints to find an informative subgraph \(G_{s}\) from the input graph \(G\). Then, the prediction module \(f\) uses \(G_{s}\) to predict the label \(Y\). \(G_{s}\) also acts as an explanation. Both modules are trained together with an objective \(L(f\circ g(G),Y)\) to minimize the loss between the prediction \(f\circ g(G)\) and the label \(Y\). One major drawback of self-interpretable models is that good interpretability often comes at the cost of prediction accuracy [69]. ### Post-hoc We divide the post-hoc methods based on the approaches used to find explanations into the following categories: a) Decomposition-based methods (Sec. 3.1.1), b) Gradient-based methods (Sec. 3.1.2), c) Surrogate methods (Sec. 3.1.3), d) Perturbation-based methods (Sec. 3.1.4), e) Generation-based methods (Sec. 3.1.5). The post-hoc methods can also be categorized based on their requirement to access the internal parameters of the model. As seen in Fig. 2(b), this division results in the following categories of methods: **white-box** and **black-box**. **White-box**: These methods require access to internal model parameters or embeddings to provide explanations. For instance, all decomposition-based methods (Sec. 3.1.1) require model parameters such as node weights of each layer to compute an importance score of different parts of the input. Even gradient-based methods (Sec. 3.1.2) require access to the gradients. Thus, all methods in these categories are considered white-box methods and they are not suitable in cases where a model's internal parameters are inaccessible. **Black-box**: Contrary to the white-box methods, black-box methods do not require access to the model's internal parameters. For instance, all approaches in the category of surrogate methods (Sec.
3.1.3) generate a local dataset using the model's input and output. Since these methods do not require access to the model parameters, all of them can be categorized as black-box methods. Figure 2: (a) **Self-interpretable and post-hoc architectures:** In self-interpretable methods, the subgraph extraction module \(g\) uses constraints to find an informative subgraph \(G_{s}\) from the input graph \(G\). The prediction module \(f\) uses this \(G_{s}\) to predict the label \(Y\). In contrast, post-hoc methods consider the model as pre-trained with fixed weights. For any instance \(G\), post-hoc methods generate an explanation using the model's input \(\mathcal{D}\), output \(Y\) and in some cases the model's internal parameters. (b) **White-box and black-box post-hoc methods:** Methods are shown in the individual categories. Decomposition-based: CAM [77], Excitation-BP [77], DEGREE [22], GNN-LRP [84]; Gradient-based: SA [5], Guided-BP [5], Grad-CAM [77]; Surrogate: PGM-Explainer [97], GraphLime [34], GraphSVX [17], RelEx [124], DnX [75]; Perturbation-based: GNNExplainer [115], GraphMask [82], PGExplainer [56], ReFine [100], ZORRO [23], SubgraphX [122], GStarX [123]; Generation-based: XGNN [119], RG-Explainer [85], GNNInterpreter [101], GFlowExplainer [46], GEM [49]. #### 3.1.1 Decomposition-based Methods These methods consider the prediction of the model as a score that is decomposed and distributed backwards in a layer-by-layer fashion till it reaches the input. The score of different parts of the input can be construed as its importance to the prediction. However, the decomposition technique can vary across methods. They also require internal parameters of the model to calculate the score. Hence, these explanation methods are considered white-box methods. Table 1 provides a summary of these methods. One of the decomposition-based methods, **CAM**[77] aims at constructing the explanation of GNNs that have a Global Average Pooling (GAP) layer and a fully connected layer as the final classifier. Let \(e_{n}\) be the final embedding of node \(n\) just before the GAP layer and \(w^{c}\) be the weight vector of the classifier for the class \(C\). The importance score of the node \(n\) is computed as \((w^{c})^{T}e_{n}\). This means that the node's contribution to the class score \(y^{c}\) is taken as the importance. It is clear that this method is restricted to GNNs that have a GAP layer and perform only the graph classification task. Another method, **Excitation-BP**[77] considers that the final probability of the prediction can be decomposed into excitations from different neurons. The output of a neuron can be intuitively understood as the weighted sum of excitations from the connected neurons in the previous layer combined with a non-linear function, where the weights are the usual neural network parameters. With this, the output probability can be distributed to the neurons in the previous layer according to the ratios of these weights. Finally, the importance of a node is obtained by combining the excitations of all the feature maps of that node. Contrary to other methods, **DEGREE**[22] finds explanations in the form of subgraph structures. First, it decomposes the message passing feed-forward propagation mechanism of the GNN to find a contribution score of a group of target nodes. Next, it uses an agglomeration algorithm that greedily finds the most influential subgraph as the explanation.
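For illustration, the CAM score \((w^{c})^{T}e_{n}\) described above amounts to a single matrix-vector product per class; the tensor shapes below are assumptions of this sketch.

```python
# Hedged sketch of CAM node importance from the final node embeddings
# (before GAP) and the linear classifier weights.
import torch

def cam_node_importance(node_emb, classifier_weight, target_class):
    """node_emb: (n, d) final node embeddings e_n;
    classifier_weight: (num_classes, d) rows w^c of the classifier."""
    w_c = classifier_weight[target_class]   # (d,) weight vector for class C
    return node_emb @ w_c                   # (n,) importance per node

scores = cam_node_importance(torch.randn(10, 32), torch.randn(2, 32), 1)
```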
**GNN-LRP**[84] is based on the concept that the function modeled by the GNN is a polynomial function in the vicinity of a specific input. The prediction score is decomposed by approximating the higher-order Taylor expansion using layer-wise relevance propagation [3]. This differs from other decomposition-based methods not only in the decomposition technique but also in the score attribution. While other methods attribute scores to nodes or edges, GNN-LRP attributes scores to walks, i.e., collections of edges. #### 3.1.2 Gradient-based Methods The gradient-based explainer methods follow one key idea: gradients represent the rate of change, and the gradient of the prediction with respect to the input represents how sensitive the prediction is to the input. This sensitivity is seen as a measure of importance. We provide a summary of these methods in Table 2. **Sensitivity Analysis (SA)**[5] is one of the earlier methods to use gradients to explain GNNs. Let \(x\) be an input, which can be a node or an edge feature vector, \(SA(x)\) be its importance, and \(\phi\) be the GNN model; then the importance is computed as \(SA(x)\propto||\nabla_{x}\phi(x)||^{2}\). The intuition behind this method is based on the aforementioned sensitivity to the input. **Guided Backpropagation (Guided-BP)**[5], a slightly modified version of SA, follows a similar idea except that the negative gradients are clipped to zero during the backpropagation. This is done to preserve only inputs that have an excitatory effect on the output. Intuitively, since positive and negative gradients have opposing effects on the output, using both of them could result in less accurate explanations. \begin{table} \begin{tabular}{c c c c} \hline \hline **Method** & **Parameters** & **Form of Explanation** & **Task** \\ \hline CAM [77] & Node embedding of last layer and MLP weights & Node importance & Graph Classification \\ \hline Excitation-BP [77] & Weights of all GNN layers & Node importance & Node and Graph Classification \\ \hline DEGREE [22] & Decomposed messages and aggregations & Subgraphs & Node and Graph Classification \\ \hline GNN-LRP [84] & Weights of all layers & Collection of edges & Node and Graph Classification \\ \hline \hline \end{tabular} \end{table} Table 1: Key highlights of _decomposition-based_ methods The method **Grad-CAM**[77] builds upon **CAM**[77] (see Section 3.1.1), and uses gradients with respect to the final node embeddings to compute the importance scores. The importance score is \((\frac{1}{N}\sum_{n=1}^{N}\nabla_{e_{n}}(y^{c}))^{T}e_{n}\), where \(e_{n}\) is the final embedding of node \(n\) just before the GAP layer, \(w^{c}\) is the weight vector of the classifier for class \(C\), \(g=\frac{1}{N}\sum_{n=1}^{N}e_{n}\) is the vector after the GAP layer, and the final class score \(y^{c}\) of \(C\) is \((w^{c})^{T}g\). This removes the restriction about the necessity of a GAP layer. This equation shows that the importance of each node is computed as the weighted sum of the feature maps of the node embeddings, where the weights are the gradients of the output with respect to the feature maps.
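For concreteness, the squared-gradient score \(SA(x)\propto||\nabla_{x}\phi(x)||^{2}\) introduced above can be computed via automatic differentiation; the toy differentiable model standing in for a trained GNN \(\phi\) is an assumption of this sketch.

```python
# Hedged sketch of Sensitivity Analysis over node features.
import torch

X = torch.randn(10, 16, requires_grad=True)   # node feature vectors
W = torch.randn(16, 1)
phi = torch.sigmoid((X @ W).mean())           # toy scalar prediction phi(X)
phi.backward()                                # populates X.grad
node_importance = X.grad.norm(dim=1) ** 2     # squared gradient norm per node
```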
All the above methods depend on the intuition that gradients can be good indicators of importance. However, this might not hold in many settings: gradients indicate sensitivity, which does not always reflect importance accurately. Moreover, saturation regions, where the prediction of the model does not change significantly with the input, can be seen as another issue in SA and Guided-BP. #### 3.1.3 Surrogate Methods Within a large range of input values, the relationship between input and output can be complex. Hence, we need complex functions to model this relationship, and the corresponding model might not be interpretable. However, in a smaller range of input values, the relationship between input and output can be approximated by simpler and interpretable functions. This intuition leads to _surrogate methods_ that fit a simple and interpretable surrogate model in the locality of the prediction. Table 3 shows different locality-based data extraction techniques and surrogate models used by surrogate methods. This surrogate model can then be used to generate explanations. As seen in Fig. 3(a), these methods adopt a two-step approach. Given an instance \(G\), they first generate data from the prediction's neighborhood by utilizing multiple inputs \(D\) within the vicinity and recording the model's prediction \(Y\). Subsequently, a surrogate model is employed to train on this data. The explanation \(E\) provided by the surrogate model serves as an explanation for the original prediction. **PGMExplainer**[97] constructs a Bayesian network to explain the prediction. First, it creates a tabular dataset by random perturbations on node features of multiple nodes of the computational graph and records their influence on the prediction. A grow-shrink algorithm is used to select the top influential nodes. Using structure learning, a Bayesian network is learnt that optimizes the Bayesian Information Criterion (BIC) score, and the DAG of conditional dependencies acts as the explanation. In **GraphLime**[34], the local explanations are based on the Hilbert-Schmidt Independence Criterion Lasso (HSIC Lasso) model, which is a kernel-based nonlinear interpretable feature selection algorithm. This method assumes that the node features in the original graph are easily interpretable. The HSIC model takes a node and its N-hop neighbourhood (for some \(N\)), and selects a subset of node features that are the most influential to the prediction. These selected features act as the explanation. To construct a local dataset, **GraphSVX**[17] uses a mask generator to jointly perturb the nodes and the features and observes the effects on the predictions. The mask generator isolates the masked nodes and replaces masked features by their expected values. It then fits a weighted linear regression model (WLR) on the local dataset. The coefficients of the WLR act as explanations. The next two approaches use GNN-based models as surrogate models. **RelEx**[124] uses a BFS-based sampling strategy to select nodes and then perturbs them to create the local dataset. Then, a GCN model with residual connections is used to fit this dataset. In contrast to other methods in this category, the surrogate model of RelEx is not interpretable. Hence, it uses a perturbation-based strategy to find a mask that acts as the explanation. We note that the surrogate model is more complex compared to other methods and it requires the use of another explanation method to derive explanations from the surrogate model.
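To make the surrogate recipe concrete, here is a hedged sketch in the spirit of GraphSVX's weighted linear regression: a local dataset is generated by perturbing one instance, the black-box predictions are recorded, and an interpretable weighted linear model is fit whose coefficients serve as the explanation. The Gaussian perturbation scheme, the locality weights, and the stand-in black box are all assumptions of this sketch.

```python
# Hedged sketch of a surrogate explainer (local dataset + weighted linear fit).
import numpy as np

def surrogate_explain(model, x, n_samples=500, sigma=0.1):
    """model: black-box f(x) -> score; x: (d,) feature vector of one node."""
    Xs = x + sigma * np.random.randn(n_samples, len(x))   # local dataset D
    ys = np.array([model(xi) for xi in Xs])               # recorded outputs Y
    w = np.exp(-np.linalg.norm(Xs - x, axis=1) ** 2)      # locality weights
    A = Xs * w[:, None]
    coef, *_ = np.linalg.lstsq(A, ys * w, rcond=None)     # weighted LS fit
    return coef                                           # feature importances

f = lambda xi: float(xi[0] - 0.5 * xi[3])                 # stand-in black box
print(surrogate_explain(f, np.zeros(8)).round(2))
```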
**Distill n' Explain (DnX)**[75] first learns a surrogate GNN via knowledge distillation and then provides an explanation by solving a simple convex program. In contrast to RelEx, DnX uses a simpler surrogate model, a linear architecture termed Simplified Graph Convolution (SGC) [105]. SGC does not have any non-linear activation layers and uses a single parameter matrix across layers. The parameters of SGC are learned via knowledge distillation with the objective of minimizing the KL divergence between the predictions of SGC and the model. Furthermore, explanations can be derived from SGC by solving a simple convex program. \begin{table} \begin{tabular}{c c c c c} \hline \hline **Method** & **Explanation Type** & **Task** & **Explanation Target** & **Datasets Evaluated** \\ \hline SA [5] & Instance level & Graph classification, Node classification & Nodes, node features, edges, edge features & Infection, ESOL [15] \\ \hline Guided-BP [5] & Instance level & Graph classification, Node classification & Nodes, node features, edges, edge features & Infection, ESOL [15] \\ \hline Grad-CAM [77] & Instance level & Node classification & Nodes, node features & BBBP, BACE, TOX21 [40] \\ \hline \hline \end{tabular} \end{table} Table 2: Key highlights of _gradient-based_ methods #### 3.1.4 Perturbation-based Methods These methods find _important subgraphs_ as explanations by perturbing the input. Fig. 3(b) presents two key modules of these methods: the subgraph extraction module and the scoring function module. For an input \(G\), the subgraph extraction module extracts a subgraph \(G_{s}\). The model predictions \(Y_{s}\) for subgraphs are scored against the actual predictions \(Y\) using a scoring function. The feedback from the scoring function can be used to train the subgraph extraction module. In some cases, model parameters are also used as the training input for the subgraph extraction module. These methods provide explanations \(E\) in the form of a subgraph structure, and some also provide node features as explanations. Table 4 presents a summary of these methods. **GNNExplainer**[115] is one of the initial efforts towards the explainability of GNNs. It identifies an explanation in the form of a subgraph, including a subset of node features, that has the maximum influence on the prediction. It learns continuous masks for both the adjacency matrix and the features by optimizing the cross entropy between the class label and the model prediction on the masked subgraph.
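A minimal sketch of this GNNExplainer-style mask learning follows: a continuous (sigmoid-relaxed) edge mask is optimized so that the masked graph preserves the model's prediction, with a sparsity penalty encouraging a small subgraph. The model interface, step count, and regularization weight are assumptions of this sketch.

```python
# Hedged sketch of continuous edge-mask optimization for explanation.
import torch

def explain(gnn, A, X, target, steps=200, lam=0.05):
    """gnn(A, X) -> (C,) class logits; A: (n, n) adjacency; X: (n, F) features."""
    m = torch.randn_like(A, requires_grad=True)           # mask logits
    opt = torch.optim.Adam([m], lr=0.1)
    for _ in range(steps):
        mask = torch.sigmoid(m)                           # relaxed edge mask
        logits = gnn(A * mask, X)                         # prediction on masked graph
        loss = torch.nn.functional.cross_entropy(
            logits.unsqueeze(0), torch.tensor([target]))
        loss = loss + lam * mask.sum()                    # sparsity regularizer
        opt.zero_grad(); loss.backward(); opt.step()
    return torch.sigmoid(m).detach()                      # soft edge importance

# usage: e.g. gnn = lambda A, X: (A @ X).mean(0) as a toy differentiable model
```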
In a follow-up work, **PGExplainer**[56] extends the idea in GNNExplainer by assuming the graph to be a random Gilbert graph, where the probability distribution of edges is conditionally independent. The distribution of each edge is independently modeled as a Bernoulli distribution, i.e., each edge has a different parametric distribution. These parameters are modeled by a neural network (MLP), and the parameters of this MLP are computed by optimizing the mutual information between the explanation subgraph and the predictions of the underlying GNN model. \begin{table} \begin{tabular}{c c c c} \hline \hline **Method** & **Local Dataset Extraction** & **Surrogate Model** & **Explanation** \\ \hline GraphLime [34] & N-hop neighbor nodes & HSIC Lasso & Weights of the model \\ \hline PGMExplainer [97] & Random node perturbations & Bayesian network & DAG of conditional dependence \\ \hline RelEx [124] & Random sampling & GCN & Perturbation-based method \\ \hline DnX [75] & Entire dataset & Knowledge-distilled SGC [105] & Convex programming, decomposition \\ \hline GraphSVX [17] & Input perturbations & Weighted Linear Regression (WLR) & Weights of WLR \\ \hline \hline \end{tabular} \end{table} Table 3: Key highlights of _surrogate methods_ Figure 3: **a) Surrogate:** These methods follow a two-step process. For any instance \(G\), they generate data from the neighbourhood of the prediction by using multiple inputs \(D\) in the locality and recording the model prediction \(Y\). Then a surrogate model is used to fit this data. The explanation \(E\) for the surrogate model is the explanation for the prediction. **b) Perturbation-based:** They have two key modules: a subgraph extraction architecture and a scoring function. For an input \(G\), the subgraph extraction module extracts a subgraph \(G_{s}\). The model predictions \(Y_{s}\) for subgraph \(G_{s}\) are scored against the actual predictions \(Y\) using a scoring function. The feedback from the scoring function can be used to train the subgraph extraction module. Sometimes model parameters are also used as the training input to the subgraph extraction module. The optimal subgraph \(G_{s}^{*}\) acts as the final explanation \(E\). Another masking-related method, **GraphMask**[82] provides an explanation by learning a parameterized edge mask that predicts the edge to drop at every layer. A single-layer MLP classifier is trained to predict the edges that can be dropped. To keep the topology unaffected, these edges are not dropped but are replaced by a learned baseline vector. The training objective is to minimize the \(L_{0}\) norm, i.e., the total number of edges not masked, such that the prediction output remains within a tolerance level. To make the objective differentiable, it uses sparse relaxations through the reparameterization trick and the hard concrete distribution [59, 36]. Another approach, **Zorro**[23] finds explanations in the form of important nodes and features that maximize _Fidelity_ (see Sec. 8). It uses a greedy approach that selects the node and the feature with the highest fidelity score at each step. Fidelity is computed as the expected validity of the perturbed input. The approach uses a discrete mask for selecting a subgraph without any backpropagation. A two-stage approach, **ReFine**[100] consists of an edge attribution (pre-training) step and an edge selection (fine-tuning) step. During pre-training, a GNN and an MLP are trained to find the edge probabilities for the entire class by maximizing mutual information and contrastive loss between classes.
During the fine-tuning step, the edge probabilities from the previous stage are used to sample edges and find an explanation that maximizes the mutual information for a specific instance. The next two approaches use cooperative game theoretic techniques. **SubgraphX**[122] applies the Monte Carlo tree search technique for subgraph exploration and uses the Shapley value [86] to measure the importance of the subgraphs. For the search algorithm, the child nodes are obtained by pruning the parent graph. In computing the Shapley values, Monte Carlo sampling helps to find a coalition set, and the prediction from the GNN is used as the pay-off in the game. In a subsequent work, **GStarX**[123] uses a different technique from cooperative game theory, known as the HN value [27], to compute importance scores of a node for both graph and node classification tasks. In contrast to the Shapley value, the HN value is a structure-aware metric. Since computing the HN values is expensive, Monte Carlo sampling is used for large graphs. The nodes with the top-\(k\) highest HN values act as an explanation. \begin{table} \begin{tabular}{c c c c c} \hline \hline **Method** & **Subgraph Extraction Strategy** & **Scoring Function** & **Constraints** & **Explanation** \\ \hline GNNExplainer [115] & Continuous relaxation & MI & Size & Yes \\ SubgraphX [122] & Monte Carlo Tree Search & SV & Size, connectivity & No \\ GraphMask [82] & Layer-wise parameterized edge selection & \(L_{0}\) norm & Prediction divergence & No \\ PGExplainer [56] & Parameterized edge selection & MI & Size and/or connectivity & No \\ Zorro [23] & Greedy selection & Fidelity & Threshold fidelity & Yes \\ ReFine [100] & Parameterized edge attribution & MI & Number of edges & No \\ GStarX [123] & Monte Carlo sampling & HN value & Size & No \\ \hline \hline \end{tabular} \end{table} Table 4: Key highlights of the _perturbation-based_ methods. Note that MI is mutual information, SV is Shapley value, and Explanation denotes node feature explanation. #### 3.1.5 Generation-based methods Generation-based approaches either use generative models or graph generators to derive instance-level or model-level explanations. Furthermore, to ensure the validity of the generated graphs, different approaches have been proposed. Table 5 provides a summary of the generation-based methods. \begin{table} \begin{tabular}{c c c c c} \hline \hline **Method** & **Explanation Type** & **Optimization** & **Constraints** & **Task** \\ \hline XGNN [119] & Model level & RL-policy gradient & Domain-specific rules & Graph classification \\ RG-Explainer [85] & Instance level & RL-policy gradient & Size, radius, similarity & Node \& Graph classification \\ GFlowExplainer [46] & Instance level & TD flow matching & Connectivity/ cut vertex & Node \& Graph classification \\ GNNInterpreter [101] & Model level & Continuous relaxation & Similarity to mean & Graph classification \\ GEM [49] & Instance level & Autoencoder & Graph validity rules & Node \& Graph classification \\ \hline \hline \end{tabular} \end{table} Table 5: Key highlights of the _generation-based_ methods **XGNN**[119] provides model-level explanations by generating key subgraph patterns to maximize the prediction for a certain class. The subgraph is generated using a REINFORCE graph generator which is optimized using policy gradient. In the setup for the RL agent, the previous graph is the state; adding an edge is an action; and the model prediction along with the validity rules acts as the reward.
Unsurprisingly, the validity rules are specified based on domain knowledge. Another RL-based method, **RG-Explainer**[85] formulates the underlying problem as combinatorial optimization instead of using continuous relaxation or search methods to find the subgraph. A starting point is selected using an MLP, which acts as an input to the graph generator. The graph generator is an RL agent that optimizes the policy using policy gradient, with the subgraph as the state, adding neighboring nodes as the action, and a function of the cross-entropy loss as the reward. A non-RL method, **GNNInterpreter**[101] is a generative model-level explanation method for the graph classification task. Its objective is to maximize the likelihood of predicting the explanation graph correctly for a given class. The similarity between the explanation graph embedding and the mean embedding of all graphs acts as an optimization constraint. Intuitively, this ensures that the explanation graph stays closer to the domain and is meaningful. Since the adjacency matrix and sometimes even the features can be categorical, GNNInterpreter uses the Gumbel softmax method [36] to enable backpropagation of gradients. Contrary to XGNN with its domain-specific hand-crafted rules, GNNInterpreter uses numerical optimization and does not need any domain knowledge. **GFlowExplainer**[46] uses GFlowNets as the generative component. The objective is to construct a TD-like flow matching condition [6] to learn a policy that generates a subgraph by sequentially adding neighbors (nodes), such that the probability of a subgraph for a class is proportional to the mutual information between the label and the distribution of possible subgraphs. A _state_ consists of several nodes, with the initial state being the single most influential node and the end state satisfying the stopping criteria. The _action_ is adding a node, and the _reward_ is a function of the cross-entropy loss. **GEM**[49] uses the principles of Granger causality to generate ground-truth explanations which are used to train the explainer. It quantifies the causal contribution of each edge in the computational graph by the difference in the loss of the model with and without the edge. This distilled ground truth for the computation graph is used to train the generative autoencoder-based explainer. This explainer provides an explanation for any instance in the form of a subgraph of the computation graph. ### Self-interpretable In self-interpretable methods, the explainability procedure is intrinsic to the model. Such methods derive explainability by incorporating interpretability constraints. These methods use either information constraints or cardinality (structural) constraints to derive an informative subgraph, which is used for both the prediction and the explanation. Based on the design of the explainability, we further classify the self-interpretable methods into two types based on the imposed constraints (Fig. 4). #### 3.2.1 Methods with information constraints One of the major challenges in constructing explanations via subgraphs is that the critical subgraphs may have different sizes and can be irregular. Thus, constraining the size of the explanation may not be appropriate for the underlying prediction task. To address this challenge, the methods based on information constraints use the principle of the information bottleneck (IB) [94] to impose constraints on the information instead of the size.
For a graph \(G\), subgraph \(G_{s}\) and label \(Y\), the graph information bottleneck (GIB) objective is: \[\max_{G_{s}}I(Y,G_{s})\ \ \text{such that}\ \ I(G,G_{s})\leq\gamma\] where \(I\) denotes the mutual information. Using a Lagrangian multiplier \(\beta\), we can write the equation as: \[\min_{G_{s}}-I(Y,G_{s})+\beta\,I(G,G_{s})\] As seen from the equations, GIB objective-based methods have two parts in the objective function, and both are intractable. All methods approximate \(I(Y,G_{s})\) by calculating the cross-entropy loss. However, the methods vary in their approach to making \(I(G,G_{s})\) tractable, i.e., they have different approaches to compressing the graph and finding the informative subgraph \(G_{s}\). This subgraph is used for both prediction and interpretation. Table 6 provides the summary of all methods in this category. **GSAT**[69] uses a stochastic attention mechanism to calculate the variational upper bound for \(I(G,G_{s})\). First, it encodes graph \(G\) using a GNN to find the representation of each node. Then, for each node pair \((u,v)\), GSAT uses an MLP to calculate \(P_{uv}\). This is used to sample stochastic attention from the Bernoulli distribution \(Bern(P_{uv})\) to extract a subgraph \(G_{s}\). The variational upper bound is the KL divergence between \(Bern(P_{uv})\) and \(Bern(\alpha)\), where \(\alpha\) is a hyper-parameter. Building on similar concepts, **LRI**[70] uses both Bernoulli and Gaussian distributions as priors. LRI-Bernoulli provides the existence importance of points, and LRI-Gaussian provides the location importance of the points, i.e., how perturbing the location of a point in different directions affects the prediction. Another method, **GIB**[118] assumes that there is no reasonable prior distribution to solve \(I(G,G_{s})\) via KL divergence in the graph space. Hence, it uses the Donsker-Varadhan KL representation [16] in the latent space. It employs a bi-level optimization wherein the statistic network of the Donsker-Varadhan representation is used to estimate \(I(G,G_{s})\) in the inner loop. This estimate, together with classification and connectivity losses, is used to optimize the GIB objective in the outer loop. This bi-level training process is inefficient and unstable; hence, **VGIB**[116] uses a different compression technique. The information in the original graph is dampened by injecting noise into the node representations via a learned probability \(P_{i}\) for each node \(i\). The classification loss will be higher if the informative substructure \(G_{s}^{*}\) is injected with noise. Hence, \(G_{s}^{*}\) is less likely to be injected with noise compared to label-irrelevant substructures.
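The GIB objective above can be sketched as a concrete training loss in the GSAT style: cross-entropy stands in for \(-I(Y,G_{s})\), and a KL term to a \(Bern(\alpha)\) prior provides the variational upper bound on \(I(G,G_{s})\). The shapes and hyper-parameters below are assumptions of this sketch.

```python
# Hedged sketch of a GSAT-style GIB training loss.
import torch

def gib_loss(logits, label, edge_probs, alpha=0.5, beta=1.0):
    """logits: (C,) prediction on G_s; edge_probs: (m,) attention p_uv."""
    ce = torch.nn.functional.cross_entropy(
        logits.unsqueeze(0), torch.tensor([label]))       # -I(Y, G_s) surrogate
    p = edge_probs.clamp(1e-6, 1 - 1e-6)
    kl = (p * torch.log(p / alpha)                        # KL(Bern(p) || Bern(alpha))
          + (1 - p) * torch.log((1 - p) / (1 - alpha))).mean()
    return ce + beta * kl                                 # GIB-style total loss
```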
\begin{table} \begin{tabular}{c c c c} \hline \hline **Method** & **Process of calculating \(I(G,G_{s})\)** & **Injection of Randomness** & **Subgraph Extractor Architecture** \\ \hline GSAT [69] & Stochastic attention & Bernoulli as prior for KL divergence & GNN + MLP + Reparameterization \\ \hline LRI [70] & Learnable randomness injection & Bernoulli and Gaussian as priors & GNN + MLP + Reparameterization \\ \hline GIB [118] & Donsker-Varadhan KL representation [16] & No randomness injection & Statistic network: GNN + MLP \\ \hline VGIB [116] & Compression via noise injection & Gaussian noise on node features & GNN + MLP + Reparameterization \\ \hline \hline \end{tabular} \end{table} Table 6: Key highlights of the methods with _information constraints_ Figure 4: **Self-interpretable methods:** Every self-interpretable method has a _subgraph extraction_ and a _prediction_ module. The subgraph extraction module (the function \(g\)) uses constraints to find an informative subgraph \(G_{s}\) from the input graph \(G\). The prediction module uses \(G_{s}\) to predict the label \(Y\). This also shows the techniques used by each method to implement these individual modules. Self-interpretable methods are categorized based on constraints: (1) **Information constraint:** GIB [118], VGIB [116], GSAT [69], LRI [70]; (2) **Structural constraint:** DIR [107], ProtGNN [125], SEGNN [12], KER-GNN [21]. #### 3.2.2 Methods with structural constraints Imposing structural constraints on the input to derive the most informative subgraph has also been a common approach. The obtained informative subgraph is used for both making predictions and generating explanations. The key difference across the methods is the setup of the structural constraints. In Table 7, we provide the key highlights of these methods. One of the earlier methods, **DIR**[107] finds explanations in the form of invariant causal rationales by learning to split the input into causal (\(C\)) and non-causal (\(S\)) parts. The objective is to minimize the classification loss such that \(Y\) (the prediction) is independent of \(S\) given \(C\). To achieve this, it first creates multiple interventional distributions by conducting interventions on the training distribution. The part that is invariant across these distributions is considered the causal part. The implementation has three key stages. First, the architecture consists of a rationale generator (GNN) that splits the input graph into a causal part with the top \(k\) edges and a non-causal part. Second, a distribution intervener, i.e., a random replacement from a set, creates perturbed distributions to infer the invariant causal parts. Finally, two classifiers are used to generate a joint prediction on the causal and non-causal parts. **ProtGNN**[125] combines prototype learning [83] with GNNs. Prototype learning is a form of case-based reasoning, which makes predictions for new instances by comparing them with several learned exemplar cases, also called prototypes. ProtGNN computes the similarity scores between the graph embedding and multiple learned prototypes. Moreover, these prototypes are projected onto the nearest latent training subgraph during training using Monte Carlo tree search [88; 7]. The similarity scores are used for the classification task, and the subgraphs with high similarities can be used for explanation. In another work, for a given unlabeled node, **SEGNN**[12] finds \(k\) nearest labeled nodes that have structural and feature similarities and can be used for both generating predictions and explanations. It uses a contrastive loss on node representations for feature similarity and also on edge representations of local neighborhood nodes for structural similarity. Moreover, the classification loss uses negative sampling with the approximate \(k\) similar nodes. These \(k\) nearest nodes can be used to derive an explanation subgraph with threshold importance.
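For illustration, the ProtGNN-style prototype scoring described above can be sketched in a few lines; the ProtoPNet-style similarity kernel and all shapes are assumptions of this sketch.

```python
# Hedged sketch of prototype-based classification from a graph embedding.
import torch

def prototype_logits(h, prototypes, W):
    """h: (d,) graph embedding; prototypes: (K, d); W: (num_classes, K)."""
    d2 = ((prototypes - h) ** 2).sum(dim=1)        # squared distances to prototypes
    sims = torch.log((d2 + 1) / (d2 + 1e-4))       # similarity scores (assumed kernel)
    return W @ sims                                # class scores from similarities

logits = prototype_logits(torch.randn(32), torch.randn(5, 32), torch.randn(2, 5))
```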
The method **KER-GNN**[21] integrates graph kernels into the message-passing process of GNNs to increase the expressivity of GNNs beyond the 1-WL isomorphism test. In each layer, the node embeddings are updated by computing the similarity between the node's subgraph (the node with its ego-net) and trainable filters in the form of hidden graphs. The learned graph filters can provide important structural information about the data. Moreover, the output node attributes can be used to extract important substructures. \begin{table} \begin{tabular}{c c c c c} \hline \hline **Method** & **Subgraph Extraction** & **Explanation Form** & **Prediction/Classification Module** & **Task** \\ \hline DIR [107] & Separating patterns that are invariant across interventional distributions & Invariant rationale & Separate MLPs for spurious and invariant parts & GC \\ \hline ProtGNN [125] & Computes similarity between graph embedding and several learned diverse prototypes & Prototypes with high similarity & MLP with similarity scores as input & GC \\ \hline SEGNN [12] & Finds \(k\) nodes that have similar structure and node features via contrastive loss & \(k\) similar nodes & Classification via negative sampling of nodes & NC \\ \hline KER-GNN [21] & Kernel filters integrated in message passing & Learned kernels and output node attributes & MLP on output node attributes & NC \\ \hline \hline \end{tabular} \end{table} Table 7: Key highlights of explainability methods with _structural constraints_. Note that NC and GC denote node and graph classification respectively. ## 4 Counterfactual Explanation Counterfactual methods provide an explanation by identifying the minimal alteration in the input graph that results in a change in the model's prediction. Recently, there have been several attempts to explain graph neural networks (GNNs) via counterfactual reasoning. We classify these explainer methods that find counterfactuals into three major categories based on the type of method: **(1) Perturbation-based**, **(2) Neural framework-based**, and **(3) Search-based**. We discuss the works in the individual categories below. ### Perturbation-based methods An intuitive way to generate counterfactuals for both the graph classification and the node classification task is to _alter the edges_, i.e., add or delete edges in the graph such that the alteration changes the prediction of the underlying GNN. This alteration can be achieved by perturbing either the adjacency matrix or the computational graph of a node. The perturbation-based methods are summarized in Table 8. One of the initial efforts, **CF-GNNExplainer**[55] aims to perturb the computational graph by using a binary mask matrix. It uses a binary matrix (all values are 0 or 1) \(P\) and modifies the computational graph matrix as \(\tilde{A}_{v}=P\odot A_{v}\), where \(A_{v}\) is the original computational graph matrix and \(\tilde{A}_{v}\) is the computational graph matrix after the perturbation. The matrix \(P\) is computed by minimizing a combination of two loss functions, \(L_{pred}\) and \(L_{dist}\).
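A minimal sketch of this mask-based counterfactual search follows: a relaxed perturbation matrix \(P\) is optimized to flip the prediction while deleting as few edges as possible, combining the two losses with a weight \(\beta\) as spelled out next. The concrete loss terms and the model interface are assumptions of this sketch.

```python
# Hedged sketch of CF-GNNExplainer-style counterfactual mask optimization.
import torch

def counterfactual_mask(gnn, A_v, X, orig_class, beta=0.5, steps=300):
    """gnn(A, X) -> (C,) logits; A_v: (n, n) computational graph matrix."""
    m = torch.zeros_like(A_v, requires_grad=True)         # mask logits
    opt = torch.optim.Adam([m], lr=0.1)
    for _ in range(steps):
        P = torch.sigmoid(m)                              # relaxed binary mask
        logits = gnn(P * A_v, X)                          # prediction on perturbed graph
        l_pred = logits[orig_class]                       # push the original class down
        l_dist = (1 - P).sum()                            # penalize many edge deletions
        loss = l_pred + beta * l_dist                     # L_pred + beta * L_dist
        opt.zero_grad(); loss.backward(); opt.step()
    return (torch.sigmoid(m) > 0.5).float()               # binarized mask P
```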
They are combined using a hyper-parameter \(\beta\) in the final loss as \(L=L_{pred}+\beta L_{dist}\). The loss function \(L_{pred}\) quantifies the accuracy of the produced counterfactual, and \(L_{dist}\) captures the distance (or similarity) between the counterfactual graph and the original graph. In a follow-up work, the method **CF\({}^{2}\)**[93] extends the method in CF-GNNExplainer [55] by including a contrastive loss that jointly optimizes the quality of both the factual explanation and the counterfactual one. For an input graph \(G\), it aims to find an optimal subgraph \(G_{s}\) where \(G_{s}\) is a good factual explanation and \(G\backslash G_{s}\) is a good counterfactual. These objectives are formulated as a single optimization problem with the corresponding loss \(L_{overall}=\alpha L_{factual}+(1-\alpha)L_{counterfactual}\), where \(\alpha\) is a hyperparameter. Another method, **GREASE**[9] follows the standard technique of using a perturbation matrix to generate a counterfactual, but with two key modifications, mainly to accommodate GNNs used for recommendation systems instead of classification tasks. In the recommendation task, GNNs rank the items (nodes) by assigning them a score instead of classifying them. GREASE uses a loss function based on the scores given by the GNN before and after the perturbation. This score helps to rank the items or nodes. The second modification is the perturbation matrix, which acts as the mask and is used to perturb the computational graph (the \(l\)-hop neighborhood of the node) instead of perturbing the entire graph. Here \(l\) denotes the number of layers in the GNN. Similar to CF\({}^{2}\)[93], GREASE also optimizes counterfactual and factual explanation losses, but not jointly. In summary, all these techniques share similarities in computing the counterfactual similarity and constructing the search space. Similarity is measured by the number of edges removed from input instances, and the search space is the set of all subgraphs obtained by edge deletions in the original graph. Because of the unrestricted nature of the search space, these methods might not be ideal for graphs such as molecules, where the validity of the subgraphs has valency restrictions. On the other hand, the mentioned methods differ mainly in the loss function formulations and the perturbation operations for the downstream tasks. For instance, as CF-GNNExplainer [55] and GREASE [9] perform node classification and ranking, they can use perturbations on the computation graph. However, CF\({}^{2}\)[93] considers both graph and node classification tasks; hence, it uses perturbations on the entire graph, i.e., the adjacency matrix. ### Neural framework-based methods The approaches in this section use neural architectures to generate counterfactual graphs, as opposed to the perturbation-based methods where the adjacency matrix of the input graph is minimally perturbed to generate counterfactuals. Table 9 summarizes these methods. The objective of **RCExplainer**[4] is to identify a resilient subset of edges that, when removed, alter the prediction of the remaining graph. This is accomplished by modeling the implicit decision regions using graph embeddings. Even though the counterfactual graph generated by a neural architecture is used in conjunction with the adjacency matrix of the input graph, the counterfactual itself is not generated through perturbations on the adjacency matrix.
RCExplainer addresses the issue of fragility: an interpretation is fragile (or non-robust) if systematic perturbations in the input graph can lead to dramatically different interpretations without changing the label. Standard explainers aim to generate good counterfactuals by choosing the closest counterfactual to the input instance, which might induce over-fitting. RCExplainer reduces this over-fitting by first clustering input graphs using polytopes, and finding good counterfactuals close to the cluster (polytope) instead of individual instances. \begin{table} \begin{tabular}{c c c c c} \hline \hline **Method** & **Explanation Type** & **Downstream Task** & **Perturbation Target** & **Datasets Evaluated** \\ \hline CF-GNNExplainer [55] & Instance level & Node classification & Computation graph & Tree-Cycles [115], Tree-Grids [115] \\ \hline CF\({}^{2}\) [93] & Instance level & Graph and node classification & Original graph & BA-Shapes [115], Tree-Cycles [115], Mutag [14], NCI [99], CiteSeer [24] \\ \hline GREASE [9] & Instance level & Node ranking & Computation graph & LastFM, Yelp \\ \hline \hline \end{tabular} \end{table} Table 8: Key highlights of _perturbation-based_ methods for counterfactuals Another method, **CLEAR**[57] generates counterfactual graphs by leveraging a graph variational autoencoder. Two major issues often seen in other explainer methods, namely generalization and causality, are addressed in this paper. Both methods use a generative neural model to find counterfactuals, but the generative model differs across the methods. While **RCExplainer**[4] uses a neural network that takes pairwise node embeddings and predicts the existence of an edge between them, **CLEAR**[57] uses a variational autoencoder to generate a complete graph. This shows that while the former method cannot create nodes that are not present in the original graph, the latter can. In terms of the objective, the primary focus of **RCExplainer**[4] is the robustness of the generated counterfactual, while **CLEAR**[57] aims to generate counterfactuals that explain the underlying causality. ### Search-based methods These methods usually depend on search techniques over the counterfactual space for relevant tasks or applications (see the highlights in Table 10). For example, given an inactive molecule in a chemical reaction, the task is to find a similar but active molecule. Here, generative methods or perturbation methods might not be effective, and the perturbations might not even result in a valid molecule. In such cases, a good search technique through the space of counterfactuals could be more useful. An inherent challenge is that the search space of counterfactuals might be exponential in size. Hence, building efficient search algorithms is required. The major application is finding counterfactual examples for molecules in related tasks. The method **MMACE**[102] finds counterfactuals for molecules. In the corresponding graph classification problem, it aims to classify a molecule based on a specific property. Examples include whether a molecule will permeate the blood-brain barrier, and a molecule's solubility. The search space can be generated by a method called _Superfast Traversal, Optimization, Novelty, Exploration and Discovery (STONED)_[72].
MMACE uses this method to generate the close neighbourhood and searches it with a BFS-style algorithm to find an optimal set of counterfactuals. Similarly, **MEG**[73] also aims to find a counterfactual, and its search space consists of molecules. However, instead of searching the space with traditional graph search algorithms, MEG uses a reinforcement learning-based approach to navigate the search space more efficiently. The reward for finding a counterfactual is defined as the inverse of the probability that the candidate molecule found by the agent is not a counterfactual. This method is applied to a classification problem of predicting the toxicity of a molecule as well as to a regression problem of predicting the solubility of a molecule. Another approach, **GCFExplainer**[35], uses a random walk-based method to search the counterfactual space. The objective is not to find an individual counterfactual for each input sample but to find a small set of counterfactuals that explain all or a subset of the input samples; hence, this is a global method (see Sec. 5.2). Here the counterfactual search space is obtained by applying graph edit operations on the training data. The method uses a random walk called **Vertex Reinforced Random Walk (VRRW)**[74], a modified version of a Markov chain where the state transition probabilities depend on the number of previous visits to that state. Both **MMACE**[102] and **MEG**[73] are developed for GNNs that predict molecular properties, while the objective of **GCFExplainer**[35] is to generate global explanations. However, the search algorithms and the generation mechanisms of the counterfactual space are quite different. For instance, MMACE employs a graph search algorithm to locate the nearest counterfactual instance, whereas MEG utilizes reinforcement learning and GCFExplainer employs random walks to achieve the same.

\begin{table} \begin{tabular}{c c c c c} \hline \hline **Method** & **Explanation Type** & **Downstream Task** & **Counterfactual Generator** & **Datasets Evaluated** \\ \hline RCExplainer [4] & Instance level & Graph classification, Node classification & Edge prediction with Neural Network & Mutag [14], BA-2motifs [56], NCI [99], Tree-Cycles [115], Tree-Grids [115], BA-Shapes [115], BA-Community [115] \\ \hline CLEAR [57] & Instance level & Graph classification, Node classification & Graph generation with Variational Autoencoder & Community [18], ogbg-molhiv, IMDB-M \\ \hline \hline \end{tabular} \end{table}
Table 9: Key highlights of _neural framework-based_ methods for counterfactuals

## 5 Others

In this section, we describe the explainer methods in three special categories: **explainers for temporal GNNs**, **global explainers**, and **causality-based explainers**.

### Explainers for Temporal GNNs

In temporal or dynamic graphs, the graph topology and node attributes evolve over time. For instance, in social networks the relationships can be dynamic, and in citation networks co-authorships change over time. There has been effort towards explaining the GNN models that are specifically designed for such structures. One of the earlier explainer methods on dynamic graphs is GCN-SE [20]. GCN-SE learns attention weights in the form of a linear combination of the representations of the nodes over multiple snapshots (i.e., over time). To quantify the explanatory power of the proposed method, the importance of different snapshots is evaluated via these learned weights.
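A minimal sketch of this snapshot-weighting idea, assuming per-snapshot node embeddings `H` have already been computed by a GCN; the class, shapes, and names are illustrative assumptions, and the actual GCN-SE mechanism is more elaborate:

```python
import torch
import torch.nn as nn

class SnapshotAttention(nn.Module):
    """Linear combination of node representations over snapshots; the learned
    softmax weights act as snapshot-importance scores for explanation."""
    def __init__(self, num_snapshots):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_snapshots))

    def forward(self, H):                        # H: (snapshots, nodes, dim)
        w = torch.softmax(self.logits, dim=0)    # one weight per snapshot
        return torch.einsum('t,tnd->nd', w, H)   # weighted sum over snapshots
```

Inspecting `softmax(self.logits)` after training indicates which snapshots the model relied on most.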
Another method [31] designs a two-step process. It uses a static explainer, such as PGM-Explainer [97], to explain the underlying temporal GNN (TGNN) model (such as TGCN [126]) for each time step separately, and then it aims to discover the dominant explanations from the explanations identified by the static one. DGExplainer [110] also generates explanations for dynamic GNNs by computing relevance scores that capture the contributions of each component for a dynamic graph. More specifically, it redistributes the output activation score to the relevance of the neurons of the previous layer in the model. This process iterates until the relevance scores of the input neurons are obtained. Recently, T-GNNExplainer [109] has been proposed for temporal graph explanation, where a temporal graph is constituted by a sequence of temporal events. T-GNNExplainer solves the problem of finding a small set of previous events that are responsible for the model's prediction of the target event. In [51], the approach involves a smooth parameterization of the GNN-predicted distributions using axiomatic attribution. These distributions are assumed to lie on a low-dimensional manifold. The approach models the distributional evolution as smooth curves on the manifold and reparameterizes families of curves by designing a convex optimization problem. The aim is to find a unique curve that approximates the distributional evolution and is useful for human interpretation. The following works also design explainers for temporal GNNs, but with specific objectives or applications. [98] studies the limits of perturbation-based explanation methods. The approach constructs specific instances of TGNNs and evaluates how reliably node perturbation, edge perturbation, or both can identify the graph components carrying out the temporal aggregation in temporal GNNs. In [114], a novel interpretable model on temporal heterogeneous graphs has been proposed. The method constructs temporal heterogeneous graphs that represent the research interests of the target authors. After the detection task, a deep neural network is used to generate interpretations of the predicted results. This method has been applied to research interest shift detection of researchers. Another related work [29] explains GraphRC, a special type of GNN that is popular because of its training efficiency. The proposed method explores the specific role played by each reservoir node (neuron) of GraphRC by using an attention mechanism on the distinct temporal patterns in the reservoir nodes.

### Global Explainers

The majority of explainers provide explanations for specific instances and can be seen as _local explainers_. However, global explainers aim to explain the overall behavior of the model by finding common input patterns that explain certain predictions [119]. Global explainers provide a high-level and generic explanation compared to local explainers. However, local explainers can be more accurate than global explainers [119], especially for individual instances.
\begin{table} \begin{tabular}{c c c c c} \hline \hline **Method** & **Explanation Type** & **Downstream Task** & **Counterfactual Similarity Metric** & **Datasets Evaluated** \\ \hline MMACE [102] & Instance level & Graph classification, Node classification & Tanimoto similarity & Blood brain barrier dataset [63], Solubility data [89] \\ \hline GCFExplainer [35] & Global & Graph classification & Graph edit distance & Mutag [14], NCI [99] \\ \hline MEG [73] & Instance level & Graph classification, Node classification & Cosine similarity, Tanimoto similarity & Tox21 [40], ESOL [108] \\ \hline \hline \end{tabular} \end{table}
Table 10: Key highlights of _search-based_ methods for counterfactuals

We categorize global explainers into the following three types. **(1) Generation-based:** These post-hoc methods use either a generator or generative modeling to find explanations. For instance, **XGNN**[119] uses a reinforcement learning (RL) based graph generator optimized using policy gradients. In contrast, **GNNInterpreter**[101] is a generative global explainer that maximizes the likelihood of the explanation graph being predicted as the target class by the model (the details are in Sec. 3.1.5). **(2) Concept-based:** These methods provide concept-based explanations. Concepts are small, higher-level units of information that can be interpreted by humans [25]. The methods differ in their approaches to finding concepts. **GCExplainer**[60] adapts an image explanation framework known as automated concept-based explanation (ACE) [25] to find global explanations for graphs. It finds concepts by clustering the embeddings from the last layer of the GNN. A concept and its importance are represented by a cluster and the number of nodes in it, respectively. Another method, **GCneuron**[113], which is inspired by Compositional Explanations of Neurons [71], finds global explanations for GNNs by finding compositional concepts aligned with neurons. A base concept is a function \(C\) on a graph \(G\) that produces a binary mask over all the input nodes \(V\). A compositional concept is a logical combination of base concepts. This method uses beam search to find the compositional concept that minimizes the divergence between the concept and the neuron activation. Lastly, **GLGExplainer**[2] uses local explanations from PGExplainer [56] and projects them to a set of learned prototypes or concepts (similar to ProtGNN [125]) to derive a concept vector. A concept vector is a vector of distances between a graph explanation and each prototype. This concept vector is then used to train an Entropy-based Logic Explainable Network (E-LEN) [10] to match the prediction of the class. The logic formula from the entropy layer for each class acts as the explanation. **(3) Counterfactual:** The **global counterfactual explainer**[35] finds a candidate set of counterfactuals using vertex reinforced random walks. It then uses a greedy strategy to select the top \(k\) counterfactuals from the candidate set as global explanations. We explain it in more detail in Sec. 4.3.

### Causality-based Explainers

Most GNN classifiers learn all statistical correlations between the label and the input features. As a result, they may not distinguish between causal and non-causal features and may make predictions using shortcut features [90]. Shortcut features serve as confounders between the causal features and the prediction.
Methods in this category attempt to reduce the confounding effect so that the model exploits causal substructures for prediction; these substructures also act as explanations. They can be categorized as follows. **Self-interpretable methods.** Methods in this category have the explainer architecture built into the model. One of the methods, **DIR**[107], creates multiple interventional distributions by conducting interventions on the training distribution. The invariant part across these distributions is considered the causal part (see details in Sec. 3.2.2). **CAL**[90] uses edge and node attention to estimate the causal and shortcut features of the graph. Two classifiers are used to make predictions on causal and shortcut features, respectively. The loss on causal features is used as the classification loss. Moreover, KL divergence is used to push the prediction based on shortcut features to have a uniform distribution across classes. Finally, CAL creates an intervention graph in the representation space via random additions. The loss on this intervened graph classification is considered the causal loss. These three loss terms are used to reduce the confounding effect and find the causal substructure that acts as the explanation. **DisC**[19] uses a disentangled GNN framework to separate causal and shortcut substructures. It first learns an edge mask generator that divides the input into causal and shortcut substructures. Two separate GNNs are trained to produce disentangled representations of these substructures. Finally, these representations are used to generate unbiased counterfactual samples by randomly permuting the shortcut representation with the causal representation. **Generation-based methods.** These methods use generative modeling to find explanations and are post-hoc. **GEM**[49] trains a generative auto-encoder by finding the causal contribution of each edge in the computation graph (details are in Sec. 3.1.5). While GEM focuses on the graph space, **OrphicX**[50] identifies the causal factors in the embedding space. It trains a variational graph autoencoder (VGAE) that has an encoder and a generator. The encoder outputs latent representations of the causal and shortcut substructures of the input. The generator uses both of these representations to generate the original graph, and the causal representation to produce a causal mask on the original graph. The information flow between the latent representation of the causal substructure and the prediction is maximized to train the explainer. The causal substructure also acts as an explanation.

## 6 Applications

We describe the explainer methods that are relevant for specific applications in different domains such as social networks, biology, and computer security.

Computer Security. This work [30] focuses on designing an explanation framework for cybersecurity applications using GNN models by identifying the important nodes, edges, and attributes that contribute to the prediction. The applications include code vulnerability detection and smart contract vulnerability detection. Another work [33] proposes CFGExplainer for GNN-oriented malware classification; it identifies a subgraph of the malware control flow graph that is most important for the classification. Some other works focus on the problem of botnet detection. The first method, BD-GNNExplainer [128], extracts the explainer subgraph by reducing the loss between the classification results generated by the input subgraph and the entire input graph.
The XG-BoT detector proposed in [53] detects malicious botnet nodes in botnet communication graphs. The explainer in XG-BoT is based on GNNExplainer and saliency maps.

Social Networks. A recent work [80] studies the problem of detecting fake news spreaders in social networks. The proposed method, SCARLET, is a user-centric model that uses a GNN with an attention mechanism. The attention scores help in computing the importance of the neighbors. The findings include that a person's decision to spread false information depends on their perception (or trust dynamics) of their neighbors' credibility. On the other hand, **GCAN**[54] uses sequence models. The aim is to find a fake tweet based on the user profile and the sequence of its retweets. The sequence models and GNNs help to learn representations of retweet propagation and of user interactions, respectively. A co-attention mechanism is further used to learn the correlation between the source tweet and retweet propagation and to make the prediction. In [58], a GNN model has been proposed, along with explanations of its predictions, for the problem of drug abuse in social networks.

Computational Biology. One of the long-standing problems in neuroscience is the understanding of brain networks, especially understanding the Regions of Interest (ROIs) and the connectivity between them. These regions and their connectivity can be modelled as a graph. A recent work on explainability, IBGNN [11], explores explainable GNN methods to solve the task of identifying ROIs and their connectivity that are indicative of brain disorders. It uses a perturbation matrix to create an edge mask and extracts important edges and nodes. A few more works also focus on the same task of identifying ROIs but use different explanation techniques. In [64], the method uses a perturbation matrix with feature masks and optimizes mutual information to find the explanations. The work in [127] uses Grad-CAM [77] to find important ROIs. The method in [1] uses a search-based approach to extract counterfactuals, which can serve as good candidates for important ROIs. It uses graph edit operations to navigate from the input graph to a counterfactual, but optimizes this by using a lookup database to select the edges that are most effective in discriminating between different predicted classes. As another interesting application, the work in [76] explores the extraction of subgraphs in protein-protein interaction (PPI) networks, where the downstream task is to detect the relevance of a protein to cancer.

Chemistry. GNNs are used extensively to study molecular properties and often require explanations to better understand the model's predictions. A recent work [32] focuses on improving the self-interpretability of GCNs by imposing orthogonality of node features and sparsity of the GCN's weights using Gini regularization. The intuition behind the orthogonality of features is driven by the assumption that atoms in a molecule can be represented by a linear combination of an orthonormal basis of wavefunctions. Another method, APRILE [112], aims at finding the parts of a drug molecule responsible for side effects. It uses perturbation techniques to extract an explanation. In drug design, the method in [38] uses integrated gradients [92] to assign importance to atoms (nodes) and the atomic properties (node features) to understand the properties of a drug.
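As a concrete illustration of the last point, a minimal sketch of integrated gradients over node features, assuming a differentiable `model` that maps (node features `X`, adjacency `A`) to property logits, an all-zero baseline, and a Riemann-sum approximation of the path integral; all names are assumptions for illustration:

```python
import torch

def integrated_gradients(model, X, A, target, steps=64):
    """Approximate IG attributions for node features; summing the returned
    (n_nodes, n_feats) matrix over the feature axis gives per-atom scores."""
    baseline = torch.zeros_like(X)
    total = torch.zeros_like(X)
    for k in range(1, steps + 1):
        x = baseline + (k / steps) * (X - baseline)   # point on the path
        x.requires_grad_(True)
        score = model(x, A)[target]                   # scalar logit of interest
        grad, = torch.autograd.grad(score, x)
        total += grad
    return (X - baseline) * total / steps             # IG attribution matrix
```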
Pathology. In medical diagnosis, a challenging task is to understand the reason behind a particular diagnosis, whether it is made by a human or a machine learning system. To this end, explainer frameworks for the machine learning models become useful. In many cases, the diagnosis data can be represented by graphs. This work [106] builds a graph using words, entities, clauses, and sentences extracted from a patient's electronic medical record (EMR). The objective is to extract the entities most relevant for the diagnosis by training an edge mask, and this is achieved by minimizing the sum of the elements in the mask matrix. Another method [37] focuses on generating explanations for histology (micro-anatomy) images. It first converts the image into a graph of biological entities, where the nodes can be cells, tissues, or some other task-specific biological features. Afterwards, the standard gradient-based (Sec. 3.1.2) or perturbation-based (Sec. 3.1.4) explainer techniques are used to generate the explanations. Another work [117] in this field modifies the objective to optimize for both necessity and sufficiency (Sec. 8). The explanation is generated in such a way that the mutual information between the explanation subgraph and the prediction is maximized. Additionally, the mutual information between the remaining graph after removing the explanation subgraph and the prediction is minimized.

## 7 Datasets

A set of synthetic as well as real-world datasets has been used for evaluating the proposed explainers on several tasks, such as node classification and graph classification. Table 11 lists the datasets and the corresponding explanation types and tasks used in the literature.

### Synthetic datasets

Annotating ground truth explanations in graph data is laborious and requires domain expertise. To overcome this challenge, several explainers have been evaluated using synthetic datasets that are created using certain motifs as ground truth values. We highlight _six_ popular synthetic datasets: **BA-Shapes**[115]: This graph is formed by randomly connecting a base graph to a set of motifs. The base graph is a Barabasi-Albert (BA) graph with \(300\) nodes. It includes \(80\) house-structured motifs with five nodes each, formed by a top, a middle, and a bottom node type. **BA-Community**[115]: The BA-Community graph is a combination of two BA-Shapes graphs. The features of each node are assigned based on two Gaussian distributions. Also, nodes are assigned a class out of eight classes based on the community they belong to. **Tree Cycle**[115]: This consists of an 8-level balanced binary tree as a base graph. To this base graph, 80 cycle motifs with six nodes each are randomly connected. It has just two classes: one for the nodes in the base graph and another for the nodes in the defined motif. **Tree Grids**[115]: This graph uses the same base graph but a different motif set compared to the Tree Cycle graph. It uses 3-by-3 grid motifs instead of the cycle motifs. **BA-2Motifs**[56]: This is used for graph classification and has two classes. The base graph is a BA graph for both classes. However, one class has a house-structured motif and the other has a 5-node cycle motif. **Spurious Motifs**[107]: With 18000 graphs in the dataset, each graph is a combination of one base \(S\) (Tree, Ladder, or Wheel) and one motif \(C\) (Cycle, House, Crane). The ground-truth label \(Y\) is determined by the motif. A spurious relation between \(S\) and \(Y\) is manually induced.
This spurious correlation can be varied based on a parameter that ranges from \(0\) to \(1\).

\begin{table} \begin{tabular}{c c c c c} \hline \hline **Dataset** & **References** & **Nature** & **Explanation Type** & **Task** \\ \hline BA-Shapes & [115, 97, 124, 56, 49, 55, 4] & Synthetic & Compared to Motif & Node classification \\ BA-Community & [85, 115, 56, 4, 57] & Synthetic & Compared to Motif & Node classification \\ Tree Cycle & [49, 56, 124, 4, 55] & Synthetic & Compared to Motif & Node classification \\ Tree Grids & [115, 56, 122, 49, 4, 55] & Synthetic & Compared to Motif & Node classification \\ BA-2Motifs & [56, 122, 69, 4] & Synthetic & Compared to Motif & Graph classification \\ Spurious Motifs & [69, 107] & Synthetic & Compared to Motif & Graph classification \\ Mutagenicity & [115, 56, 49, 125, 122, 4, 119] & Real-World & Compared to Chemical property & Graph classification \\ NCI1 & [49, 4] & Real-World & Compared to Chemical property & Graph classification \\ BBBP & [125, 93, 108] & Real-World & Compared to Chemical property & Graph classification \\ Tox21 & [73, 108] & Real-World & Compared to Chemical property & Graph classification \\ MNIST-75sp & [97, 69, 43] & Real-World & Visual & Graph classification \\ Sentiment Graphs & [122, 69, 125, 121] & Real-World & Visual & Graph classification \\ \hline \hline \end{tabular} \end{table}
Table 11: Datasets with their nature, explanation types, and tasks.

### Real-world datasets

Due to the known chemical properties of the molecules, molecular graph datasets are a good choice for evaluating the generated explanation structure. We highlight some widely used molecular datasets for evaluating explainers in the _graph classification task_. **Mutag**[14]: This consists of \(4337\) molecules (graphs) with two classes based on the mutagenic effect. Using domain knowledge, specific chemical groups are assigned as ground truth explanations. **NCI1**[99]: This is a graph classification dataset with 4110 instances. Each graph is a chemical compound, where a node represents an atom and an edge represents a bond between atoms. Each molecule is screened for activity against non-small cell lung cancer or ovarian cancer cell lines. **BBBP**[108]: Similar to Mutag, blood-brain barrier penetration (BBBP) is also a molecule classification dataset with two classes and 2039 compounds. Classification is based on their permeability properties. **Tox21**[108]: This dataset consists of 7831 molecules in 12 different categories of chemical compounds. The categorization is based on the chemical structures and properties of those compounds. Visual explanation can be an important component of comparing explainers. Hence, researchers also use datasets that do not have ground truth explanations but can be visually evaluated through generated examples. Below are some of the datasets used for visual analysis: **MNIST-75sp**[43]: An MNIST image is converted to a super-pixel graph with at most 75 nodes, where each node denotes a "super pixel". Pixel intensity and the coordinates of the super-pixel centers of mass are used as the node attributes. Edges are formed based on the spatial distance between the super-pixel centers. Each graph is assigned one of the 10 MNIST classes, i.e., numerical digits. **Sentiment Graphs**[121]: Graph SST2, Graph SST5, and Graph Twitter are based on the text sentiment analysis data of the SST2, SST5, and Twitter datasets. A graph is constructed by considering tokens as nodes, relations as edges, and sentence sentiment as its label.
The BERT architecture is used to obtain 768-dimensional word embeddings for the dataset. The generated explanation graph can be evaluated for its textual meaning.

## 8 Evaluation

The evaluation of the explainer methods is based on the explainer's ability to generate human-intelligible explanations of the model prediction. As this might be subjective depending on the application at hand, the evaluation measures consider both quantitative and qualitative metrics.

### Quantitative Evaluation

Quantitative evaluation metrics help in having a standardized evaluation that is free of human bias. For this, explainability is posed as a binary classification problem. The explainers assign a score to the _node features_, _edges_, and _motifs_ that are the most responsible for the prediction according to the explainer. We are also provided with ground-truth binary labels for the features and structures, denoting whether they are responsible for the prediction or not. The explainer is then evaluated by comparing these scores to the ground-truth explanation labels using different methods: **Accuracy**[55, 93]: To find the accuracy, the top-\(k\) edges produced by the explainer are set to be positive, and the rest are negative. These top-\(k\) edges and the ground-truth labels are compared to compute the accuracy. **Area Under Curve (AUC)**[124, 102, 4]: We compare the top-\(k\) raw scores directly against the ground-truth labels by computing the area under the ROC curve. **Fidelity**[116, 55, 4]: This is used for explainers that generate a subgraph as the explanation. It compares the performance of the base GNN model on the input graph and on the explainer subgraph. Let \(N\) be the number of samples, \(y_{i}\) the true label of sample \(i\), \(\hat{y}_{i}\) the predicted label of sample \(i\), \(\hat{y}_{i}^{k}\) the predicted label after choosing the subgraph formed by the top-\(k\)% nodes, and \(\mathbbm{1}[\cdot]\) the indicator function. Fidelity measures how close the predictions of the explanation subgraph are to those of the input graph. For factual explainers, the lower this value, the better the explanation. It is formally defined as follows: \[\text{Fidelity}=\frac{1}{N}\sum_{i=1}^{N}\left(\mathbbm{1}[y_{i}=\hat{y}_{i}]-\mathbbm{1}[y_{i}=\hat{y}_{i}^{k}]\right)\] **Sparsity**[116, 55]**:** It measures the conciseness of the explanations (e.g., the subgraphs) that are responsible for the final prediction. Let \(|p_{i}|\) and \(|G_{i}|\) denote the number of edges in the explanation and in the original input graph, respectively. The sparsity is then defined as follows: \[\text{Sparsity}=1-\frac{1}{N}\sum_{i=1}^{N}\frac{|p_{i}|}{|G_{i}|}\] **Robustness**[4]**:** It quantifies how resistant an explainer is to perturbations of the input graph. Here, perturbations are random additions or deletions of edges such that they do not change the prediction of the underlying GNN. The robustness is the percentage of graphs for which these perturbations do not change the explanation. **Probability of Sufficiency (PS)**[93, 9]**:** It is the percentage of graphs for which the explanation graph is sufficient to generate the same prediction as the original input graph. **Probability of Necessity (PN)**[93, 9]**:** It is the percentage of graphs for which removing the explanation graph from the original input graph alters the prediction made by the GNN. **Generalization**[85]**:** This measures the generalization capability of the explainer method in an inductive setting.
To measure this, the training dataset size is usually varied and the AUC scores are computed for these tests. Generalization plays an important role in explainability, as models that generalize well are generally sparse in terms of inputs. This metric is highly relevant for the self-interpretable models.

### Qualitative Evaluation

Explanations can also be evaluated qualitatively using expert domain knowledge. This mode of evaluation is crucial, especially while working with real-world datasets that do not have ground truth labels. **Domain Knowledge**[115]**:** Generated explanations can be evaluated for their meaning in the application domain. For example, GNNExplainer [115] correctly identifies the carbon ring as well as the chemical groups NH2 and NO2, which are known to be mutagenic. **Manual Scoring**[97]**:** Another method of evaluating the explanations is by asking users (e.g., domain experts) to score, say on a scale of 1-10, the explanations generated by various explainers and to compare them. One can also use RMSE scores to quantitatively compare these explainers based on the scores.

## 9 Future Directions

**Combinatorial problems:** Most of the existing explanation frameworks are for prediction tasks such as node and graph classification. However, graphs are prevalent in various domains, such as social networks [39, 68], healthcare [103], and infrastructure development [66, 65]. Solving combinatorial optimization problems on graphs is a common requirement in these domains. Several architectures based on Graph Neural Networks (GNNs) [41, 61, 78] have been proposed to tackle these problems, which are usually computationally hard. However, the explainability of these methods for such combinatorial problems is largely missing. One potential direction is to build frameworks that can explain the behavior of the solution set in such problems. **Global methods:** Most explainers primarily adopt a local perspective by generating examples specific to individual input graphs. From global explanations, we can extract higher-level insights that complement the understanding gained from local explanations (see details on global methods in Sec. 5.2). Moreover, global explanations can be easily understood by humans even for large datasets. Real-world graph datasets often consist of millions of nodes. When generating explanations specific to each instance, the number of explanations increases proportionally with the size of the dataset. As a result, the sheer volume of explanations becomes overwhelming for human cognitive capabilities to process effectively. Global approaches can immensely help in these scenarios. **Visualization and HCI tools:** Graph data, unlike textual and visual data, cannot be directly perceived by human senses. Thus, qualitative evaluation of explanations becomes a non-trivial problem and often requires expert guidance [115, 97]. This makes crowdsourcing evaluations difficult and not scalable. Other ways to qualitatively assess graph structures for the explanation of a certain prediction can be explored. Additionally, since explainability is human-centric, it is crucial that explainers are informed by human cognition and behavior, particularly those of domain experts [48], when GNNs are used in making important decisions [67]. HCI research can help in designing the interface for the experts to assess the generated explanation graphs [55]. **Temporal GNNs:** Temporal graph models are designed to predict the graph structure and labels in the future by exploiting how the graph has evolved in the past.
This increases the complexity of explanations significantly as they now involve combinations of graph structures at different time intervals. Existing methods [20, 110, 31, 44] mostly focus on discrete-time models where graphs are provided at different points in time. Future works can explore ways to explain the prediction of a continuous-time dynamic graph model, where interactions happen in real time [109]. One direction could be to optimize over a parameterized temporal point process [95]. ## 10 Conclusions In this survey, we have provided a comprehensive overview of explanation methods for Graph Neural Networks (GNNs). Besides outlining some background on GNNs and explainability, we have presented a detailed taxonomy of the papers from the literature. By categorizing and discussing these methods, we have highlighted their strengths, limitations, and applications in understanding GNN predictions. Moreover, we have highlighted some widely used datasets and evaluation metrics in assessing the explainability of GNNs. As GNNs continue to play a significant role in various fields, such as healthcare, recommendation systems, and natural language processing, the need for interpretable and transparent models becomes increasingly important. Overall, we believe this survey serves as a valuable resource for researchers and practitioners interested in the explainability of GNNs and provides a foundation for further advancements in interpretable graph representation learning.
2301.07424
Autonomous Slalom Maneuver Based on Expert Drivers' Behavior Using Convolutional Neural Network
Lane changing and obstacle avoidance are one of the most important tasks in automated cars. To date, many algorithms have been suggested that are generally based on path trajectory or reinforcement learning approaches. Although these methods have been efficient, they are not able to accurately imitate a smooth path traveled by an expert driver. In this paper, a method is presented to mimic drivers' behavior using a convolutional neural network (CNN). First, seven features are extracted from a dataset gathered from four expert drivers in a driving simulator. Then, these features are converted from 1D arrays to 2D arrays and injected into a CNN. The CNN model computes the desired steering wheel angle and sends it to an adaptive PD controller. Finally, the control unit applies proper torque to the steering wheel. Results show that the CNN model can mimic the drivers' behavior with an R2-squared of 0.83. Also, the performance of the presented method was evaluated in the driving simulator for 17 trials, which avoided all traffic cones successfully. In some trials, the presented method performed a smoother maneuver compared to the expert drivers.
Shafagh A. Pashaki, Ali Nahvi, Ahmad Ahmadi, Sajad Tavakoli, Shahin Naeemi, Salar H. Shamchi
2023-01-18T10:47:43Z
http://arxiv.org/abs/2301.07424v1
# Autonomous Slalom Maneuver Based on Expert Drivers' Behavior Using Convolutional Neural Network

###### Abstract

**Lane changing and obstacle avoidance are among the most important tasks in automated cars. To date, many algorithms have been suggested that are generally based on path trajectory or reinforcement learning approaches. Although these methods have been efficient, they are not able to accurately imitate a smooth path traveled by an expert driver. In this paper, a method is presented to mimic drivers' behavior using a convolutional neural network (CNN). First, seven features are extracted from a dataset gathered from four expert drivers in a driving simulator. Then, these features are converted from 1D arrays to 2D arrays and injected into a CNN. The CNN model computes the desired steering wheel angle and sends it to an adaptive PD controller. Finally, the control unit applies proper torque to the steering wheel. Results show that the CNN model can mimic the drivers' behavior with an R\({}^{2}\)-squared of 0.83. Also, the performance of the presented method was evaluated in the driving simulator for 17 trials, which avoided all traffic cones successfully. In some trials, the presented method performed a smoother maneuver compared to the expert drivers.**

_Keywords--lane-changing, obstacle avoidance, convolutional neural networks, deep learning, adaptive PD._

## I Introduction

Autonomous driving will be an integral part of future automated vehicles. These systems can potentially facilitate the use of cars and reduce accidents. Many car crashes are directly related to human error, fatigue, etc. More than 94% of accidents occur due to driving faults [1], and 539,000 accidents occur due to wrong lane changes in the US annually [2]. So, it is important to equip cars with new smart systems to reduce the number of accidents in lane changes. To date, various partially and fully autonomous driving systems have been developed, including parallel parking systems, adaptive cruise control, autonomous emergency braking, and automatic lane-changing systems [3]. Among them, the automatic lane-changing system is the most complex; hence, it is challenging to develop a proper and safe lane-changing system [4]. Therefore, a lane-changing system must be developed carefully, considering the safety of the passengers in addition to avoiding crashes: an automated car equipped with a lane-changing system must take a safe route smoothly (neither aggressively nor clumsily) in order to ensure the passengers' safety and comfort [5]. To overcome the mentioned complexity of the lane-changing system, researchers have developed different methods. Overall, these methods can be divided into two main categories. The first category is path trajectory-based approaches, where a geometric path is designed as the reference path and a controller steers the car along it. The reference path is usually a third-degree or higher-degree polynomial curve, and the controller could be PID, adaptive PID, MPC, a fuzzy controller, etc. For example, Wang et al. [6] considered a seventh-degree polynomial curve to develop a lane-changing system. They preferred using a seventh-degree polynomial instead of lower-degree ones, as lower-degree polynomial curves are not as smooth as higher-degree ones [6, 7]. In another work, Chowdhri et al. [8] designed a new automatic system to perform an evasive lane-changing maneuver by tracking a desired reference path leveraging a nonlinear MPC.
They also took the brake system into account and incorporated the dynamics of the brake system into the designed controller. The second category is reinforcement learning-based approaches, through which an agent can learn a task by trial and error. These approaches are commonly model-free: the agent interacts with a stochastic environment based on the states and immediate rewards and tries to maximize the long-term reward. The initial mathematical model of reinforcement learning (RL) algorithms is the Markov decision process, through which the discrete stochastic environment for the agent is formalized [9]. Aiming at developing an RL-based lane-changing system, Ye et al. [10] devised a new lane-changing system using proximal policy optimization-based deep reinforcement learning. Also, in another endeavor, Mirkevska et al. [11] presented a new reinforcement learning-based approach combined with formal safety verification. They exploited 13 features and a deep Q-network to gain a fast learning rate; the simulation results showed that the designed system worked well. Although a plethora of works based on the two mentioned categories (path trajectory and RL-based approaches) have been conducted and have achieved great successes, they are not able to imitate the behavior of expert drivers. Path trajectory-based methods are commonly based on a third-degree polynomial, which is not perfectly smooth. Even though some papers suggest using higher-degree polynomial curves to circumvent the smoothness problem [6, 7], these paths cannot be a perfect alternative for the path taken by an expert driver. The smoothness problem also exists for RL-based approaches, as these methods try to learn the environment by trial and error; therefore, although the agent is able to find its path and finish its mission, the path might not be as smooth as the route taken by a skilled and expert driver. To address the mentioned gap, we developed a new system based on expert drivers' behavior data. We extracted several features and fed them into a convolutional neural network (CNN). The output of the CNN model is the steering wheel angle, which is sent to the control unit, which is based on an adaptive PD controller. The system is designed to mimic expert drivers' behavior. A byproduct of this paper will be to develop a haptic driving training simulation system to train novice drivers. The rest of the paper is organized as follows. The methodology employed in this research is elaborated in Section II. The results and the performance of the developed system are presented in Section III. Finally, discussion and conclusion are presented in Sections IV and V, respectively.

## II Methodology

### _Overview of the developed system_

In this research, we devised a new system that is able to mimic expert drivers' behavior for obstacle avoidance and lane-changing. To develop the system, four experienced drivers were asked to perform a slalom maneuver made up of three lane-changing and four longitudinal movement tasks (Fig. 1). Then, the driving data of the drivers were gathered and employed to train a CNN model in order to compute the proper steering wheel angle. The computed steering wheel angle is then sent to a control unit based on an adaptive PD controller. After that, the control unit calculates an appropriate torque and applies it to the steering wheel. Fig. 2 depicts the steps of the designed system. In the next subsection, we address the data collection process.
### _Data Collection_

To collect data, four expert drivers were asked to perform a slalom maneuver with a driving simulator located in the Nasir Driving Simulator Lab at the K.N. Toosi University of Technology. As shown in Fig. 1, the slalom maneuver includes three sets of cones/obstacles that lead to three lane changes and four longitudinal movements. The four expert drivers carried out and repeated the maneuver 573 times with non-fixed speeds ranging from 15 to 60 kilometers per hour. The data of 500 maneuvers were randomly selected for the training set and the rest of the maneuvers for the test set. The simulator constantly sampled and saved the driving data at a rate of 30 Hz. The driving data include various information such as speed, steering wheel angle, car heading angle, x and y positions, tire angle, car horn, car gear, etc. Also, it's worth mentioning that the simulator software was developed using OpenGL and Python scripts. Fig. 3 shows the simulator located in the Nasir Driving Simulator Lab.

### _Feature Extraction and the CNN model_

In this project, after carefully investigating various features and machine learning models, we concluded that 2D CNN models with seven input features outperform other models such as 1D convolutional neural networks, LSTM, GRU, shallow MLP, KNN, and SVR. Detailed explanations of the extracted features and the structure of the model are presented next. The input to a 2D CNN model must be a 2D array, whereas the seven features extracted at each time step of driving form a 1D array. To convert these 1D arrays to 2D arrays, we constructed 5x7 matrices, where each of the five rows contains the seven features in a different permutation. As mentioned, seven features were extracted. The first is about the state of the car's lateral motion. This feature takes three constant values, 1, 2, and 3, expressing turning left, turning right, and no turn (when moving parallel to cone sets), respectively. We designed this feature inspired by drivers' behavior, as a driver observes the vacant spaces ahead and decides to turn right or left. Also, when a driver moves parallel to the obstacle, the driver decides to move straight and not to turn right or left. The driver senses the circumstance with her/his eyes, while in our system this feature is obtained by the simulator at each time step. Fig. 4 demonstrates this feature. The second and third features are the lateral and longitudinal distances between the car head and the start/end of a cone set. When the car is heading toward a cone set, the distances are computed with regard to the start of the cone set. But when the car is moving parallel to a cone set, the distances are calculated with regard to the end of the cone set. Fig. 5 demonstrates these features better. It's worth noting that the lateral distance is simply computed by subtracting \(\chi_{car}\) from \(\chi_{cone}\) and can be positive or negative (Eq. 1), whereas the longitudinal distance is obtained by the inverse of \(1+(x_{cone}-x_{car})\), as shown in Eq. 2. Since \(x_{cone}\) is always bigger than \(x_{car}\) in the map of the simulator, the output of the subtraction is always positive. Consequently, the value of Eq. 2 is always between zero and one. When the car nears the start/end of an obstacle, the value of Eq. 2 nears one. Therefore, this feature becomes more meaningful.
\[f2=\chi_{cone}-\chi_{car} \tag{1}\]

\[f3=\frac{1}{1+(x_{cone}-x_{car})} \tag{2}\]

The fourth feature is the speed of the car at each time step, which is between 15 km/h and 60 km/h. The fifth feature is the car heading angle relative to the \(y\)-axis. The sixth and seventh features are the rate of change of the car heading angle and the rate of change of the steering wheel angle, respectively. We did not employ the previous steps of the steering wheel angle as input features because the output of the CNN model is actually the steering wheel angle; using previous steps of the steering wheel angle would convert our problem into a time series problem. During the project, whenever we employed the previous steps of the steering wheel angle as input features, the model acted like a predictive model and was not able to determine whether the steering wheel angle was appropriate or not. In other words, the model was predicting the trend of the steering wheel angle and did not care about performing the maneuver correctly. Therefore, we did not consider the previous steps of the steering wheel angle as input to the CNN model. As mentioned before, we observed that CNN models work better for this specific task; it is highly likely that the CNN model can take the correlation between the features into consideration. The CNN model is made up of three convolutional layers and three dense layers. All layers except the output layer use the Exponential Linear Unit (ELU) as the activation function. The output layer is responsible for computing the steering wheel angle and employs a linear activation function.

Figure 3: The simulator used in this study

The three convolutional layers have 32, 64, and 128 kernels, respectively, all with the same size of 2 by 2. The structure of the CNN model is shown in Fig. 6. Also, the Adam optimization algorithm was used as the optimizer to train and update the parameters of the model. Moreover, it's worth noting that adaptive learning rates led to better results when training the network. Therefore, we chose an adaptive learning rate with an initial value of 0.001 for the Adam algorithm. The model was trained for 18 epochs with a batch size of 4. The mean squared error (MSE) was used to evaluate the performance of the model during the training process. The training procedure of the model is demonstrated in Fig. 7. After training the model, we evaluated the network's performance in terms of the MSE and R\({}^{2}\)-squared criteria on the training and test sets. The R\({}^{2}\)-squared criterion is between zero and one; zero means no correlation, and one shows complete correlation between the output of the network and the desired values. From Table 1, we can observe that the model managed to mimic the drivers' behavior with an R\({}^{2}\)-squared of 0.83 on the test set.

### _Control Unit_

In the previous part, we mentioned how the CNN model can find the patterns behind the scenes of an expert driver's maneuvering and imitate the general trend of the driving behavior in the maneuver. In this section, we focus on implementing the designed model on the real steering wheel.
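For reference, a minimal Keras-style sketch of the CNN described above; the widths of the first two dense layers are assumptions, since the paper does not report them:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (2, 2), activation='elu', input_shape=(5, 7, 1)),
    layers.Conv2D(64, (2, 2), activation='elu'),
    layers.Conv2D(128, (2, 2), activation='elu'),
    layers.Flatten(),
    layers.Dense(128, activation='elu'),    # width assumed (not reported)
    layers.Dense(64, activation='elu'),     # width assumed (not reported)
    layers.Dense(1, activation='linear'),   # steering wheel angle
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss='mse')
# model.fit(X_train, y_train, epochs=18, batch_size=4)  # settings as reported above
```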
To accomplish this, we combined the neural network model with a PD control-based rule:

\[\theta_{d}=F_{NN}(features) \tag{3}\]

\[e_{\theta}=\theta_{a}-\theta_{d} \tag{4}\]

\[\tau_{g}=P_{g}e_{\theta}+D_{g}\dot{e}_{\theta} \tag{5}\]

In the above equations, \(\theta_{a}\) is the actual steering wheel angle, and \(\theta_{d}\) is the desired steering wheel angle produced by the neural network function, denoted by \(F_{NN}\). Using the gains (\(P_{g}\), \(D_{g}\)) alongside the PD control-based rule at each time step, \(\theta_{a}\) converges to \(\theta_{d}\); the rule produces the proper torque to drive the steering wheel toward the desired angle at each time step. As a result, the vehicle can perform the maneuver autonomously. More information about how \(P_{g}\) and \(D_{g}\) are adapted will be presented in future work.

## III Results

In this section, we investigate the performance of the developed system. To evaluate our method, we executed the CNN model and the controller on the simulator in real time. We observed that our method was able to steer the car perfectly and perform the slalom maneuver without any collision. Fig. 8 depicts and compares the steering wheel angle taken by an expert driver and by our method. In order to make a fair comparison, the experiment shown in Fig. 8 has been performed at the same initial location and based on the same speeds at each time step. According to Fig. 8, we can see that our method sometimes takes smoother angles. For example, by taking a meticulous look at the time interval between 5 and 10 seconds, we notice that our method takes a smoother steering wheel angle than the expert driver. Moreover, we can observe that at each positive or negative peak in Fig. 8, our system adopts a smaller angle compared to what the expert driver chooses, leading to a smoother obstacle avoidance. Also, the path and the coordinates of the car at each time step for the expert driver and our system are shown in Fig. 9. In Fig. 9, we can see that our system crossed the cone sets smoothly and without any collision. Fig. 10 shows the paths taken by the car in 17 trials. To perform the trials, the designed slalom maneuver was repeated 17 times at different longitudinal speeds. Our algorithm steered the car through the cone sets (red rectangles in Fig. 10) without any collision. It's worth mentioning that during the trials, our system only controlled the steering wheel and did not have any control over the accelerator pedal. This pedal was pressed at the discretion of the human driver. Therefore, as the car speed was not constant and was constantly changed by the driver, it was a big challenge for the algorithm to adapt itself and steer the car properly. The curves of speed and steering wheel angle for the mentioned 17 trials are illustrated in Fig. 11 and Fig. 12.

## IV Discussion

In this work, we extracted meaningful features from the drivers' dataset and trained a CNN model with the data to mimic the drivers' behavior and approximate the appropriate steering wheel angle. The initial results are promising when examining the experimental results of the developed system on the simulator. The developed system was able to perform a complex slalom maneuver consisting of three sets of cones. Finally, we can conclude that, contrary to the common approaches like path trajectory and RL-based approaches, which have a specific and certain path or policy, training supervised algorithms could be an alternative method. However, this approach has its merits and demerits.
Mimicking the expert driver's behavior could be counted as a merit of this approach. Also, these methods could outperform the other mentioned approaches if the designer of the system extracts more meaningful features and also manipulates the cost function of the model, as is common in physics-informed neural networks [12]. Regarding demerits, it should be said that gathering a dataset for each scenario could be time-consuming.

## V Conclusion

In this paper, a new method was developed to perform slalom maneuvers by mimicking expert drivers' behavior. Seven meaningful features were extracted from the dataset gathered from four drivers. The features were converted to 2D arrays and then employed as the input of a CNN model, which is responsible for approximating the appropriate steering wheel angle. Then, the steering wheel angle computed by the CNN model was sent to the control unit, which consists of an adaptive PD controller. The control unit applies the proper torque to the steering wheel of the driving simulator. Finally, the car performs the slalom maneuver and crosses the obstacles without any collision. We observed that the designed system functions well and is able to perform the maneuver smoothly. Also, by comparing the curves of the steering wheel angle taken by an expert driver and by our method, we observed that not only is the performance of the developed method close to the expert driver's (R\({}^{2}\)-squared of 0.83 based on Table 1), but our method also crosses the obstacles more safely and smoothly. In the future, we will develop a haptic driving training simulator that is able to teach novice drivers how to avoid obstacles.
2307.13907
Robustness Verification of Deep Neural Networks using Star-Based Reachability Analysis with Variable-Length Time Series Input
Data-driven, neural network (NN) based anomaly detection and predictive maintenance are emerging research areas. NN-based analytics of time-series data offer valuable insights into past behaviors and estimates of critical parameters like remaining useful life (RUL) of equipment and state-of-charge (SOC) of batteries. However, input time series data can be exposed to intentional or unintentional noise when passing through sensors, necessitating robust validation and verification of these NNs. This paper presents a case study of the robustness verification approach for time series regression NNs (TSRegNN) using set-based formal methods. It focuses on utilizing variable-length input data to streamline input manipulation and enhance network architecture generalizability. The method is applied to two data sets in the Prognostics and Health Management (PHM) application areas: (1) SOC estimation of a Lithium-ion battery and (2) RUL estimation of a turbine engine. The NNs' robustness is checked using star-based reachability analysis, and several performance measures evaluate the effect of bounded perturbations in the input on network outputs, i.e., future outcomes. Overall, the paper offers a comprehensive case study for validating and verifying NN-based analytics of time-series data in real-world applications, emphasizing the importance of robustness testing for accurate and reliable predictions, especially considering the impact of noise on future outcomes.
Neelanjana Pal, Diego Manzanas Lopez, Taylor T Johnson
2023-07-26T02:15:11Z
http://arxiv.org/abs/2307.13907v1
Robustness Verification of Deep Neural Networks using Star-Based Reachability Analysis with Variable-Length Time Series Input

###### Abstract

Data-driven, neural network (NN) based anomaly detection and predictive maintenance are emerging research areas. NN-based analytics of time-series data offer valuable insights into past behaviors and estimates of critical parameters like remaining useful life (RUL) of equipment and state-of-charge (SOC) of batteries. However, input time series data can be exposed to intentional or unintentional noise when passing through sensors, necessitating robust validation and verification of these NNs. This paper presents a case study of the robustness verification approach for time series regression NNs (TSRegNN) using set-based formal methods. It focuses on utilizing variable-length input data to streamline input manipulation and enhance network architecture generalizability. The method is applied to two data sets in the Prognostics and Health Management (PHM) application areas: (1) SOC estimation of a Lithium-ion battery and (2) RUL estimation of a turbine engine. The NNs' robustness is checked using star-based reachability analysis, and several performance measures evaluate the effect of bounded perturbations in the input on network outputs, i.e., future outcomes. Overall, the paper offers a comprehensive case study for validating and verifying NN-based analytics of time-series data in real-world applications, emphasizing the importance of robustness testing for accurate and reliable predictions, especially considering the impact of noise on future outcomes.

Keywords: Predictive Maintenance, Time Series Data, Neural Network Verification, Star-set Reachability Analysis, Noise Robustness Verification, Prognostics and Health Management

## 1 Introduction

Over time, Deep Neural Networks (DNNs) have shown tremendous potential in solving complex tasks, such as image classification, face detection, object detection, speech recognition, natural language processing, document analysis, etc., sometimes even outperforming humans [15, 16, 17]. This has motivated a spurt in investigating the applicability of DNNs in numerous real-world applications, such as biometric authentication, face authentication for mobile locking systems, malware detection, different bioinformatics applications, etc. When dealing with such sensitive information in these critical areas, safety, security, and verification thereof have become essential design considerations. Unfortunately, it has been demonstrated that state-of-the-art, well-trained networks can be easily deceived by minimal perturbations in the input, leading to erroneous predictions [11, 23, 35]. The most researched domain for verification of such networks involves image inputs, particularly safety and robustness checking of various classification neural networks [4, 7, 12, 22, 37, 40]. Previous research has analyzed feed-forward neural networks (FFNN [38]), convolutional neural networks (CNN [37]), and semantic segmentation networks (SSN [40]) using different set-based reachability tools, such as Neural Network Verification (NNV [19, 41]) and JuliaReach [5], among others. Input perturbations are not confined to image-based networks but have also been extended to other input types, including time series data or input signals with different noises in predictive maintenance applications [8, 42].
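To fix ideas, a bounded sensor noise on a multivariate time series can be described by element-wise lower and upper bounds around the nominal signal. A minimal sketch, assuming an input `x` of shape (features, time steps); the function is illustrative and not part of any verification tool's API:

```python
import numpy as np

def linf_noise_bounds(x, eps, rows=None):
    """Bounds of an l-infinity noise ball of radius eps, optionally restricted
    to selected feature rows (e.g., individual sensor channels)."""
    lb, ub = x.copy(), x.copy()
    idx = slice(None) if rows is None else rows
    lb[idx] -= eps
    ub[idx] += eps
    return lb, ub   # every signal within [lb, ub] is a possible noisy input
```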
One such use case is in the manufacturing industry, where data from process systems, such as IoT sensors and industrial machines, are stored for future analysis [10, 30]. Data analytics in this context provide insights and statistical information and can be used to diagnose past behavior [20, 45] and predict future behavior [6, 18, 34], maximizing industry production. This application is not only limited to manufacturing, but is also relevant in fields like healthcare digitalization [36, 44] and smart cities [32, 33]. Noisy input data, here, refers to data containing errors, uncertainties, or disturbances, caused by factors like sensor measurement errors, environmental variations, or other noise sources. While NN applications with image data have received significant attention, little work has been done in the domain of regression-type model verification, particularly with time series data in predictive maintenance applications. Regression-based models with noisy data are crucial for learning data representations and predicting future values, enabling fault prediction and anomaly detection in high-confidence, safety-critical systems [13, 28]. This motivated us to use verification techniques to validate the output of regression networks and ensure that the output(s) fall within a specific safe and acceptable range.

**Contributions.**
1. In this paper, we primarily focus on exploring a new case study, specifically examining time-series-based neural networks in two distinct industrial predictive maintenance application domains. We utilize the established concept of star-set-based reachability methods to analyze whether the upper and lower bounds of the output set adhere to industrial guidelines' permissible bounds. We develop our work as an extension of the NNV tool to formally analyze and explore regression-based NN verification for time series data using sound and deterministic reachability methods, and experiment on different discrete time signals to check if the output lies within pre-defined safe bounds.
2. Another significant contribution of our work is the flexibility of variable-length inputs in neural networks. This approach simplifies input manipulation and enhances the generalizability of network architectures. Unlike published literature that relied on fixed-sized windows [9, 24], which necessitated preprocessing and experimenting with window sizes, our method allows for flexibility in utilizing any sequence length. This flexibility improves the generalizability of reachability analysis.
3. We run an extensive evaluation on two different network architectures in two different predictive maintenance use cases. In terms of evaluation, we have introduced a novel robustness measure called Percentage Overlap Robustness (POR). Unlike the existing Percentage Sample Robustness (PR/PSR) [40], which considers only instances where reachable bounds remain entirely within permissible bounds, the proposed POR accounts for all instances with overlap.
4. Finally, we develop insights on evaluating the reachability analysis on those networks and possible future directions.

**Outline.** The paper is organized as follows: Section 2 provides the necessary context for the background; Section 3 details the adversarial noises; Section 4 defines the verification properties; Section 5 explains the reachability calculations for layers to accommodate variable-length input; Section 6 defines the research problem, and Section 7 describes the methodology, including dataset, network models, and input attacks.
Section 8 presents the experimental results, evaluation metrics, and their implications. Finally, Section 9 summarizes the main findings and suggests future research directions.

## 2 Preliminaries

This section introduces some basic definitions and descriptions necessary to understand the progression of this paper and the necessary evaluations on time series data.

### Neural Network Verification Tool and Star Sets

The Neural Network Verification (NNV) tool is a framework for verifying the safety and robustness of neural networks [19, 41]. It analyzes neural network behavior under various input conditions, ensuring safe and correct operation in all cases. NNV supports reachability algorithms like the over-approximate star set approach [37, 39], calculating reachable sets for each network layer. These sets represent all possible network states for a given input, enabling the verification of specific safety properties. NNV is particularly valuable for safety-critical applications, such as autonomous vehicles and medical devices, ensuring neural networks are trustworthy and reliable under all conditions, maintaining public confidence. For this paper, we have implemented our work as an extension of the NNV tool and used the star [Def. 1] based reachability analysis to obtain the reachable sets at the outputs of the neural networks.

Definition 1: A generalized star set (or simply star) \(\Theta\) is a tuple \(\langle c,V,P\rangle\) where \(c\in\mathbb{R}^{n}\) is the center, \(V=\{v_{1},v_{2},\cdots,v_{m}\}\) is a set of m vectors in \(\mathbb{R}^{n}\) called basis vectors, and \(P:\mathbb{R}^{m}\rightarrow\{\top,\bot\}\) is a predicate. The basis vectors are arranged to form the star's \(n\times m\) basis matrix. The set of states represented by the star is given as: \[\llbracket\Theta\rrbracket=\{x\ |\ x=c+\Sigma_{i=1}^{m}(\alpha_{i}v_{i})\ \text{and}\ P(\alpha_{1},\cdots,\alpha_{m})=\top\}. \tag{1}\] In this work, we restrict the predicates to be a conjunction of linear constraints, \(P(\alpha)\triangleq C\alpha\leq d\) where, for \(p\) linear constraints, \(C\in\mathbb{R}^{p\times m}\), \(\alpha\) is the vector of \(m\)-variables, i.e., \(\alpha=[\alpha_{1},\cdots,\alpha_{m}]^{T}\), and \(d\in\mathbb{R}^{p\times 1}\). An alternative approach to defining a Star set for time series data involves using the upper and lower bounds of the noisy input while centering the actual input. These bounds on each input parameter, along with the predicates, create the complete set of constraints the optimizer will solve to generate the initial set of states. A 4 \(\times\) 4 time series input with a bounded disturbance \(b\in[-2,2]\) applied at time instance 2 of feature 1, i.e., position (1, 2), can be described as the Star depicted in Fig. 1.

### Time Series and Regression Neural Network

#### 2.2.1 Signal.

The definition of a 'signal' varies depending on the applicable fields. In the area of signal processing, a _signal_ \(S\) can be defined as some physical quantity that varies with respect to (w.r.t.) some independent dimension (e.g., space or time) [26]. In other words, a signal can also be thought of as a function that carries information about the behavior of a system or properties of some physical process [27]. \[S=g(q) \tag{2}\] where \(q\) is space, time, etc. Depending on the nature of the spaces signals are defined over, they can be categorized as discrete or continuous. Discrete time signals are also known as time series data.
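As a concrete illustration of Definition 1 and the Fig. 1 scenario, the following Python sketch assembles such a star set; the `Star` container and the variable names are our own simplified stand-ins, not the NNV tool's (MATLAB) implementation.

```python
import numpy as np

# Minimal star-set container following Definition 1: a center c, a basis
# matrix V, and a linear predicate C @ alpha <= d on the coefficients.
class Star:
    def __init__(self, c, V, C, d):
        self.c, self.V, self.C, self.d = c, V, C, d

# Fig. 1 scenario: a 4 x 4 input (4 features, 4 time steps), flattened to a
# 16-dimensional vector, with a bounded disturbance b in [-2, 2] applied at
# feature 1, time instance 2 (0-based flat index 0 * 4 + 1).
x = np.zeros((4, 4))                 # placeholder input signal
c = x.flatten()                      # center = the unperturbed input
v = np.zeros(16)
v[0 * 4 + 1] = 1.0                   # one basis vector for the disturbance
V = v.reshape(-1, 1)                 # 16 x 1 basis matrix
C = np.array([[1.0], [-1.0]])        # encodes -2 <= alpha <= 2
d = np.array([2.0, 2.0])
noisy_input_set = Star(c, V, C, d)
```

With a single basis vector, the star describes exactly the line segment of inputs obtained by sweeping the disturbance over \([-2,2]\) at that one position.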
Figure 1: Star for time-series data with four feature values (rows) and four time steps (columns).

We next define the specific class of signals considered in this paper, namely time series.

Definition 2: A **time series signal** \(S_{T}\) is defined as an ordered sequence of values of a variable (or variables) at different time steps. In other words, a time series signal is an ordered sequence of discrete-time data of one or multiple features2. Footnote 2: Each **feature** is a measurable piece of data that is used for analysis. \[\begin{split} S_{T}=s_{t_{1}},s_{t_{2}},s_{t_{3}},...\\ T=t_{1},t_{2},t_{3},...\end{split} \tag{3}\] where \(t_{1},t_{2},t_{3},\ldots\) is an ordered sequence of instances in time \(T\) and \(S_{T}=s_{t_{1}},s_{t_{2}},s_{t_{3}},\ldots\) are the signal values at those time instances for each \(t=t_{i}\). In what follows, we sometimes use 'signal' to refer to the 'time series signal.' Next, we define the specific types of neural networks considered in this paper, namely regression neural networks (specifically time series regression neural networks).

Definition 3: A **time series regression neural network (TSRegNN)** \(f\) is a nonlinear/partially-linear function that maps each time-stamped value \(x(i,j)\) (for the \(i^{th}\) feature and \(j^{th}\) timestamp) of a single or multifeatured time series input \(\boldsymbol{x}\) to the output \(\boldsymbol{y}\). \[f:\ \boldsymbol{x}\in\mathbb{R}^{n_{f}\times t_{s}}\rightarrow\boldsymbol{y}\in\mathbb{R}^{p\times q} \tag{4}\] where \(t_{s},n_{f}\) are the time-sequence length and the number of features of the input data, respectively, \((j,i)\in\{1,\ldots,t_{s}\}\times\{1,\ldots,n_{f}\}\) are the time steps and corresponding feature indices, respectively, and \(p\) is the number of values present in the output, while \(q\) is the length of each of the output values; it can either be equal to \(t_{s}\) or not, depending on the network design. Here, each row of \(\boldsymbol{x}\) represents a timestamped feature variable.

### Reachability of a Time Series Regression Network

In this section, we provide a description of how the reachability of a NN layer, and of the NN as a whole, is computed for this study.

Definition 4: A **layer** \(L\) of a TSRegNN is a function \(h:\ u\in\mathbb{R}^{j}\to v\in\mathbb{R}^{p}\), with input \(u\in\mathbb{R}^{j}\) and output \(v\in\mathbb{R}^{p}\) defined as follows \[v=h(u) \tag{5}\] where the function \(h\) is determined by parameters \(\theta\), typically defined as a tuple \(\theta=\langle\sigma,W,b\rangle\) for fully-connected layers, where \(W\in\mathbb{R}^{p\times j}\), \(b\in\mathbb{R}^{p}\), and activation function \(\sigma:\mathbb{R}^{p}\to\mathbb{R}^{p}\). Thus, the fully connected NN layer is described as \[v=h(u)=\sigma(\mathbf{W}\times u+\mathbf{b}) \tag{6}\] For convolutional NN layers, \(\theta\) may include parameters like the filter size, padding, or dilation factor, and the function in Eq. 6 may need alterations.

Definition 5: Let \(h:\ u\in\mathbb{R}^{j}\to v\in\mathbb{R}^{p}\) be a NN layer as described in Eq. 5. The **reachable set** \(\mathcal{R}_{h}\), with input set \(\mathcal{I}\subseteq\mathbb{R}^{j}\), is defined as \[\mathcal{R}_{h}\triangleq\{v\ |\ v=h(u),\ u\in\mathcal{I}\} \tag{7}\]

**Reachability analysis (or shortly, reach) of a TSRegNN \(f\)** on a Star input set \(I\) is similar to the reachable set calculations for a CNN [37] or FFNN [38], the only difference being that those previous works addressed classification networks.
\[Reach(f,I):\ I\rightarrow\mathcal{R}_{ts} \tag{8}\] We call \(\mathcal{R}_{ts}(I)\) the _output reachable set_ of the TSRegNN corresponding to the input set \(I\). For a regression-type NN, the output reachable set can be calculated as a step-by-step process of constructing the reachable sets for each network layer. \[\mathcal{R}_{L_{1}} \triangleq\{v_{1}\ |\ v_{1}=h_{1}(x),\ x\in\mathcal{I}\},\] \[\mathcal{R}_{L_{2}} \triangleq\{v_{2}\ |\ v_{2}=h_{2}(v_{1}),\ v_{1}\in\mathcal{R}_{L_{1}}\},\] \[\vdots\] \[\mathcal{R}_{ts}=\mathcal{R}_{L_{k}} \triangleq\{v_{k}\ |\ v_{k}=h_{k}(v_{k-1}),\ v_{k-1}\in\mathcal{R}_{L_{k-1}}\},\] where \(h_{k}\) is the function represented by the \(k^{th}\) layer \(L_{k}\). The reachable set \(\mathcal{R}_{L_{k}}\) contains all outputs of the neural network corresponding to all input vectors \(x\) in the input set \(\mathcal{I}\).

## 3 Adversarial Noise

In the case of time series samples, while the sensor transmits the sampled data, sensor noise might get added to the original data. One example of such noise is sensor vibration, but sometimes the actual sources are not even known by the sensor providers [21].

Definition 6: A **noise** can be defined as some unintentional, usually small-scale signal which, when added to the primary signal, can cause malfunctioning of equipment on an industrial premises. Mathematically, a noisy signal \(s^{noise}\) can be produced by a linear parameterized function \(g_{\epsilon,s^{noise}}(\cdot)\) that takes an input signal and produces the corresponding noisy signal. \[s^{noise}=g_{\epsilon,s^{noise}}(s)=s+\Sigma_{i=1}^{n}\epsilon_{i}\cdot s_{i}^{noise} \tag{9}\] For time series data, we can also model the noise as a set of unit vectors associated with a coefficient vector \(\epsilon\) at each time step \(i\), where the value of the coefficient vector \(\epsilon\) is unknown but bounded within a range \([\underline{\epsilon},\overline{\epsilon}]\), i.e., \(\underline{\epsilon}_{i}\leq\epsilon_{i}\leq\overline{\epsilon}_{i}\).

#### 3.1 Types of Possible Noises.

For an input sequence with \(t_{s}\) time instances and \(n_{f}\) features, there can be four types of noises (\(l_{\infty}\) norm) [A.1], based on their spread over the signal. They can be categorized as below:
1. **Single Feature Single-instance Noise (SFSI)**, i.e., perturbing a feature value only at a particular instance (\(t\)) by a certain percentage around the actual value. \[s^{noise}=g_{\epsilon,s^{noise}}(s)=s+\epsilon_{t}\cdot s_{t}^{noise}\] (10)
2. **Single Feature All-instances Noise (SFAI)**, i.e., perturbing a specific feature throughout all the time instances by a certain percentage around the actual values of a particular feature. \[s^{noise}=g_{\epsilon,s^{noise}}(s)=s+\Sigma_{i=1}^{n}\epsilon_{i}\cdot s_{i}^{noise}\] (11)
3. **Multifeature Single-instance Noise (MFSI)**, i.e., perturbing all feature values but only at a particular instance (\(t\)), following Eq. 10 for all features.
4. **Multifeature All-instance Noise (MFAI)**, i.e., perturbing all feature values throughout all the instances, following Eq. 11 for all features.
A sample plot for all four types of noises is shown in [A.8].

## 4 Verification Properties

Verification properties can be categorized into two types: local properties and global properties. A local property is defined for a specific input \(x\) at time instance \(t\) or a set of points \(X\) in the input space \(\mathbb{R}^{n_{f}\times t_{s}}\). In other words, a local property must hold for certain specific inputs.
On the other hand, a global property [43] is defined over the entire input space \(\mathbb{R}^{n_{f}\times t_{s}}\) of the network model and must hold for all inputs without any exceptions.

#### 4.1 Robustness.

Robustness refers to the ability of a system or a model to maintain its performance and functionality under various challenging conditions, uncertainties, or perturbations. It is a desirable quality that ensures the system's reliability, resilience, and adaptability in the face of changing or adverse circumstances. For an input perturbation measured by \(\delta\) and admissible output deviation \(\epsilon\), the 'delta-epsilon' formulation for the desired robustness property can be written as: \[||x^{\prime}-x||_{\infty}<\delta\implies||f(x^{\prime})-f(x)||_{\infty}<\epsilon \tag{12}\] where \(x\) is the original input belonging to the input space \(\mathbb{R}^{n_{f}\times t_{s}}\), \(x^{\prime}\) is the noisy input, \(f(x^{\prime})\) and \(f(x)\) are NN model outputs for, respectively, \(x^{\prime}\) and \(x\), \(\delta\) is the maximum measure of the noise added, and \(\epsilon\) is the maximum deviation in the output because of the presence of noise (\(\delta,\epsilon\in\mathbb{R}_{>0}\)).

**Local Robustness.** Given a TSRegNN \(f\) and an input time series signal \(S\), the network is called **locally robust** to any noise \(\mathcal{A}\) if and only if the estimated output reachable bounds for a particular time step corresponding to the noisy input lie between predefined allowable bounds w.r.t. the actual signal. The **Robustness Value (RV)** of a time series signal \(S\) is a binary variable, which indicates the local robustness of the system. RV is 1 when the estimated output range for a particular time instance (\(t\)) lies within the allowable range, making it locally robust at \(t\); otherwise, RV is 0. \[RV=1\iff\left(LB_{t}^{est}\geq LB_{t}^{allow}\right)\wedge\left(UB_{t}^{est}\leq UB_{t}^{allow}\right);\ \text{otherwise}\ RV=0,\] where \(LB_{t}^{est}\) and \(UB_{t}^{est}\) are the estimated bounds and \(LB_{t}^{allow}\) and \(UB_{t}^{allow}\) are the allowable bounds.

Definition 7: Percentage Sample Robustness (PR) of a TSRegNN corresponding to any noisy input is defined as \[PR=\frac{N_{robust}}{N_{total}}\times 100\%, \tag{13}\] where \(N_{robust}\) is the total number of robust time instances, and \(N_{total}\) is the total number of time steps in the time series signal. Percentage robustness can be used as a measure of **global robustness** [43] of a TSRegNN w.r.t. any noise. In this study, we adapt the concept of Percentage Robustness (PR), previously used in image-based classification or segmentation neural networks [40] to assess the network's ability to correctly classify/segment perturbed inputs for a given number of images/pixels, to time-series inputs.

Definition 8: Percentage Overlap Robustness (POR) of a TSRegNN corresponding to any noisy input is defined as \[POR=\frac{\Sigma_{i=1}^{N_{total}}(PO_{i})}{N_{total}}\times 100\%, \tag{14}\] where \(N_{total}\) is the total number of time instances in the time series signal, and \(PO_{i}\) is the percentage overlap between the estimated and allowed ranges at each time step w.r.t. the estimated range \[PO=\frac{Overlapped\ Range}{Estimated\ Range} \tag{15}\] Here \(Overlapped\ Range\) is the overlap between the estimated range and the allowable range for a particular time step, whereas \(Estimated\ Range\) is the output estimation given by the TSRegNN for that time step.
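Under these definitions, both measures follow directly from per-time-step bound comparisons. The NumPy sketch below (helper names are ours; as an assumption on our part, a degenerate zero-width estimated range is counted as fully overlapping) computes RV, PR, and POR from given reachable and allowable bounds.

```python
import numpy as np

def pr_and_por(lb_est, ub_est, lb_allow, ub_allow):
    """PR (Def. 7) and POR (Def. 8) from per-time-step bounds (1-D arrays)."""
    # Robustness Value per time step: estimated range entirely inside allowable.
    rv = (lb_est >= lb_allow) & (ub_est <= ub_allow)
    pr = 100.0 * rv.mean()
    # Percentage overlap (Eq. 15) per time step, clipped at zero overlap.
    overlap = np.clip(np.minimum(ub_est, ub_allow)
                      - np.maximum(lb_est, lb_allow), 0.0, None)
    est_range = ub_est - lb_est
    po = np.divide(overlap, est_range,
                   out=np.ones_like(overlap), where=est_range > 0)
    por = 100.0 * po.mean()
    return pr, por
```

By construction, POR is never smaller than PR: every time step counted in PR contributes a full overlap of 1 to POR, while partially overlapping steps still contribute their fraction.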
Percentage overlap robustness can also be used as a measure of **global robustness** [43] of a TSRegNN. When selecting robustness properties, it is crucial to consider the specific application area. If the application allows for some flexibility in terms of performance, POR can be utilized. On the other hand, if the application requires a more conservative approach, PR should be considered. An example showing calculations for the robustness measures is shown in [A.1].

#### 4.2 Monotonicity.

In PHM applications, the monotonicity property refers to the system's health indicator, i.e., the degradation parameter, exhibiting a consistent increase or decrease as the system approaches failure. PHM involves monitoring a system's health condition and predicting its Remaining Useful Life (RUL) to enable informed maintenance decisions and prevent unforeseen failures. For detailed mathematical modeling of the monotonicity property, please refer to [31] and the latest report on formal methods [9]. In general, for a TSRegNN \(f:\ \mathbf{x}\in\mathbb{R}\rightarrow\mathbf{y}\in\mathbb{R}\) with single-featured input and output spaces, at any time instance \(t\), the property for a monotonically decreasing output can be written as: \[\begin{split}\forall x^{\prime}\exists\delta:x\leq x^{\prime}\leq x+\delta\implies f(x^{\prime})\leq f(x)\\ \forall x^{\prime}\exists\delta:x-\delta\leq x^{\prime}\leq x\implies f(x^{\prime})\geq f(x)\end{split} \tag{16}\] This is a local monotonicity property. If it holds true for the entire time range, then it can be considered a global property [43]. In this paper, the monotonicity property is only relevant for the PHM example of RUL estimation.

## 5 Reachability of Specific Layers to Allow Variable-Length Time Series Input

### Reachability of a Fully-Connected Layer.

We consider a fully-connected layer with the following parameters: the weights \(W_{fc}\in\mathbb{R}^{op\times ip}\) and the bias \(b_{fc}\in\mathbb{R}^{op\times 1}\), where \(op\) and \(ip\) are, respectively, the output and input sizes of the layer. The output of this fully connected layer w.r.t. an input \(i\in\mathbb{R}^{ip\times T_{s}}\) will be \[o=W_{fc}\times i+b_{fc},\qquad o\in\mathbb{R}^{op\times T_{s}}.\] Thus, the layer does not alter the time dimension of the output for a variable-length time sequence, making the functionality of this layer independent of the time series length. The reachability of a fully-connected layer is given by the following lemma.

Lemma 1: _The reachable set of a fully-connected layer with a Star input set \(I=\langle c,V,P\rangle\) is another Star \(I^{\prime}=\langle c^{\prime},V^{\prime},P^{\prime}\rangle\) where \(c^{\prime}=W_{fc}\times c+b_{fc}\), the matrix multiplication of \(c\) with the weight matrix \(W_{fc}\) plus the bias; \(V^{\prime}=\{v^{\prime}_{1},...,v^{\prime}_{m}\}\), where \(v^{\prime}_{i}=W_{fc}\times v_{i}\) is the matrix multiplication of the weight matrix and the \(i^{th}\) basis vector; and \(P^{\prime}=P\)._

### Reachability of a 1D Convolutional Layer.

We consider a 1d convolution layer with the following parameters: the weights \(W_{conv1d}\in\mathbb{R}^{w_{f}\times nc\times fl}\) and the bias \(b_{conv1d}\in\mathbb{R}^{1\times fl}\), the padding size \(P\), the stride \(S\), and the dilation factor \(D\); where \(w_{f}\), \(nc\), and \(fl\) are the filter size, number of channels, and number of filters, respectively.
The output of this 1d convolution layer w.r.t. an input \(i\in\mathbb{R}^{ip\times T_{s}}\) will be \[o=W^{\prime}_{conv1d}\cdot i^{\prime}+b_{conv1d},\qquad o\in\mathbb{R}^{fl\times T^{\prime}_{s}},\] where the dot product is taken along the time dimension for each filter, \(T_{s}^{\prime}=T_{s}+T_{d}-T_{fl}\) is the new time series length at the output, and \(T_{d},T_{fl}\) are the time lengths contributed by the dilation factor and the 1d convolution function, respectively. \(W^{\prime}_{conv1d}\) is the modified weight matrix after adding dilation, and \(i^{\prime}\) is the modified input after padding. We can see that when \(T_{d}\) becomes equal to \(T_{fl}\) for any convolution layer, the layer functionality becomes independent of the length of the time series. The reachability of a 1d convolution layer is given by the following lemma.

Lemma 2: _The reachable set of a 1d convolution layer with a Star input set \(I=\langle c,V,P\rangle\) is another Star \(I^{\prime}=\langle c^{\prime},V^{\prime},P^{\prime}\rangle\) where \(c^{\prime}=W_{conv1d}\cdot c\), the 1d convolution applied to the center \(c\) with weight matrix \(W_{conv1d}\); \(V^{\prime}=\{v_{1}^{\prime},...,v_{m}^{\prime}\}\), where \(v_{i}^{\prime}=W_{conv1d}\cdot v_{i}\) is the 1d convolution operation with zero bias applied to the basis (generator) vectors, i.e., only using the weights of the layer; and \(P^{\prime}=P\)._

## 6 Robustness Verification Problem Formulation

We consider the verification of the robustness and the monotonicity properties.

Problem 1 (**Local Robustness Property**): Given a TSRegNN \(f\), a time series signal \(S\), and a noise \(\mathcal{A}\), prove whether the network is locally robust or non-robust [Sec. 4] w.r.t. the noise \(\mathcal{A}\); i.e., whether the estimated bounds obtained through the reachability calculations lie within the allowable range of the actual output for the particular time instance.

Problem 2 (**Global Robustness Property**): Given a TSRegNN \(f\), a set of \(N\) consecutive time-series signals \(\mathbf{S}=\{S_{1},\ldots,S_{N}\}\), and a noise \(\mathcal{A}\), compute the percentage robustness values (PR [Def. 7] and POR [Def. 8]) corresponding to \(\mathcal{A}\).

Problem 3 (**Local Monotonicity Property**): Given a TSRegNN \(f\), a set of \(N\) consecutive time-series signals \(\mathbf{S}=\{S_{1},\ldots,S_{N}\}\), and a noise \(\mathcal{A}\), show that both the estimated RUL bounds of the network [Eq. 16] corresponding to a noisy input \(S_{t}^{\prime}\) at any time instance \(t\) are monotonically decreasing.

To get an idea of the global performance [43] of the network, local stability properties have been formulated and verified for each point in the test dataset for 100 consecutive time steps. The core step in solving these problems is to solve the local properties of a TSRegNN \(f\) w.r.t. a noise \(\mathcal{A}\). It can be done using over-approximate reachability analysis, computing the 'output reachable set' \(\mathcal{R}_{ts}=Reach(f,I)\) that provides an upper and lower bound estimation corresponding to the noisy input set \(I\). In this paper, we propose using percentage values as robustness measures for verifying neural networks (NN). We conduct reachability analysis on the output set to ensure it stays within predefined safe bounds specified by permissible upper and lower bounds. The calculated overlap or sample robustness, expressed as a percentage value, represents the NN's robustness achieved through the verification process under different noise conditions.
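To make the layer-wise reachability of Section 5 concrete, the sketch below propagates a star through the two layer types of Lemmas 1-2 and, assuming the simple box predicate \(-\epsilon\le\alpha_{i}\le\epsilon\), extracts interval output bounds; the shapes and helper names are our own assumptions (for general predicates, NNV instead solves linear programs to obtain bounds).

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def fc_reach(c, V, W, b):
    """Lemma 1: affine map of a star; the bias acts on the center only.
    c: (n,), V: (m, n) basis vectors stacked row-wise, W: (p, n), b: (p,)."""
    return W @ c + b, V @ W.T

def conv1d_reach(c, V, W, b):
    """Lemma 2: 'valid' 1-d convolution (cross-correlation, as in deep
    learning); the bias acts on the center only.
    c: (nc, Ts), V: (m, nc, Ts), W: (fl, nc, wf), b: (fl,)."""
    def conv(x):
        win = sliding_window_view(x, W.shape[2], axis=1)  # (nc, Ts', wf)
        return np.einsum("fck,cok->fo", W, win)           # (fl, Ts')
    return conv(c) + b[:, None], np.stack([conv(v) for v in V])

def box_bounds(c, V, eps):
    """Output bounds for the box predicate -eps <= alpha_i <= eps."""
    radius = eps * np.abs(V).sum(axis=0)  # sum over the basis-vector axis
    return c - radius, c + radius
```

Chaining `fc_reach`/`conv1d_reach` layer by layer and calling `box_bounds` on the final star mirrors the step-by-step construction of \(\mathcal{R}_{L_{1}},\ldots,\mathcal{R}_{L_{k}}\) above.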
The proposed solution takes a sound and incomplete approach to verify the robustness of regression neural networks with time series data. The approach over-approximates the reachable set, ensuring that any input point within the set will always have an output point contained within the reachable output set (sound [Def. 9]). However, due to the complexities of neural networks and the over-approximation nature of the approach, certain output points within the reachable output set may not directly correspond to specific input points (incomplete [Def. 10]). Over-approximation is commonly used in safety verification and robustness analysis of complex systems due to its computational efficiency and reduced time requirements compared to exact methods.

## 7 Experimental Setup

### Dataset Description

For evaluation, we have considered two different time series datasets for PHM of a Li-ion battery and a turbofan engine.

Battery State-of-Charge Dataset (BSOC) [14]: This dataset is derived from a new 3Ah LG HG2 cell tested in an 8 cu.ft. thermal chamber using a 75 amp, 5-volt Digatron Firing Circuits Universal Battery Tester with high accuracy (0.1% of full scale) for voltage and current measurements. The main focus is to determine the State of Charge (SOC) of the battery, measured as a percentage, which indicates the charge level relative to its capacity. SOC for a Li-ion battery depends on various features, including voltage, current, temperature, and average voltage and current. The data is obtained from the 'LG_HG2_Prepared_Dataset_McMasterUniversity_Jan_2020', readily available in the dataset folder [14]. The training data consists of a single sequence of experimental data collected while the battery was subjected to an electric-vehicle driving-cycle load at an external temperature of 25 degrees Celsius. The test dataset contains experimental data with an external temperature of -10 degrees Celsius.

Turbofan Engine Degradation Simulation Data Set (TEDS) [29, 2]: This dataset is widely used for predicting the Remaining Useful Life (RUL) of turbofan jet engines [2]. Engine degradation simulations are conducted using C-MAPSS (Commercial Modular Aero-Propulsion System Simulation) with four different sets, simulating various operational conditions and fault modes. Each engine has 26 different feature values recorded at different time instances. To streamline computation, features with low variability (in the spirit of Principal Component Analysis [25]) are removed to avoid negative impacts on the training process. The remaining 17 features [A.5, A.4] are then normalized using the z-score (mean-standard deviation) for training. The training subset comprises time series data for 100 engines, but for this paper, we focus on data from only one engine (FD001). For evaluation, we randomly selected engine 52 from the test dataset.

### Network Description

The network architecture used for training the BSOC dataset, partially adopted from [1], is a regression CNN, as shown in [Fig 9, A.7]. The network has five input features which correspond to one SOC value. Therefore, the TSRegNN for the BSOC dataset can be represented as: \[\begin{split} f:\ x\in\mathbb{R}^{5\times t_{s}}\to y\in\mathbb{R}^{1\times t_{s}}\\ \widehat{SOC}_{t_{s}}=f(t_{s})\end{split} \tag{17}\] The network architecture used for training the TEDS dataset is also a regression CNN, adopted from [3] and shown in [Fig 9, A.7]. The input data is preprocessed to focus on 17 features, corresponding to one RUL value for the engine.
Therefore, the TSRegNN for the TEDS dataset can be represented as: \[\begin{split} f:\ x\in\mathbb{R}^{17\times t_{s}}\to y\in\mathbb{R}^{1\times t_{s}}\\ \widehat{RUL}_{t_{s}+1}=f(t_{s})\end{split} \tag{18}\] The output's \(t_{s}^{th}\) value represents the desired estimation of SOC or RUL, given the series of past \(t_{s}\) values for each feature variable.

## 8 Experimental Results and Evaluation

All experiments reported in this paper were conducted on a 64-bit Windows 10 computer with an Intel(R) Core(TM) i7-8850H processor and 16 GB RAM. For all four noise scenarios [Sec. 3], local and global (for 100 consecutive time steps) robustness properties are considered for both datasets. The local monotonicity property is only considered for the turbine RUL estimation example.

Battery State-of-Charge Dataset (BSOC): In this dataset, the output value (SOC) is supposed to be any value between 0 and 1 (or 0 and 100%). But, for the instances where the lower bound is negative, we instead treat it as 0 because a negative SOC does not provide any meaningful implications. For SFSI, a randomly chosen input feature (here feature 3) is perturbed only at the last time step (\(t_{30}\)), whereas for SFAI, noise is added to that feature throughout all time instances of the input signal. The effect of four different noise values, 1%, 2.5%, 5% and 10% of the mean (\(\mu\)), is then evaluated using over-approximate star reachability analysis [Sec. 2.3] on 100 consecutive input signals, each with 30 time instances. We considered \(\pm 5\%\) around the actual SOC value as the allowable bounds. For all the noises, two different robustness values, PR [Def. 7] and POR [Def. 8], are then calculated, and comparative results are shown below in Table 1.

**Observation and Analysis:** Fig. 2 shows a sample plot of gradually increasing estimation bounds with increasing MFSI noise. We can see from the figure that, for each time instance, the system becomes locally non-robust as the noise value increases. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \(noise\) & \(PR_{SFSI}\) & \(POR_{SFSI}\) & \(avgRT_{SFSI}(s)\) & \(PR_{SFAI}\) & \(POR_{SFAI}\) & \(avgRT_{SFAI}(s)\) \\ \hline 1 & 100 & 100 & 0.7080 & 100 & 100 & 20.9268 \\ 2.5 & 100 & 100 & 0.7080 & 100 & 100 & 20.9991 \\ 5 & 100 & 100 & 0.7116 & 100 & 100 & 21.0729 \\ 10 & 100 & 100 & 0.7027 & 100 & 100 & 21.0780 \\ \hline \hline \(noise\) & \(PR_{MFSI}\) & \(POR_{MFSI}\) & \(avgRT_{MFSI}(s)\) & \(PR_{MFAI}\) & \(POR_{MFAI}\) & \(avgRT_{MFAI}(s)\) \\ \hline 1 & 100 & 100 & 0.7653 & 100 & 100 & 36.1723 \\ 2.5 & 0 & 73.87 & 0.8251 & 0 & 73.87 & 59.0588 \\ 5 & 0 & 35.95 & 0.9026 & 0 & 35.95 & 91.6481 \\ 10 & 0 & 17.89 & 1.1051 & 0 & 17.89 & 163.7568 \\ \hline \hline \end{tabular} \end{table}

Table 1: Global Robustness: Percentage Robustness (PR) for noises for 100 consecutive time steps

Figure 2: Allowable (blue) and reachable (red) bounds for the battery SOC dataset for 100 consecutive time steps and 2 different SFAI noise values, 1% (upper) and 2.5% (lower), respectively

Figure 3: Percentage Robustness and runtime plots w.r.t. increasing noise

Table 1 presents the network's overall performance, i.e., the percentage robustness measures PR [Def. 7] and POR [Def. 8] and the average verification runtime (avgRT), with respect to each noise.
The percentage robustness values start decreasing and the average (as well as total) runtime starts increasing as the measure of noise increases for MFAI and MFSI, but for SFSI and SFAI they remain the same for the noise perturbations considered. This is because in the first case the noise is added to all the features, increasing the cumulative effect of the disturbance on the output estimation. In the other case, the noise is added only to a single feature, assuming that not all features will get polluted by noise simultaneously, and the reachable bounds remain in the acceptable range. A plot of robustness values and the total runtime is shown in Fig. 3. We can also see that the decrease in POR values for MFSI and MFAI is smaller compared to the PR values with increasing noise because, for the PR calculation, only those time steps are considered where the estimated range falls entirely within the allowed range, whereas for the POR calculation, even if some part of the estimated range goes outside the allowable range, its fractional contribution is still considered. Another interesting observation here is that the robustness measures for both SFSI and SFAI are the same; however, the computations for SFAI take roughly thirty times longer than those for SFSI (cf. Table 1). A similar trend is observed for the MFSI and MFAI cases, but with an even higher time taken for MFAI. The possible reason for this observation could be that, while the data is subjected to perturbations across all time instances, the noise added to the final time step has the most significant impact on the output.

Turbofan Engine Degradation Simulation Data Set (TEDS): In this dataset, the acceptable RUL bounds are considered to be \(\pm 10\) of the actual RUL. For instances where the lower bound is negative, we assume those values to be 0 as well. We then calculate the percentage robustness measures, PR [Def. 7], POR [Def. 8], and average verification runtime (avgRT), for an input set with all 100 consecutive data points, each having 30 time instances. The results for three different noise values, 0.1%, 0.5%, and 1% of the mean (\(\mu\)), are presented in Table 2. For SFSI and SFAI noises, we randomly choose a feature (feature 7, representing sensor 2) for noise addition. The noise is added to the last time step (\(t_{30}\)) of each data sample for the SFSI and MFSI noises. The results of the MFAI noise have been omitted due to scalability issues, as it is computationally heavy and time-consuming.3 Footnote 3: The MFAI noise, i.e., adding the \(L_{\infty}\) norm to all feature values across all time instances, significantly increases the input-set size compared to other noise types. This leads to computationally expensive calculations for layer-wise reachability, resulting in longer run times. Moreover, noise in an industrial setting affecting all features over an extended period is unlikely. Considering these factors, we decided to exclude the results of the MFAI noise for the TEDS dataset from our analysis.

For verifying the local monotonicity of the estimated output RUL bounds at a particular time instance, we have fitted the previously estimated RUL bounds together with the current estimate to a linear equation, as shown in Fig. 4. This confirms the monotonically decreasing nature of the estimated RUL at any time instance. Table 2 presents the network's overall performance, i.e., the percentage robustness measures and average verification runtime, with respect to each noise. Contrary to the other dataset, we see that the percentage robustness measures corresponding to the SFAI and SFSI noises differ.
Interestingly, while the noise value increases, the PR and POR for SFSI remain the same, whereas the robustness measures for SFAI decrease. However, the performance metrics for MFSI are the same as those for SFSI, except for the runtime. This might be because, for both SFSI and MFSI, the noise is added only at a single time instance, whereas for SFAI, the noise is added at all time instances, resulting in an increased cumulative effect of the disturbance on the output. Our results consistently show higher POR values than PR values in Tables 1-2. Since we assess output reachable bounds using \(L_{\infty}\) perturbations in the input, we acknowledge the significance of cases where reachable sets overlap with permissible bounds but do not entirely fall within them. In summary, PR takes a more conservative approach, while POR captures the relationship between output reachable bounds and permissible bounds more accurately.

## 9 Conclusion and Future Work

This paper explores formal-method-based reachability analysis of variable-length time series regression neural networks (NNs) using approximate Star methods in the context of predictive maintenance, which is crucial with the rise of Industry 4.0 and the Internet of Things. The analysis considers sensor noise introduced in the data. Evaluation is conducted on two datasets, employing a unified reachability analysis that handles varying features and variable time sequence lengths while analyzing the output against acceptable upper and lower bounds. Robustness and monotonicity properties are verified for the TEDS dataset. Real-world datasets are used, but further research is needed to establish stronger connections between practical industrial problems and performance metrics. The study opens new avenues for exploring perturbation contributions to the output and extending reachability analysis to 3-dimensional time series data like videos. Future work involves verifying global monotonicity properties as well, and including more predictive maintenance and anomaly detection applications as case studies. The study focuses solely on offline data analysis and lacks considerations for real-time stream processing and memory constraints, which present fascinating avenues for future research.

**Acknowledgements.** The material presented in this paper is based upon work supported by the National Science Foundation (NSF) through grant numbers 1910017, 2028001, 2220418, 2220426, and 2220401, and the Defense Advanced Research Projects Agency (DARPA) under contract numbers FA8750-18-C-0089 and FA8750-23-C-0518, and the Air Force Office of Scientific Research (AFOSR) under contract numbers FA9550-22-1-0019 and FA9550-23-1-0135. Any opinions, findings, conclusions, or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of AFOSR, DARPA, or NSF. We also want to thank our colleagues, Tianshu and Barnie, for their valuable feedback.
2303.13228
Enriching Neural Network Training Dataset to Improve Worst-Case Performance Guarantees
Machine learning algorithms, especially Neural Networks (NNs), are a valuable tool used to approximate non-linear relationships, like the AC-Optimal Power Flow (AC-OPF), with considerable accuracy -- and achieving a speedup of several orders of magnitude when deployed for use. Often in power systems literature, the NNs are trained with a fixed dataset generated prior to the training process. In this paper, we show that adapting the NN training dataset during training can improve the NN performance and substantially reduce its worst-case violations. This paper proposes an algorithm that identifies and enriches the training dataset with critical datapoints that reduce the worst-case violations and deliver a neural network with improved worst-case performance guarantees. We demonstrate the performance of our algorithm in four test power systems, ranging from 39-buses to 162-buses.
Rahul Nellikkath, Spyros Chatzivasileiadis
2023-03-23T12:59:37Z
http://arxiv.org/abs/2303.13228v1
# Enriching Neural Network Training Dataset to Improve Worst-Case Performance Guarantees

###### Abstract

Machine learning algorithms, especially Neural Networks (NNs), are a valuable tool used to approximate non-linear relationships, like the AC-Optimal Power Flow (AC-OPF), with considerable accuracy - and achieving a speedup of several orders of magnitude when deployed for use. Often in power systems literature, the NNs are trained with a fixed dataset generated prior to the training process. In this paper, we show that adapting the NN training dataset _during training_ can improve the NN performance and substantially reduce its worst-case violations. This paper proposes an algorithm that identifies and enriches the training dataset with critical datapoints that reduce the worst-case violations and deliver a neural network with improved worst-case performance guarantees. We demonstrate the performance of our algorithm in four test power systems, ranging from 39-buses to 162-buses. AC-OPF, Worst-Case Guarantees, Trustworthy Machine Learning, Explainable AI.

## I Introduction

Machine learning algorithms, especially Neural Networks (NNs), are a valuable tool widely used to approximate non-linear relationships with considerable accuracy. An adequately sized NN with sufficient training data can be used to estimate complex non-linear and non-convex power flow problems like the AC-Optimal Power Flow (AC-OPF) [1] in a fraction of the time it would take to solve the exact problem. Considering that AC-OPF must be solved numerous times by power system operators to evaluate multiple uncertain scenarios and to prepare contingency plans for the daily safe operation of the power system, a well-trained NN can save computational time and help operators run orders of magnitude more scenarios. Moreover, these well-trained NNs could be used to replace the existing convex approximations or relaxations of these non-convex non-linear problems [2, 3], to warm-start the actual problem [4], or as a surrogate function in challenging optimization algorithms [5]. However, for the widespread adoption of NN algorithms in power systems, it is essential to build an accurate approximation of the non-linear problems that does not violate the constraints of the power system. In the case of approximating AC-OPF algorithms, a few researchers have shown that incorporating power flow constraint violations into the training process [6, 7, 8, 9] can improve the accuracy of the predictions drastically. Previously, we had proposed a Physics-Informed Neural Network (PINN) [10], which combined the KKT conditions of AC-OPF along with the power flow constraints to improve the performance of the NN [11]. Moreover, exciting research is happening on how NN predictions could be altered to satisfy the power system constraints [12]. These works have helped improve the accuracy of, and build trust in, NN predictions for AC-OPF. Regardless, the performance of all these NN algorithms depends highly on the ability of the training dataset to capture points adverse to the NN. Yet, a traditional NN training algorithm usually relies on a fixed training dataset, either randomly generated from the input domain or collected from a previously existing dataset, throughout the training process to generalize the problem.
In a sizable system with multiple parameters to consider, one might require more than this traditional way of compiling training datasets to capture challenging adverse inputs to the NN that could lead to massive power system constraint violations once deployed. One of our previous works has shown the importance of enriching the training dataset with challenging input data points during training to improve its accuracy when predicting the N-1 small-signal stability margin of a wind farm [13]. Ultimately, to build trust in the NN's performance for a safety-critical application, such as power system operation, some form of worst-case performance guarantees is required. Our previous work has shown how we can determine worst-case guarantees for AC [11] and DC-OPF [14] problems; and demonstrated how one could use the worst-case guarantees to select appropriate hyperparameters [11, 15] for the NN training which improve the worst-case performance. Still, as we show in this paper, we can substantially further improve the NN worst-case performance by enriching the training dataset. In this paper, we illustrate how we can use the worst-case guarantees of the NN for AC-OPF to enrich the NN training dataset with critical data points that improve the worst-case performance of the NN. Our contributions are as follows:
1. We demonstrate how to use the worst-case guarantees of NN predictions, proposed in [11], to enrich the NN training dataset during training to minimize the worst-case generation constraint violations of the NN.
2. We demonstrate how we can use a simplified MILP problem to identify the region around the point that caused the worst-case constraint violations, in order to effectively sample multiple data points for the training dataset.

This paper is structured as follows: Section II discusses AC-OPF and how we can use a power-flow-informed NN to approximate the AC-OPF solutions. Section III explains the proposed NN dataset enrichment algorithm. Section IV presents results from case studies, and Section V concludes.

## II Optimal Power Flow Algorithm and Power Flow Informed Neural Network

This section describes the AC-OPF problem we used as a guiding application, presents the standard neural network architecture, and introduces the power-flow-informed NN that will be used as a building block for the proposed NN training dataset enrichment algorithm.

### _AC-Optimal Power Flow_

The objective function for reducing the cost of active power generation in a power system with \(N_{g}\) generators, \(N_{b}\) buses, and \(N_{d}\) loads can be formulated as follows: \[\min_{\mathbf{P}_{g},\mathbf{Q}_{g},\mathbf{v}}\quad\mathbf{c}_{p}^{T}\mathbf{P}_{g} \tag{1}\] where the vector \(\mathbf{P}_{g}\) denotes the active power setpoints of the generators in the system, and \(\mathbf{c}_{p}^{T}\) denotes the cost vector for the active power production at each generator. \(\mathbf{Q}_{g}\) and \(\mathbf{v}\) denote the reactive power setpoints of the generators and the complex bus voltages, respectively. The active and reactive power injection at each node \(n\in N_{b}\), denoted by \(p_{n}\) and \(q_{n}\) respectively, can be calculated as follows: \[p_{n} =p_{n}^{g}-p_{n}^{d} \forall n\in N_{b}, \tag{2}\] \[q_{n} =q_{n}^{g}-q_{n}^{d} \forall n\in N_{b}, \tag{3}\] where \(p_{n}^{g}\) and \(q_{n}^{g}\) are the active and reactive power generation, and \(p_{n}^{d}\) and \(q_{n}^{d}\) are the active and reactive power demand at node \(n\).
The power flow equations in the network in cartesian coordinates can be written as follows: \[p_{n} =\sum_{k=1}^{N_{b}}v_{n}^{r}(v_{k}^{r}G_{nk}-v_{k}^{i}B_{nk})+v_{n}^{i}(v_{k}^{i}G_{nk}+v_{k}^{r}B_{nk}) \tag{4}\] \[q_{n} =\sum_{k=1}^{N_{b}}v_{n}^{i}(v_{k}^{r}G_{nk}-v_{k}^{i}B_{nk})-v_{n}^{r}(v_{k}^{i}G_{nk}+v_{k}^{r}B_{nk}) \tag{5}\] where \(v_{n}^{r}\) and \(v_{n}^{i}\) denote the real and imaginary part of the voltage at node \(n\). The conductance and susceptance of the line \(nk\) connecting nodes \(n\) and \(k\) are denoted by \(G_{nk}\) and \(B_{nk}\) respectively. The power flow equations can be written in a more compact form by combining the real and imaginary parts of voltage into a vector of size \(2N_{b}\times 1\) as \(\mathbf{v}=[(\mathbf{v}^{r})^{T},(\mathbf{v}^{i})^{T}]^{T}\) as follows [6]: \[\mathbf{v}^{T}\mathbf{M}_{p}^{n}\mathbf{v}= p_{n} \forall n\in N_{b} \tag{6}\] \[\mathbf{v}^{T}\mathbf{M}_{q}^{n}\mathbf{v}= q_{n} \forall n\in N_{b} \tag{7}\] where \(\mathbf{M}_{p}^{n}\) and \(\mathbf{M}_{q}^{n}\) are symmetric real valued matrices [16]. The active and reactive power generation limits can be formulated as follows: \[\underline{p}_{n}^{g}\leq p_{n}^{g}\leq\overline{p}_{n}^{g} \forall n\in N_{g} \tag{8}\] \[\underline{q}_{n}^{g}\leq q_{n}^{g}\leq\overline{q}_{n}^{g} \forall n\in N_{g} \tag{9}\] Similarly, the voltage and line current flow constraints for the power system can be represented as follows: \[\underline{\mathbf{V}}^{n}\leq\mathbf{v}^{T}\mathbf{M}_{v}^{n}\mathbf{v}\leq\overline{\mathbf{V}}^{n} \forall n\in N_{b} \tag{10}\] \[\ell_{mn} =\mathbf{v}^{T}\mathbf{M}_{i}^{mn}\mathbf{v}\leq\overline{\ell}_{mn} \forall mn\in N_{l} \tag{11}\] where \(\mathbf{M}_{v}^{n}:=e_{n}e_{n}^{T}+e_{N_{b}+n}e_{N_{b}+n}^{T}\) and \(e_{n}\) is a \(2N_{b}\times 1\) unit vector with zeros at all locations except \(n\). The squared magnitudes of the upper and lower voltage limits are denoted by \(\overline{\mathbf{V}}^{n}\) and \(\underline{\mathbf{V}}^{n}\) respectively. The squared magnitude of the line current flow in line \(mn\) is represented by \(\ell_{mn}\) and the matrix \(\mathbf{M}_{i}^{mn}=|y_{mn}|^{2}(e_{m}-e_{n})(e_{m}-e_{n})^{T}+|y_{mn}|^{2}(e_{N_{b}+m}-e_{N_{b}+n})(e_{N_{b}+m}-e_{N_{b}+n})^{T}\), where \(y_{mn}\) is the line admittance of branch \(mn\). Assuming the slack bus \(N_{sb}\) acts as an angle reference for the voltage, we will have: \[v_{N_{sb}}^{i}=\mathbf{v}^{T}\mathbf{e}_{N_{b}+N_{sb}}\mathbf{e}_{N_{b}+N_{sb}}^{T}\mathbf{v}=0 \tag{12}\] The constraints (2)-(3), (6)-(12) and the objective function for the AC-OPF problem (1) can be written in a more compact form as follows (for more details, see [6]): \[\min_{\mathbf{v},\mathbf{G}}\quad\mathbf{c}^{T}\mathbf{G} \tag{13a}\] \[s.t\ \mathbf{v}^{T}\mathbf{L}_{l}\mathbf{v} =a_{l}^{T}\mathbf{G}+b_{l}^{T}\mathbf{D}, l=1:L\] (13b) \[\mathbf{v}^{T}\mathbf{M}_{m}\mathbf{v} \leq d_{m}^{T}\mathbf{D}+f_{m}, m=1:M \tag{13c}\] where \(\mathbf{G}=[\mathbf{P}_{g}^{T},\mathbf{Q}_{g}^{T}]^{T}\), and \(\mathbf{c}^{T}\) is the combined linear cost vector for the active power and, if necessary, reactive power generation. \(\mathbf{D}=[\mathbf{P}_{d}^{T},\mathbf{Q}_{d}^{T}]^{T}\) denotes the active and reactive power demand in the system. Following that, the equality constraints (6)-(7) and (12) can be represented by the \(L=2N_{b}+1\) constraints in (13b). Similarly, the inequality constraints (8)-(11) can be represented by the \(M=4N_{g}+2N_{b}+N_{l}\) constraints in (13c).
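For a candidate solution, the constraint mismatches of the compact form (13b)-(13c) can be evaluated directly; the sketch below assumes the matrices \(\mathbf{L}_{l}\), \(\mathbf{M}_{m}\) and the vectors \(a_{l}\), \(b_{l}\), \(d_{m}\), \(f_{m}\) have already been assembled as in [6, 16], and the function name is our own. These residuals are exactly what the power-flow-informed loss of Section II-C aggregates.

```python
import numpy as np

def compact_form_residuals(v, G, D, L_list, a_list, b_list,
                           M_list, d_list, f_list):
    """Equality mismatch of (13b) and inequality violation of (13c)."""
    # |v^T L_l v - a_l^T G - b_l^T D| for each of the L equality constraints
    sigma_eq = np.array([abs(v @ L @ v - a @ G - bl @ D)
                         for L, a, bl in zip(L_list, a_list, b_list)])
    # max(v^T M_m v - d_m^T D - f_m, 0) for each of the M inequalities
    sigma_ineq = np.array([max(v @ M @ v - dm @ D - fm, 0.0)
                           for M, dm, fm in zip(M_list, d_list, f_list)])
    return sigma_eq, sigma_ineq
```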
### _Neural Network for Optimal Power Flow Predictions_

Neural Networks (NNs) are considered global approximators and can estimate the AC-OPF solution with significant accuracy when trained appropriately and with sufficient training data. To achieve this, a NN uses a group of interconnected hidden layers with multiple neurons to learn the relationship between the input and output layers. In the case of AC-OPF for generation cost minimization, the input layer will be the active and reactive power demand in the power system, and the output layer will be the optimal active and reactive power generation setpoints. Each neuron in a hidden layer will be connected with neurons in the neighboring layers through a set of edges. The information exiting one neuron goes through a linear transformation before reaching the neuron in the subsequent layer. Activation functions are used in every neuron to introduce nonlinear relationships into the approximator. In this paper, we use the ReLU activation function in the hidden layers, as it has been shown to accelerate convergence during training [17]. A standard NN, with \(K\) hidden layers and \(N_{k}\) neurons in hidden layer \(k\), is shown in Fig. 1. The information arriving at layer \(k\) can be formulated as follows: \[\hat{\mathbf{Z}}_{k}=\mathbf{w}_{k}\mathbf{Z}_{k-1}+\mathbf{b}_{k} \tag{14}\] where \(\mathbf{Z}_{k-1}\) is the output of the neurons in layer \(k-1\), \(\hat{\mathbf{Z}}_{k}\) is the information received at layer \(k\), and \(\mathbf{w_{k}}\) and \(\mathbf{b_{k}}\) are the weights and biases connecting layers \(k-1\) and \(k\). As mentioned, each neuron in the neural network uses the nonlinear ReLU activation function to accurately approximate the nonlinear relationships between the input and output layer. So, the output of each hidden layer in the NN can be represented as follows: \[\mathbf{Z}_{k}=\max(\hat{\mathbf{Z}}_{k},0) \tag{15}\] The weights \(\mathbf{w_{k}}\) and biases \(\mathbf{b_{k}}\) in the NN are optimized to minimize the average error in predicting the optimal generation setpoints in the training dataset, denoted by \(\mathcal{L}_{0}\), which is measured as follows: \[\mathcal{L}_{0}=\frac{1}{N}\sum_{i=1}^{N}|\mathbf{G}_{i}-\hat{\mathbf{G}}_{i}| \tag{16}\] where \(N\) is the number of data points in the training set, \(\mathbf{G}_{i}\) is the generation setpoint determined by the OPF, and \(\hat{\mathbf{G}}_{i}\) is the predicted NN generation setpoint. The back-propagation algorithm is used to modify the weights and biases to minimize the average prediction error (\(\mathcal{L}_{0}\)), as shown in Fig. 1, during each iteration of the NN training. A learning rate of \(\lambda\) controls the step size of the optimizer.

### _Power-Flow-Informed Neural Network_

A standard NN training procedure relies solely on the average error between its predictions and the data in the training dataset to achieve adequately accurate AC-OPF predictions. However, since these NN predictions should satisfy the power flow constraints given in Eq. (13), we can use a power-flow-informed NN (PFNN), similar to the ones proposed in [6] and our previous work [11], to improve the generalization capability of the NN. The structure of the power-flow-informed NN (PFNN) used in this work is given in Fig. 2. Here an additional NN, denoted by \(NN_{v}\), is used to approximate the real and imaginary parts of the voltages in the system.
Subsequently, both the predicted optimal generation setpoints and voltages in the system are analyzed to compute the power flow constraint violation, denoted by \(\mathcal{L}_{PF}\), as follows: \[\mathcal{L}_{PF}=\frac{1}{N}\sum_{i=1}^{N}|\sigma_{eq}+\sigma_{ineq}|\] (17a) where, \[\sigma_{eq} =|\mathbf{v}^{T}\mathbf{L}_{l}\mathbf{v}-a_{l}^{T}\mathbf{G}-b_{l}^{T}\mathbf{D}|, l=1:L \tag{17b}\] \[\sigma_{ineq} =\max(\mathbf{v}^{T}\mathbf{M}_{m}\mathbf{v}-d_{m}^{T}\mathbf{D}-f_{m},0), m=1:M \tag{17c}\] Thus the loss function for training the PFNN is formulated as follows: \[\mathcal{L}_{PFNN}=\Lambda_{0}\mathcal{L}_{0}+\Lambda_{PF}\mathcal{L}_{PF} \tag{18}\] where \(\Lambda_{0}\) and \(\Lambda_{PF}\) are the weights given to the two loss functions \(\mathcal{L}_{0}\) and \(\mathcal{L}_{PF}\), respectively. In our investigations, this configuration of NN with a power flow constraint check has been shown to improve the average and worst-case performance of the NN without introducing a substantial computational burden. Still, the performance of the trained PFNN depends highly on the ability of the training dataset to capture points adverse to the PFNN.

## III Neural Network Data Enriching Using Worst-Case Constraint Violations

The following section describes a NN dataset-enriching algorithm that uses worst-case generation constraint violations at regular intervals during training to identify the input regions causing the largest generation constraint violations, and then enriches the training dataset with new training points from those regions. This helps minimize the probability that a NN prediction leads to significant generation constraint violations.

Fig. 1: Illustration of the NN training architecture to predict the optimal active and reactive power generation setpoints (\(\hat{\mathbf{G}}\)) using the active and reactive power demand (\(\mathbf{D}\)) in the system as input: There are K hidden layers with \(N_{k}\) neurons each. During training the weights \(\mathbf{w_{k}}\) and biases \(\mathbf{b_{k}}\) in the NN are optimized to minimize the average error (\(\mathcal{L}_{0}\)) in predicting the optimal generation setpoints (\(\mathbf{G}\)), calculated using Eq. (16). A learning rate of \(\lambda\) controls the step size of the optimizer.

Fig. 2: Illustration of the power-flow-informed NN (PFNN) training architecture to predict the optimal active and reactive power generation setpoints (\(\hat{\mathbf{G}}\)) using the active and reactive power demand (\(\mathbf{D}\)) in the system as input. There are two NNs, one for predicting \(\hat{\mathbf{G}}\), denoted by \(NN_{G}\), and the other for predicting the voltage in the system, denoted by \(NN_{v}\). During training, the weights \(\mathbf{w_{k}}\) and biases \(\mathbf{b_{k}}\) in the NN are optimized to minimize the average error (\(\mathcal{L}_{0}\)) in predicting the optimal generation (\(\mathbf{G}\)) and voltage setpoints (\(\mathbf{v}\)), and the power flow constraint violation (\(\mathcal{L}_{PF}\)). See (17) for the calculations.

### _Worst-Case Violations of the Neural Networks to Identify Adverse Examples_

As mentioned earlier, we determine the input data with which we enrich our NN training dataset by identifying the NN inputs that cause the worst violations. The optimization problem to identify the worst-case generation constraint violation is presented in (19) [11]: \[\max_{\mathbf{D}} v_{g}\] (19a) \[v_{g}=\max(\hat{\mathbf{G}}-\overline{\mathbf{G}},\ \underline{\mathbf{G}}-\hat{\mathbf{G}},\ 0)\] (19b) s.t.
(14), (15), where \(\overline{\mathbf{G}}\) and \(\underline{\mathbf{G}}\) are the maximum and minimum active and reactive power generation bounds. However, the formulation of the ReLU activation function in (15) is nonlinear. So, the ReLU activation function is reformulated into a set of mixed-integer linear inequality constraints, as follows, to simplify the optimization problem [14]: \[\mathbf{Z}_{k}=\max(\hat{\mathbf{Z}}_{k},0)\Rightarrow\left\{\begin{aligned} \mathbf{Z}_{k}&\leq\hat{\mathbf{Z}}_{k}-\underline{\mathbf{Z}}_{k}(1-\mathbf{y}_{k})\end{aligned}\right. \tag{20a}\] \[\mathbf{Z}_{k}\geq\hat{\mathbf{Z}}_{k}\] (20b) \[\mathbf{Z}_{k}\leq\overline{\mathbf{Z}}_{k}\mathbf{y}_{k}\] (20c) \[\mathbf{Z}_{k}\geq\mathbf{0}\] (20d) \[\mathbf{y}_{k}\in\{0,1\}^{N_{k}}. \tag{20e}\] where \(\mathbf{Z}_{k}\) and \(\hat{\mathbf{Z}}_{k}\) are the outputs and inputs of the ReLU activation function, \(y_{k}^{i}\) is a binary variable, and \(\underline{\mathbf{Z}}_{k}\) and \(\overline{\mathbf{Z}}_{k}\) are the lower and upper limits of the input to the ReLU. These limits should be sufficiently large to ensure they are not binding and small enough so the constraints are not unbounded. We used interval arithmetic to ensure tighter bounds (see [18] and [14] for more information). If \(\hat{Z}_{k}^{i}\) is less than zero, then \(y_{k}^{i}\) must be zero, since otherwise (20a) would contradict (20d), and \(Z_{k}^{i}\) is then constrained to zero by (20c) and (20d). Else, \(y_{k}^{i}\) will be equal to one, and \(Z_{k}^{i}\) will be equal to \(\hat{Z}_{k}^{i}\) due to (20a) and (20b). The MILP optimization problem for identifying the specific input combination that could lead to a large generation constraint violation (denoted by \(D_{WC}\) from now on) can be formulated as follows: \[\max_{\mathbf{D}} v_{g}\Rightarrow v_{g}^{max}\] (21a) \[v_{g}=\max(\hat{\mathbf{G}}-\overline{\mathbf{G}},\ \underline{\mathbf{G}}-\hat{\mathbf{G}},\ 0)\] (21b) s.t. (14), (20), where \(v_{g}^{max}\) is the worst-case generation constraint violation obtained after solving the optimization. By solving the optimization problem mentioned above for all the generators in the system, we can identify the specific \(D_{WC}\) that caused the largest constraint violations. However, simply adding these \(D_{WC}\) to the dataset might not be adequate; instead, to properly enrich the dataset, it is also essential to identify the size of a region around \(D_{WC}\) that could lead to equally large constraint violations. Thus, we can collect multiple points from this region to enrich the dataset. The optimization problem below is used to fit a hypercube to the space around \(D_{WC}\) that could cause significant constraint violations. \[\max_{\mathbf{D}} d \tag{22a}\] \[d=\left\|\mathbf{D}-D_{WC}\right\|_{\infty}\] (22b) s.t. \[v_{g}\geq\alpha\cdot v_{g}^{max}\] (22c) \[(14),\ (20),\ (21b) \tag{22d}\] Here, the distance \(d\) defines a hypercube around \(D_{WC}\) that causes constraint violations of more than \(\alpha\cdot v_{g}^{max}\). We can choose a smaller or larger value of \(\alpha\) to get a wider or tighter region around \(D_{WC}\). In the optimization problem (22), for \(\alpha\) close to 1, we can assume the hypercube around \(D_{WC}\) will be small. Thus we can assume that a significantly positive or negative \(\hat{\mathbf{Z}}\) value will not change sign for any of the input data points inside the hypercube. Therefore, we can fix those ReLU statuses, denoted by \(\mathbf{y}_{k}\) in (20), to always be active or inactive in the hypercube.
Fixing these ReLU statuses reduces the number of binary variables in the optimization problem and thereby its computational complexity. A threshold of 10% with respect to \(\mathbf{\overline{Z}}\) or \(\mathbf{\underline{Z}}\) was used to decide whether to fix \(\mathbf{y}_{k}\) or not. That is, if the \(\mathbf{\hat{Z}}\) value for NN input \(D_{WC}\) was more than \(0.1\cdot\mathbf{\overline{Z}}\) or less than \(0.1\cdot\mathbf{\underline{Z}}\), then those binary variables are set to active or inactive, respectively, in the hypercube. After obtaining the size of the hypercube, random Gaussian sampling, centered at \(D_{WC}\), is used to sample new training data points from the hypercube (see the sketch after this section). Unlike the original training dataset, we did not compute the optimal generation setpoints (i.e., \(\mathbf{G}\)) for these additional data points, to avoid using extra computational power to solve the AC-OPF problem. Instead, we used the power flow constraint violations in \(\mathcal{L}_{PF}\) to train the neural network. The proposed NN dataset enrichment algorithm is given in Algorithm 1. 50% of the samples were used for the training dataset, 20% were assigned to the validation set, and the remaining 30% were assigned to the unseen test dataset. The AC-OPF solver in MATPOWER [21] was used to obtain the optimal generation setpoints and the voltages at each bus. A NN with three hidden layers and 20 nodes in each layer is used to predict the AC-OPF solutions. The ML algorithms were implemented using PyTorch [22] and the Adam optimizer [23]; a learning rate of 0.001 was used for training. WandB [24] was used for monitoring and tuning the hyperparameters. The MILP problem for obtaining the worst-case violations and identifying the region was programmed in Pyomo [25] and solved using the Gurobi solver [26]. The NNs were trained on a High-Performance Computing (HPC) server with an Intel Xeon E5-2650v4 processor and 256 GB RAM. The code and datasets to reproduce the results are available online [27]. ### _Comparing the Average and the Worst-Case Performance_ Both the PFNN and WC-PFNN algorithms, in all cases, achieved convergence in 600 iterations. After every 200 iterations during training, we evaluated the worst-case constraint violations, and in the case of WC-PFNN, 1000 new data points were added to the training dataset. To compensate, 2000 random data points were added to the PFNN training dataset before starting the training process to keep the training datasets comparable. For WC-PFNN and PFNN, we selected the weights \(\Lambda_{0}\) and \(\Lambda_{PF}\) for the loss functions \(\mathcal{L}_{0}\) and \(\mathcal{L}_{PF}\) that offered the lowest worst-case generation constraint violation using WandB. Comparing the worst-case performance over the training process, we can see in Fig. 3 that in all cases the proposed NN dataset enriching algorithm (WC-PFNN) results in a steep reduction in worst-case violations. At the same time, the PFNN was only able to achieve a slight decline or, in case162 and case57, even an increase in worst-case generation constraint violation during training. This highlights the importance of monitoring and enriching the NN dataset using worst-case constraint violations. The mean absolute error (MAE) on an unseen test dataset, the worst-case generation constraint violation (\(v_{g}\)), and the width of the hypercube around \(D_{WC}\), denoted by \(d\) (as a fraction of the nominal loading at each node), caused by PFNN and WC-PFNN after training are given in Table II.
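For illustration, a minimal NumPy sketch of the sampling step referenced above follows; the function name, the choice of standard deviation, and the clipping to the hypercube are assumptions, not the authors' exact procedure.

```python
import numpy as np

def sample_hypercube(D_WC, d, n_new=1000, rng=None):
    """Draw Gaussian samples centered at D_WC and clip them to the hypercube of width d."""
    rng = rng or np.random.default_rng()
    # scale chosen so that most of the Gaussian mass falls inside the hypercube (assumption)
    samples = rng.normal(loc=D_WC, scale=d / 3.0, size=(n_new, D_WC.size))
    return np.clip(samples, D_WC - d, D_WC + d)
```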
From Table II, we see that WC-PFNN can reduce worst-case generation constraint violations by more than 30% in all test systems. In fact, in case57 it achieves up to an 80% reduction compared to PFNN with a fixed dataset. Moreover, it was observed that the hypercube around \(D_{WC}\), which collects points that could cause significant constraint violations (see (22)), also shrank as new points were included in the training set. So, the proposed method also made the NN safer over more regions of the input domain. Fig. 3: Change in worst-case generation constraint violation with respect to the worst-case generation constraint violation at training iteration 200 ## V Conclusion The objective of this paper is to demonstrate the importance of enriching the NN training dataset with critical data points in order to improve the NN worst-case performance. To determine these data points, we use algorithms that accurately quantify the worst-case violations of the NN. This significantly improves the NN performance and enhances the trust in NNs for safety-critical applications. In this paper, we show that by enriching the NN training dataset using the worst-case generation constraint violations of an AC-OPF problem during training, we can improve the worst-case performance of the NN drastically. We test the proposed algorithm on four different power systems, ranging from 39 buses to 162 buses, and we show that we can achieve up to an 80% reduction in worst-case constraint violations. In future work, we plan to integrate this approach with our recent work on designing a neural network training procedure that achieves the best average performance and minimizes the worst-case violations at the same time [28].
2305.19306
A Graph is Worth 1-bit Spikes: When Graph Contrastive Learning Meets Spiking Neural Networks
While contrastive self-supervised learning has become the de-facto learning paradigm for graph neural networks, the pursuit of higher task accuracy requires a larger hidden dimensionality to learn informative and discriminative full-precision representations, raising concerns about computation, memory footprint, and energy consumption burden (largely overlooked) for real-world applications. This work explores a promising direction for graph contrastive learning (GCL) with spiking neural networks (SNNs), which leverage sparse and binary characteristics to learn more biologically plausible and compact representations. We propose SpikeGCL, a novel GCL framework to learn binarized 1-bit representations for graphs, making balanced trade-offs between efficiency and performance. We provide theoretical guarantees to demonstrate that SpikeGCL has comparable expressiveness with its full-precision counterparts. Experimental results demonstrate that, with nearly 32x representation storage compression, SpikeGCL is either comparable to or outperforms many fancy state-of-the-art supervised and self-supervised methods across several graph benchmarks.
Jintang Li, Huizhe Zhang, Ruofan Wu, Zulun Zhu, Baokun Wang, Changhua Meng, Zibin Zheng, Liang Chen
2023-05-30T16:03:11Z
http://arxiv.org/abs/2305.19306v2
# A Graph is Worth 1-bit Spikes: When Graph Contrastive Learning Meets Spiking Neural Networks ###### Abstract While contrastive self-supervised learning has become the de-facto learning paradigm for graph neural networks, the pursuit of high task accuracy requires a large hidden dimensionality to learn informative and discriminative full-precision representations, raising concerns about computation, memory footprint, and energy consumption burden (largely overlooked) for real-world applications. This paper explores a promising direction for graph contrastive learning (GCL) with spiking neural networks (SNNs), which leverage sparse and binary characteristics of SNNs to learn more biologically plausible and compact representations. We propose SpikeGCL, a novel GCL framework to learn binarized 1-bit representations for graphs, making balanced trade-offs between efficiency and performance. We provide theoretical guarantees to demonstrate that SpikeGCL has comparable expressiveness with its full-precision counterparts. Experimental results demonstrate that, with nearly 32x representation storage compression, SpikeGCL is either comparable to or outperforms many fancy state-of-the-art supervised and self-supervised methods across several graph benchmarks. ## 1 Introduction Graph neural networks (GNNs) have demonstrated remarkable capabilities in learning representations of graphs and manifolds that are beneficial for a wide range of tasks. Especially since the advent of recent self-supervised learning techniques, rapid progress has been made toward learning universally useful representations [23]. Graph contrastive learning (GCL), which aims to learn generalizable and transferable graph representations by contrasting positive and negative sample pairs taken from different graph views, has become a hotspot in graph self-supervised learning. As an active area of research, numerous variants of GCL methods have been proposed to achieve state-of-the-art performance in graph-based learning tasks, solving the dilemma of learning useful representations from graph data without end-to-end supervision [20; 37; 36; 44]. In recent years, real-world graph data have been scaling up, and even benchmark datasets are getting larger. For example, Amazon's product recommendation graph contains over 150M users and 350M products [4], and the Microsoft Academic Graph consists of more than 120M publications and related authors, venues, organizations, and fields of study [35]. New challenges beyond label annotations have arisen in terms of processing, storage, and deployment [5]. While GNN-based contrastive learning has advanced the frontiers in many applications, current state-of-the-art methods mainly require large hidden dimensions (e.g., 512 or even 1024) to learn generalizable _full-precision_ representations [26; 20], making them memory-inefficient, storage-intensive, and computationally expensive, especially for resource-constrained edge devices [45]. In parallel, biological neural networks continue to inspire breakthroughs in modern neural network performance, with prominent examples including spiking neural networks (SNNs) [25]. SNNs are a class of brain-inspired networks with asynchronous, discrete, and sparse characteristics, which have increasingly demonstrated their advantages in low energy consumption and inference latency [8].
Unlike traditional artificial neural networks (ANNs), which use floating-point outputs, SNNs use a more biologically plausible approach where neurons communicate via sparse and binarized representations, referred to as 'spikes'. Such characteristics make them a promising choice for low-power, mobile, or otherwise hardware-constrained settings. In the literature [46; 48], SNNs are proven to consume \(\sim\)100x less energy compared with modern ANNs on a neuromorphic chip (e.g., ROLLs [13]). Inspired by the success of visual research [16; 46; 15], some recent efforts have been made toward generalizing SNNs to graph data. For example, SpikingGCN [48] and SpikeNet [21] are two representative works combining SNNs with GNN architectures, in which SNNs are utilized as the basic building blocks to model the underlying graph structure and dynamics. Despite the promising potential, SNNs have been under-appreciated and under-investigated in graph contrastive learning. This has led to an interesting yet rarely explored research question: _Can we explore the possibilities of SNNs with contrastive learning schemes to learn sparse, binarized yet generalizable representations?_ Present work. Deviating from the large body of prior works on graph contrastive learning, in this paper we take a new perspective to address self-supervised and binarized representation learning on graphs. We present SpikeGCL, a principled GCL framework built upon SNNs to learn binarized graph representations in a compact and efficient manner. Instead of learning full-precision node representations, we learn sparse and compact 1-bit representations to unlock the power of SNNs on graph data and enable fast inference. In addition, we notice the problem of vanishing gradients (surprisingly overlooked) during direct training of SNNs and propose a blockwise training approach to tackle it. Although SpikeGCL uses binarized 1-bit spikes as representations, it comes with theoretical guarantees on its expressiveness and comparable empirical efficacy. Compared with floating-point counterparts, SpikeGCL leads to less storage space, a smaller memory footprint, lower power consumption, and faster inference speed, all of which are essential for practical applications such as edge deployment. Overall, our contributions can be mainly summarized as follows: * We believe our work is timely. In response to the explosive data expansion, we study the problem of self-supervised and binarized representation learning on graphs with SNNs, whose potential is especially attractive for low-power and resource-constrained settings. * We propose SpikeGCL to learn binarized representations from large-scale graphs. SpikeGCL exhibits high hardware efficiency, significantly reducing memory consumption by \(\sim\)32x for node representations and energy consumption by up to \(\sim\)7kx. SpikeGCL is a theoretically guaranteed framework with powerful capabilities in learning node representations. * We address the challenge of training deep SNNs with a simple blockwise learning paradigm. By limiting the backpropagation path to a single block of time steps, we prevent SNNs from suffering from the notorious problem of vanishing gradients. * Extensive experiments demonstrate that SpikeGCL performs on par with, or sometimes even better than, state-of-the-art full-precision competitors across multiple graph datasets of various scales. Our results hold great promise for accurate and fast online inference of graph-based systems.
Figure 1: Comparison between conventional GCL and our proposed SpikeGCL. Instead of using continuous full-precision representations, e.g., 32-bit floating points, SpikeGCL produces sparse and compact 1-bit representations, making them more memory-friendly and computationally efficient for cheap devices with limited resources. We leave two additional remarks: (i) SpikeGCL is designed to reduce computation, energy, and storage costs from the perspective of compressing graph representations using biological networks, which is orthogonal to existing advances on sampling [11], data condensation [14] or pruning [19], and architectural simplifications [36; 26; 44]. (ii) To the best of our knowledge, our work is the first to explore the feasibility of implementing graph self-supervised learning with SNNs for learning binarized representations. We hope our work will inspire the community to further explore more powerful and efficient algorithms. ## 2 Related Work **Graph contrastive learning.** Self-supervised representation learning from graph data is a quickly evolving field. The last years have witnessed the emergence of a promising self-supervised learning strategy, referred to as graph contrastive learning (GCL) [23]. Typically, GCL works by contrasting so-called positive samples against negative ones from different graph augmentation views, which has led to the development of several novel techniques and frameworks such as DGI [37], GraphCL [42], GRACE [47], and GGD [44]. Recent research has also suggested that negative samples are not always necessary for graph contrastive learning, with prominent examples including BGRL [36] and CCA-SSG [43]. The negative-sample-free approach paves the way to a simple yet effective GCL method and frees the model from intricate designs. We refer readers to [23] for a comprehensive review of graph contrastive learning. Despite these achievements, current GCL designs mainly focus on task accuracy with a large hidden dimensionality [26; 20] and lack consideration of the hardware resource limitations needed to meet the real-time requirements of edge application scenarios [45]. **Spiking neural networks.** SNNs, which mimic biological neural networks by leveraging sparse and event-driven activations, are very promising for low-power applications. SNNs have been around for a while, and although they have not gained the same level of popularity as GNNs, they have steadily increased their influence in recent years. Over the past few years, SNNs have gained significant popularity in the field of vision tasks, including image classification [46], object detection [15], and segmentation [16]. Efforts have been made to incorporate their biological plausibility and leverage their promising energy efficiency in GNN learning. SpikingGCN [48] is a recent endeavor on encoding the node representation through SNNs. As SNNs are intrinsically dynamic, with abundant temporal information conveyed by spike timing, SpikeNet [22] then proceeds to model the temporal evolution of dynamic graphs via SNNs. In spite of the increasing research interest, the benefits of SNNs have not been discovered in GCL yet. **Binarized graph representation learning.** Model size, memory footprint, and energy consumption are common concerns for many real-world applications [1]. In response, binarized graph representation learning has been developed to meet the need for more efficient and compact representations for learning on large-scale graph data.
Currently, there are two lines of research being pursued in this area. One line of research involves directly quantizing graphs by using discrete hashing techniques [41; 1] or estimating gradients of the non-differentiable quantization process [4]. Another line of research produces binary representations with binarized networks, which are designed for fast inference and a small memory footprint and for which the binary representation is only a by-product [39]. On one hand, binarized graph representation learning is a novel and promising direction that aims to encode graphs into compact binary vectors that can facilitate efficient storage, retrieval, and computation. On the other hand, SNNs, which use discrete spikes rather than real values to communicate between neurons, are naturally preferred for learning binary representations. Therefore, it is intuitively promising to explore the marriage between them. ## 3 Preliminaries **Problem formulation.** Let \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbf{X})\) denote an attributed undirected graph where \(\mathcal{V}=\{v_{i}\}_{i=1}^{N}\) and \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) are a set of nodes and edges, respectively. We focus primarily on undirected graphs, though it is straightforward to extend our study to directed graphs. \(\mathcal{G}\) is associated with a \(d\)-dimensional attribute feature matrix \(\mathbf{X}=\{x_{i}\}_{i=1}^{N}\in\mathbb{R}^{N\times d}\). In the self-supervised learning setting of this work, our objective is to learn an encoder \(f_{\theta}:\mathbb{R}^{d}\rightarrow\{0,1\}^{d}\) parameterized by \(\theta\), which maps between the space of graph \(\mathcal{G}\) and its low-dimensional and _binary_ latent representations \(\mathbf{Z}=\{z_{i}\}_{i=1}^{N}\), such that \(f_{\theta}(\mathcal{G})=\mathbf{Z}\in\{0,1\}^{N\times d}\), with \(d\) the embedding dimension. Note that for simplicity we assume the feature dimensions are the same across all layers. **Spiking neural networks.** Throughout existing SNNs, the integrate-and-fire (IF) [31] model is commonly adopted to formulate the spiking neuron and has evolved into numerous variants with different biological features. As the name suggests, IF models have three fundamental characteristics: (i) **Integrate**. The neuron integrates current by means of the capacitor over time, which leads to a charge accumulation; (ii) **Fire**. When the membrane potential has reached or exceeded a given threshold \(V_{\text{th}}\), it fires (i.e., emits a spike). (iii) **Reset**. After that, the membrane potential is reset, and here we introduce two types of reset [30]: _reset to zero_, which always sets the membrane potential back to a constant value \(V_{\text{reset}}<V_{\text{th}}\), typically zero, whereas _reset by subtraction_ subtracts the threshold \(V_{\text{th}}\) from the membrane potential at the time the threshold is exceeded. We use a unified model to describe the dynamics of IF-based spiking neurons: **Integrate:** \[V^{t} =\Psi(V^{t-1},I^{t}),\] (1) **Fire:** \[S^{t} =\Theta(V^{t}-V_{\text{th}}),\] (2) **Reset:** \[V^{t} =\left\{\begin{array}{rl}S^{t}V_{\text{reset}}+(1-S^{t})V^{t},&\text{reset to zero},\\ S^{t}(V^{t}-V_{\text{th}})+(1-S^{t})V^{t},&\text{reset by subtraction},\end{array}\right.\] (3) where \(I^{t}\) and \(V^{t}\) denote the input current and membrane potential (voltage) at time-step \(t\), respectively.
The decision to fire a spike in the neuron output is carried out according to the Heaviside step function \(\Theta(\cdot)\), which is defined by \(\Theta(x)=1\) if \(x\geq 0\) and \(0\) otherwise. The function \(\Psi(\cdot)\) in Eq. (1) describes how the spiking neuron receives the resultant current and accumulates membrane potential. We have the IF [31] model and its variant, the Leaky Integrate-and-Fire (LIF) [10] model, formulated as follows: **IF:** \[V^{t} =V^{t-1}+I^{t},\] (4) **LIF:** \[V^{t} =V^{t-1}+\frac{1}{\tau_{m}}\left(I^{t}-(V^{t-1}-V_{\text{reset}})\right),\] (5) where \(\tau_{m}\) in Eq. (5) represents the membrane time constant controlling how fast the membrane potential decays, causing the membrane potential to charge and discharge exponentially in response to the current inputs. Typically, \(\tau_{m}\) can also be optimized automatically, instead of being manually tuned, to learn different neuron dynamics during training, which is referred to as Parametric LIF (PLIF) [7]. In this paper, the surrogate gradient method is used to define \(\Theta^{\prime}(x)\triangleq\sigma^{\prime}(\alpha x)\) during error back-propagation, with \(\sigma(\cdot)\) denoting the surrogate function, such as the sigmoid, and \(\alpha\) the smooth factor [21]. ## 4 Spiking Graph Contrastive Learning (SpikeGCL) In this section, we introduce our proposed SpikeGCL framework for learning binarized representations. SpikeGCL is a simple yet effective derivative of the standard GCL framework with minimal but effective modifications on the encoder design coupled with SNNs. In what follows, we first shed light on building sequential inputs for SNNs from a single non-temporal graph (§ 4.1) and depict how to binarize node representations using SNNs (§ 4.2). Then, we explain the architectural overview and detailed components of SpikeGCL one by one (§ 4.3). Finally, we employ blockwise learning for better training of deep SNNs (§ 4.4). ### Grouping node features Typically, SNNs require sequential inputs to perform the integrate-and-fire process and emit spikes. One major challenge is how to formulate such inputs from a single non-temporal graph. A common practice introduced in the literature is to repeat the graph multiple times (i.e., over a given time window \(T\)), typically followed by probabilistic encoding methods (e.g., Bernoulli encoding) to generate diverse input graphs [48; 40]. However, this inevitably introduces high computational and memory overheads, becoming a major bottleneck for SNNs to scale to large graphs. In this work, we adopt the approach of partitioning the graph data into different groups rather than repeating the graph multiple times to construct the sequential inputs required for SNNs. Given a time window \(T\) \((T>1)\) and a graph \(\mathcal{G}\), we uniformly partition the node features into the following groups1: \(\mathbf{X}=[\mathbf{X}^{1},\ldots,\mathbf{X}^{T}]\) along the feature dimension, where \(\mathbf{X}^{t}\in\mathbb{R}^{N\times\frac{d}{T}}\) consists of the group of features in the \(t\)-th partition. In cases where \(d\) cannot be divided by \(T\), we have \(\mathbf{X}^{t}\in\mathbb{R}^{N\times(d//T)}\) for \(t<T\) and \(\mathbf{X}^{T}\in\mathbb{R}^{N\times(d//T+d\bmod T)}\). Thus we have \(T\) subgraphs as Footnote 1: Exploring non-uniform partition with clustering methods is an important direction for future work. \[\hat{\mathcal{G}}=[\mathcal{G}^{1},\ldots,\mathcal{G}^{T}]=[(\mathcal{V},\mathcal{E},\mathbf{X}^{1}),\ldots,(\mathcal{V},\mathcal{E},\mathbf{X}^{T})].
\tag{6}\] Each subgraph in \(\hat{\mathcal{G}}\) shares the same graph structure and differs only in node features. Note that \([\mathbf{X}^{1},\ldots,\mathbf{X}^{T}]\) are _non-overlapping_ groups, which avoids unnecessary computation on input features. Our strategy leads to huge memory and computational benefits, as each subgraph in the graph sequence \(\hat{\mathcal{G}}\) only stores a subset of the node features instead of the whole set. This is a significant improvement over previous approaches such as SpikingGCN [48] in terms of computational and memory complexity. ### Binarizing graph representations In GCL, GNNs are widely adopted as encoders for representing graph-structured data. GNNs generally follow the canonical _message passing_ scheme in which each node's representation is computed recursively by aggregating representations ("messages") from its immediate neighbors [17; 11]. Given an \(L\)-layer GNN \(f_{\theta}\), the updating process of the \(l\)-th layer could be formulated as: \[h_{u}^{(l)}=\mathrm{COMBINE}^{(l)}\left(\left\{h_{u}^{(l-1)},\mathrm{AGGREGATE}^{(l)}\left(\left\{h_{v}^{(l-1)}:v\in\mathcal{N}_{u}\right\}\right)\right\}\right) \tag{7}\] where \(\mathrm{AGGREGATE}(\cdot)\) is an aggregation function that aggregates features of neighbors, and \(\mathrm{COMBINE}(\cdot)\) denotes the combination of aggregated features from a central node and its neighbors. \(h_{u}^{(l)}\) is the embedding of node \(u\) at the \(l\)-th layer of the GNN, where \(l\in\{1,\ldots,L\}\) and initially \(h_{u}^{(0)}=x_{u}\); \(\mathcal{N}_{u}\) is the set of neighbors of \(u\). After \(L\) rounds of aggregation, each node \(u\in\mathcal{V}\) obtains its representation vector \(h_{u}^{(L)}\). The final node representation is denoted as \(\mathbf{H}=[h_{1}^{(L)},\ldots,h_{N}^{(L)}]\). We additionally introduce SNNs to the encoder for binarizing node representations. SNNs receive continuous values and convert them into binary spike trains, which opens up possibilities for mapping graph structure from a continuous Euclidean space into a discrete and binary one. Given a graph sequence \(\hat{\mathcal{G}}=[\mathcal{G}^{1},\ldots,\mathcal{G}^{T}]\), we form \(T\) graph encoders \([f_{\theta_{1}}^{1},\ldots,f_{\theta_{T}}^{T}]\) accordingly, such that each encoder \(f_{\theta_{t}}^{t}\) maps a graph \(\mathcal{G}^{t}\) to its hidden representations \(\mathbf{H}^{t}\). Here we denote \(f_{\theta}=[f_{\theta_{1}}^{1},\ldots,f_{\theta_{T}}^{T}]\) with a slight abuse of notation. Then, a spiking neuron is employed on the outputs to generate binary spikes in a dynamic manner: \[\mathbf{S}^{t}=\Theta\left(\Psi(V^{t-1},\mathbf{H}^{t})-V_{\text{th}}\right),\quad\mathbf{H}^{t}=f_{\theta}^{t}(\mathcal{G}^{t}), \tag{8}\] where \(\Psi(\cdot)\) denotes a spiking function that receives the resultant current from the encoder and accumulates membrane potential to generate the spikes, such as the IF or LIF models introduced in § 3. \(\mathbf{S}^{t}\in\{0,1\}^{N\times\frac{d}{T}}\) is the output spikes at time step \(t\). In this regard, the binary representations are derived by taking the historical spikes at each time step with a _concatenate_ pooling [21], i.e., \(\mathbf{Z}=\left(\mathbf{S}^{1}||\cdots||\mathbf{S}^{T}\right)\) (see the sketch below). ### Overall framework We decompose our proposed SpikeGCL along four dimensions: (i) augmentation, (ii) encoder, (iii) predictor head (a.k.a. decoder), and (iv) contrastive objective. These four components constitute the design space of interest in this work.
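To make the two preceding subsections concrete, here is a minimal PyTorch sketch of the grouping (Eq. (6)) and spike-based binarization (Eq. (8)) steps with an IF neuron and reset by subtraction. The `gnn` callable and its signature are assumptions, and, for simplicity, a single shared encoder is used (the paper uses a separate first layer per feature group) with \(d\) assumed divisible by \(T\).

```python
import torch

def spike_encode(x, edge_index, gnn, T, v_th=1.0):
    """x: [N, d] node features -> [N, d_hidden * T] binary spike representation Z."""
    groups = torch.chunk(x, T, dim=1)      # T non-overlapping feature groups, Eq. (6)
    v, spikes = None, []
    for t in range(T):
        h = gnn(groups[t], edge_index)     # resultant current H^t in Eq. (8)
        v = h if v is None else v + h      # IF integration, Eq. (4)
        s = (v >= v_th).float()            # fire, Eq. (2)
        v = v - s * v_th                   # reset by subtraction, Eq. (3)
        spikes.append(s)
    return torch.cat(spikes, dim=1)        # concatenate pooling over time steps
```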
We present an architectural overview of SpikeGCL in Figure 2(a), as well as the core step of embedding generation in Figure 2(b). **Augmentation.** Augmentation is crucial for contrastive learning, as it provides different graph views for contrasting. Given a graph \(\mathcal{G}\), SpikeGCL involves bi-level augmentation techniques, i.e., topology (structure) and feature augmentation, to construct its corrupted version \(\tilde{\mathcal{G}}\). To be specific, we harness two ways of augmentation: _edge dropping_ and _feature shuffling_. Edge dropping randomly drops a subset of edges from the original graph, while feature shuffling gives a new permutation of the node feature matrix along the feature dimension. In this way, we obtain the corrupted version of an original graph \(\tilde{\mathcal{G}}=\{\mathcal{V},\tilde{\mathcal{E}},\tilde{\mathbf{X}}\}\), where \(\tilde{\mathcal{E}}\subseteq\mathcal{E}\) and \(\tilde{\mathbf{X}}\) denotes the column-wise shuffled feature matrix such that \(\tilde{\mathbf{X}}=\mathbf{X}[:,\mathcal{P}]\), with \(\mathcal{P}\) the new permutation of features. Since we partition node features in a sequential manner that is sensitive to the permutation of input features, feature shuffling is able to offer _hard negative_ samples for contrastive learning. **Encoder.** As detailed in § 4.2, our encoder is a series of peer GNNs corresponding to each group of input node features, followed by a spiking neuron to binarize the representations. Among the many variants of GNNs, GCN [17] is the most representative structure, and we adopt it as the basic unit of the encoder in this work. The number of individual GNNs in the encoder is proportional to the number of time steps \(T\), which makes the model excessively complex and potentially leads to overfitting if \(T\) is large. We circumvent this problem by **parameter sharing**. Note that only the first layer needs to be different in response to the diverse groups of features. For the remaining \(L-1\) layers, parameters are shared across peer GNNs to prevent an excessive memory footprint and the overfitting issue. **Predictor head (decoder).** A non-linear transformation named projection head is adopted to map the binarized representations to a continuous latent space where the contrastive loss is calculated, as advocated in [36; 43]. As a proxy for learning on discrete spikes, we apply a single-layer perceptron (MLP) to the learned representations, i.e., \(g_{\phi}(z_{u})=\text{MLP}(z_{u}),\forall u\in\mathcal{V}\). Since the hidden representations are binary, we can instead use 'masked summation' [21] to enjoy the sparse characteristics and avoid expensive matrix multiplication computations during training and inference. **Contrastive objective.** Contrastive objectives are widely used to measure the similarity or distance between positive and negative samples. Rather than explicitly maximizing the discrepancy between positive and negative pairs, as most existing works on contrastive learning have done, the 'contrastiveness' in this approach is reflected in the diverse 'distance' naturally measured by a parameterized model \(g_{\phi}(\cdot)\). Specifically, we employ a margin ranking loss (MRL) [3] on \(g_{\phi}(\cdot)\): \[\mathcal{J}=\frac{1}{N}\sum_{u\in\mathcal{V}}\max(0,g_{\phi}(z_{u})-g_{\phi}(z_{u}^{-})+m), \tag{9}\] with the margin \(m\) a hyperparameter to make sure the model disregards abundant far (easy) negatives and leverages scarce nearby (hard) negatives during training.
\(z_{u}\) and \(z_{u}^{-}\) are the representations of node \(u\) obtained from the original graph sequence \(\hat{\mathcal{G}}\) and its corrupted version, respectively. MRL forces the score of positive samples to be lower (towards zero) and assigns a higher score to negative samples by a margin of at least \(m\). Therefore, positive samples are separated from negative ones. ### Blockwise surrogate gradient learning From an optimization point of view, SNNs lack a straightforward gradient calculation for backward propagation, as well as methods that effectively leverage their inherent advantages. Recent attempts have turned to surrogate gradient learning [18], an alternative to standard gradient descent that avoids the non-differentiability of spike signals. As a backpropagation-like training method, it approximates the backward gradients of the hard threshold function using a smooth activation function (such as the sigmoid) during backpropagation. Such a technique enables us to train general forms of deep SNNs directly, in a fashion similar to GNNs, while preserving their binary and dynamical nature. At present, the surrogate learning technique plays an extremely important role in advanced methods for learning SNNs properly [48; 40; 21; 6; 7]. Figure 2: Overview of the proposed SpikeGCL framework. (a) SpikeGCL follows the standard GCL framework, which learns binary representations by contrasting positive and negative samples with a margin ranking loss. (b) SpikeGCL first partitions node features into \(T\) non-overlapping groups, each of which is then fed to an encoder whereby spiking neurons represent the nodes of the graph as 1-bit spikes. Despite the promising results, surrogate gradient learning has its own drawbacks. Typically, SNNs require relatively large time steps to approximate continuous inputs and endow the network with better expressiveness [48]. However, a large time step often leads to many problems, such as high overheads and network degradation [6]. Particularly, the training of SNNs also comes with a serious vanishing gradient problem in which gradients quickly diminish with time steps. We provide further discussion of this phenomenon in Appendix B. These drawbacks greatly limit the performance of directly trained SNNs and prevent them from going 'deeper' with long sequence inputs. In this paper, we explore alternatives to end-to-end backpropagation in the form of surrogate gradient learning rules, leveraging the latest advances in self-supervised learning [34] to address the above limitation of SNNs. Specifically, we propose a blockwise training strategy that separately learns each block with a local objective. We consider one or more consecutive time steps as a single block, and limit the length of the backpropagation path to each of these blocks. Parameters within each block are optimized locally with a contrastive learning objective, using stop-gradient to prevent the gradients from flowing across blocks. A technical comparison between the existing end-to-end surrogate learning paradigm and our proposed local training strategy is shown in Figure 3. By limiting the length of the backpropagation path to a single block, this strategy solves the problem of vanishing gradients, as long as each block has a small depth/size. Furthermore, each block can be trained either simultaneously or in a purely sequential manner, offering different ways to efficiently train deep SNNs with lower memory footprints.
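A minimal PyTorch sketch of the surrogate gradient trick described above follows: the forward pass applies the Heaviside step \(\Theta(\cdot)\) to emit spikes, while the backward pass substitutes the derivative of a scaled sigmoid; the value of the smooth factor is an assumption.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    alpha = 4.0  # smooth factor of the sigmoid surrogate (assumption)

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 0).float()  # Heaviside step on v = V - V_th

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        sig = torch.sigmoid(SurrogateSpike.alpha * v)
        # derivative of sigmoid(alpha * v), a smooth stand-in for Theta'(v)
        return grad_output * SurrogateSpike.alpha * sig * (1.0 - sig)

spike = SurrogateSpike.apply  # usage: s = spike(v - v_th)
```

Under the blockwise scheme, gradients produced this way would additionally be cut between blocks, e.g., by calling `.detach()` on the state passed across block boundaries.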
## 5 Theoretical guarantees There are several works on designing practical graph-based SNNs that achieve comparable performance with GNNs [22; 40; 48]. However, the basic principles and theoretical groundwork for their performance guarantees are lacking, and related research only shows the similarity between the IF neuron model and the ReLU activation [30]. In this section, we are motivated to bridge the gap between SNNs and GNNs. We present an overview of theoretical results regarding the approximation properties of SpikeGCL with respect to its full-precision counterparts through the following theorem: **Theorem 1** (Informal).: _For any full-precision GNN with a hidden dimension of \(d/T\), there exists a corresponding SpikeGCL such that its approximation error, defined as the \(\ell_{2}\) distance between the firing rates of the SpikeGCL representation and the GNN representation at any single node, is of the order \(\Theta(1/T)\)._ In the order relation \(\Theta(\frac{1}{T})\), we hide factors dependent upon the underlying GNN structure and network depth, as well as characteristics of the input graph, which will be stated precisely in Theorem 1 in Appendix A. Theorem 1 suggests that with a sufficiently large node feature dimension, such that we may set a large simulation length \(T\), we are able to (almost) implement vanilla GNNs with much better computational and energy efficiency. Moreover, the approximation is defined using the firing rates of SNN outputs, which measures only a restricted set of inductive biases offered by SpikeGCL. Consequently, we empirically observe that a moderate level of \(T\) might also provide satisfactory performance in our experiments. Our analysis is presented to shed light on the connection between the computational neuroscience model (e.g., SNNs) and the machine learning neural network model (e.g., GNNs). This connection has been analytically proven under certain conditions and empirically demonstrated through our experiments (see § 6). It can serve as the theoretical basis for potentially combining the respective merits of the two types of neural networks. ## 6 Experiments In this section, we perform experimental evaluations to demonstrate the effectiveness of our proposed SpikeGCL framework. Due to space limitations, we present detailed experimental settings and additional results in Appendix E and Appendix F accompanying the submission. Code is available at [https://github.com/EdisonLeeeee/SpikeGCL](https://github.com/EdisonLeeeee/SpikeGCL) for reproducibility. Figure 3: A comparison between two backpropagation learning paradigms. The backpropagation path during blockwise training is limited to a single block of networks to avoid a large memory footprint and the vanishing gradient problem. **Datasets.** We evaluate SpikeGCL on several graph benchmarks with different scales and properties. Following prior works [43, 36, 48], we adopt 9 common benchmark graphs, including two co-purchase graphs, i.e., Amazon-Photo and Amazon-Computers [33], two co-author graphs, i.e., Coauthor-CS and Coauthor-Physics [33], three citation graphs, i.e., Cora, CiteSeer, and Pubmed [32], as well as two large-scale datasets, ogbn-arXiv and ogbn-MAG, from the Open Graph Benchmark [12]. The detailed introduction and statistics of these datasets are presented in Appendix E.
**Baselines.** We compare our proposed method to a wide range of baselines that fall into four categories: (i) full-precision (supervised) GNNs: GCN [17] and GAT [38]; (ii) 1-bit quantization-based GNNs: Bi-GCN [39], BinaryGNN [1], and BANE [41]; (iii) contrastive methods: DGI [37], GRACE [47], CCA-SSG [43], BGRL [36], SUGRL [26], and GGD [44]; (iv) graph SNNs: SpikingGCN [48], SpikeNet [21], GC-SNN, and GA-SNN [40]. SpikeNet was initially designed for dynamic graphs; we adapt the authors' implementation to static graphs following the practice in [48, 40]. The hyperparameters of all the baselines were configured according to the experimental settings officially reported by the authors and were then carefully tuned in our experiments to achieve their best results. **Overall performance.** The results on six graph datasets are summarized in Table 1. We defer the results on Cora, CiteSeer, and Pubmed to the Appendix. Table 1 shows a significant performance gap between full-precision methods and 1-bit GNNs, particularly with the unsupervised method BANE. Even the supervised methods Bi-GCN and BinaryGNN struggle to match the performance of full-precision methods. In contrast, SpikeGCL competes comfortably with, and sometimes outperforms, advanced full-precision methods. When compared to full-precision GCL methods, our model reaches about 95%\(\sim\)105% of their performance capability across all datasets. Additionally, SpikeGCL performs comparably to supervised graph SNNs and offers the possibility to scale to large graph datasets (e.g., arXiv and MAG). Surprisingly, SpikeGCL achieves a new state-of-the-art performance on the MAG dataset. Overall, the results indicate that binary representations do not necessarily lead to accuracy loss as long as they are properly trained. **Efficiency.** We first provide a comparison of the number of parameters and the theoretical energy consumption of SpikeGCL and full-precision GCL methods. The computation of theoretical energy consumption follows existing works [48, 46, 39] and is detailed in Appendix F. Table 2 summarizes the results in terms of model size and energy efficiency compared to full-precision GCL methods. SpikeGCL consistently shows better efficiency across all datasets, achieving \(\sim\)7kx lower energy consumption and \(\sim\)1/60 the model size on MAG.
\begin{table} \begin{tabular}{l|c c c|c c c c c c} \hline \hline **Method** & **U** & **S** & **B** & **Computers** & **Photo** & **CS** & **Physics** & **arXiv** & **MAG** \\ \hline GCN [17] & & & & 86.5\(\pm\)0.5 & 92.4\(\pm\)0.2 & 92.5\(\pm\)0.4 & 95.7\(\pm\)0.5 & 70.4\(\pm\)0.3 & 30.1\(\pm\)0.3 \\ GAT [38] & & & & 86.9\(\pm\)0.2 & 92.5\(\pm\)0.3 & 92.3\(\pm\)0.2 & 95.4\(\pm\)0.3 & 70.6\(\pm\)0.3 & 30.5\(\pm\)0.3 \\ \hline SpikeNet [21] & & ✓ & & 87.3\(\pm\)0.6 & 92.9\(\pm\)0.1 & **93.4\(\pm\)0.2** & **95.8\(\pm\)0.7** & 66.8\(\pm\)0.1 & - \\ SpikingGCN [48] & & ✓ & & 85.9\(\pm\)0.6 & 92.6\(\pm\)0.7 & 92.6\(\pm\)0.3 & 91.5\(\pm\)0.5 & - & - \\ GC-SNN [40] & & ✓ & & 88.2\(\pm\)0.6 & 92.8\(\pm\)0.1 & 93.0\(\pm\)0.4 & 95.6\(\pm\)0.7 & - & - \\ GA-SNN [40] & & ✓ & & 88.1\(\pm\)0.1 & **93.5\(\pm\)0.6** & 92.2\(\pm\)0.1 & 95.8\(\pm\)0.5 & - & - \\ \hline Bi-GCN [39] & & & ✓ & 86.4\(\pm\)0.3 & 92.1\(\pm\)0.9 & 91.0\(\pm\)0.7 & 93.3\(\pm\)1.1 & 66.0\(\pm\)0.8 & 28.2\(\pm\)0.4 \\ BinaryGNN [1] & & & ✓ & 87.8\(\pm\)0.2 & 92.4\(\pm\)0.2 & 91.2\(\pm\)0.1 & 95.0\(\pm\)0.1 & 67.2\(\pm\)0.9 & - \\ BANE [41] & ✓ & & ✓ & 72.7\(\pm\)0.3 & 78.2\(\pm\)0.3 & 92.8\(\pm\)0.1 & 93.4\(\pm\)0.4 & \textgreater{}3days & \textgreater{}3days \\ \hline DGI [37] & ✓ & & & 84.0\(\pm\)0.5 & 91.6\(\pm\)0.2 & 92.2\(\pm\)0.6 & 94.5\(\pm\)0.5 & 65.1\(\pm\)0.4 & 31.4\(\pm\)0.3 \\ GRACE [47] & ✓ & & & 86.3\(\pm\)0.3 & 92.2\(\pm\)0.2 & 92.9\(\pm\)0.0 & 95.3\(\pm\)0.0 & 68.7\(\pm\)0.4 & 31.5\(\pm\)0.3 \\ CCA-SSG [43] & ✓ & & & 88.7\(\pm\)0.3 & 93.1\(\pm\)0.1 & 93.3\(\pm\)0.1 & 95.7\(\pm\)0.1 & 71.2\(\pm\)0.2 & 31.8\(\pm\)0.4 \\ BGRL [36] & ✓ & & & **90.3\(\pm\)0.2** & 93.2\(\pm\)0.3 & 93.3\(\pm\)0.1 & 95.7\(\pm\)0.0 & 71.6\(\pm\)0.1 & 31.1\(\pm\)0.1 \\ SUGRL [26] & ✓ & & & 88.9\(\pm\)0.2 & 93.2\(\pm\)0.4 & 93.4\(\pm\)0.0 & 95.2\(\pm\)0.0 & 68.8\(\pm\)0.4 & 32.4\(\pm\)0.1 \\ GGD [44] & ✓ & & & 88.0\(\pm\)0.1 & 92.9\(\pm\)0.2 & 93.1\(\pm\)0.1 & 95.3\(\pm\)0.0 & **71.6\(\pm\)0.5** & 31.7\(\pm\)0.7 \\ \hline SpikeGCL & ✓ & ✓ & ✓ & 88.9\(\pm\)0.3 & 93.0\(\pm\)0.1 & 92.8\(\pm\)0.1 & 95.2\(\pm\)0.6 & **70.1\(\pm\)0.1** & **32.0\(\pm\)0.3** \\ \hline \hline \end{tabular} \end{table} Table 1: Classification accuracy (%) on six large-scale datasets. The best result for each dataset is highlighted in **red**. Darker colors indicate larger performance gaps between SpikeGCL and the best results. The missing results are due to the out-of-memory error on a GPU with 24GB memory. (**U**: unsupervised or self-supervised; **S**: spike-based; **B**: binarized) As the time step \(T\) is a critical hyperparameter for SNNs to better approximate real-valued inputs with discrete spikes, we compare the efficiency of SpikeGCL and other graph SNNs in terms of accuracy, training time, GPU memory usage, and theoretical energy consumption with varying time steps from 5 to 30. The results on the Computers dataset are shown in Figure 4. It is observed that SpikeGCL and other graph SNNs benefit from increasing time steps, generally improving accuracy as the time step increases. However, a large time step often leads to more overheads. In our experiments, a larger time step can make SNNs inefficient and become a major bottleneck to scaling to large graphs, as demonstrated by increasing training time, memory usage, and energy consumption. Nevertheless, the efficiency of SpikeGCL is less affected by increasing time steps. The results demonstrate that SpikeGCL, coupled with the compact graph sequence input and the blockwise training paradigm, alleviates this drawback of training deep SNNs.
Overall, SpikeGCL is able to significantly reduce computation costs and enhance the representational learning capabilities of the model simultaneously. **Convergence.** We compared the convergence speed of SpikeGCL with full-precision GCL baselines on Computers, as shown in Figure 5. It is observed that negative-sample-free methods generally converged faster than negative-sample-based ones. However, they still required sufficient epochs to gradually improve their performance. In contrast, SpikeGCL, trained in a blockwise training paradigm, demonstrates a significantly faster convergence speed than full-precision methods. In particular, 1-2 epochs are sufficient for SpikeGCL to learn good representations. Overall, our experiments demonstrate the effectiveness of SpikeGCL in improving the convergence speed of SNNs and provide insights into the benefits of the blockwise training paradigm. ## 7 Conclusion In this work, we present SpikeGCL, a principled graph contrastive learning framework to learn binarized and compact representations for graphs at scale. SpikeGCL leverages the sparse and binary characteristics of SNNs, as well as contrastive learning paradigms, to meet the challenges of increasing graph scale and limited label annotations. The binarized representations learned by SpikeGCL require less memory and fewer computations compared to traditional GCL, which leads to potential benefits for energy-efficient computing. We provide theoretical guarantees and empirical results to demonstrate that SpikeGCL is an efficient yet effective approach for learning binarized graph representations. \begin{table} \begin{tabular}{l|c c|c c|c c|c c|c c} \hline \hline \multirow{2}{*}{**Method**} & \multicolumn{2}{c}{**Computers**} & \multicolumn{2}{c}{**CS**} & \multicolumn{2}{c}{**Physics**} & \multicolumn{2}{c}{**arXiv**} & \multicolumn{2}{c}{**MAG**} \\ \cline{2-11} & \#Param\(\downarrow\) & Energy\(\downarrow\) & \#Param\(\downarrow\) & Energy\(\downarrow\) & \#Param\(\downarrow\) & Energy\(\downarrow\) & \#Param\(\downarrow\) & Energy\(\downarrow\) & \#Param\(\downarrow\) & Energy\(\downarrow\) \\ \hline DGI & 917.5 & 0.5 & 4008.9 & 8 & 4833.3 & 6 & 590.3 & 5 & 590.3 & 568 \\ GRACE & 656.1 & 1.1 & 3747.5 & 17 & 4571.9 & 13 & 328.9 & 21 & 328.9 & 4463 \\ CCA-SSG & 262.4 & 17 & 1808.1 & 152 & 2220.2 & 352 & 98.8 & 78 & 98.8 & 340 \\ BGRL & 683.4 & 25 & 3749.8 & 163 & 4574.2 & 373 & 331.2 & 180 & 331.2 & 787 \\ SUGRL & 193.8 & 13 & 2131.2 & 147 & 2615.1 & 342 & 99.5 & 26 & 99.5 & 117 \\ GGD & 254.7 & 15 & 3747.3 & 140 & 4571.6 & 340 & 30.0 & 100 & 30.0 & 1400 \\ \hline Average & 490.4 & 11.9 & 3198.7 & 104.5 & 3906.0 & 237.6 & 246.4 & 68.3 & 246.4 & 1279.1 \\ SpikeGCL & 60.9 & 0.38 & 460.7 & 104.8 & 564.4 & 10.6 & 73.8 & 0.2 & 6.6 & 18 \\ \hline \hline \end{tabular} \end{table} Table 2: The parameter size (KB) and theoretical energy consumption (mJ) of various GCL methods. The ‘Average’ row denotes the averaged results of the full-precision GCL baselines. A darker color in the SpikeGCL row indicates a larger improvement in efficiency over the baselines. Figure 4: Comparison of SpikeGCL and other graph SNNs in terms of accuracy (%), training time (s), memory usage (GB), and energy consumption (mJ), respectively. Figure 5: Convergence speed comparison among SpikeGCL and full-precision methods.
Blue: negative-sample-based methods; green: negative-sample-free methods. In our extensive experimental evaluation, SpikeGCL achieves performance on par with advanced baselines using full-precision or 1-bit representations while demonstrating significant efficiency advantages in terms of parameters, speed, memory usage, and energy consumption. We believe that our work is promising from a neuroscientific standpoint, and we hope it will inspire further research toward efficient graph representation learning.
2302.05950
Autoselection of the Ensemble of Convolutional Neural Networks with Second-Order Cone Programming
Ensemble techniques are frequently encountered in machine learning and engineering problems since the method combines different models and produces an optimal predictive solution. The ensemble concept can be adapted to deep learning models to provide robustness and reliability. Due to the growth of the models in deep learning, using ensemble pruning is highly important to deal with computational complexity. Hence, this study proposes a mathematical model which prunes the ensemble of Convolutional Neural Networks (CNN) consisting of different depths and layers that maximizes accuracy and diversity simultaneously with a sparse second order conic optimization model. The proposed model is tested on CIFAR-10, CIFAR-100 and MNIST data sets which gives promising results while reducing the complexity of models, significantly.
Buse Çisil Güldoğuş, Abdullah Nazhat Abdullah, Muhammad Ammar Ali, Süreyya Özöğür-Akyüz
2023-02-12T16:18:06Z
http://arxiv.org/abs/2302.05950v1
# Autoselection of the Ensemble of Convolutional Neural Networks with Second-Order Cone Programming ###### Abstract Ensemble techniques are frequently encountered in machine learning and engineering problems since the method combines different models and produces an optimal predictive solution. The ensemble concept can be adapted to deep learning models to provide robustness and reliability. Due to the growth of the models in deep learning, using ensemble pruning is highly important to deal with computational complexity. Hence, this study proposes a mathematical model which prunes the ensemble of Convolutional Neural Networks (CNN) consisting of different depths and layers that maximizes accuracy and diversity simultaneously with a sparse second order conic optimization model. The proposed model is tested on CIFAR-10, CIFAR-100 and MNIST data sets which gives promising results while reducing the complexity of models, significantly. keywords: pruning, SOCP, DNN, ensemble, optimization ## 1 Introduction Machine learning has made great progress in the last few years due to advances in the use of Deep Neural Networks (DNN), but many of the proposed neural architectures come with high computational and memory requirements Blalock et al. (2020). These demands increase the cost of training and deploying these architectures, and constrain the spectrum of devices that they can be used on. Although deep learning has been very promising in many application areas in recent years, it still has shortcomings that need improvement. Even though deep learning can distinguish changes and subtle differences in data with interconnected neural networks, defining the hyperparameters and determining their values before training remains very difficult. For this reason, different pruning methods have been proposed in the literature to reduce the parameters of convolutional networks Han et al. (2016); Hanson and Pratt (1988); LeCun and Cortes (2010); Strom (1997). A common problem of the deep learning pruning algorithms studied in recent years is that the pruning percentage is decided heuristically at the pruning stage, which makes the accuracy of the deep learning algorithm dependent on this pruning parameter. On the other hand, the optimization models proposed with a zero-norm penalty to ensure sparsity ignore the diversity of the layers, as they only take the accuracy into account; combining layers that are close to each other does not increase the accuracy. As a prime example of DNNs, Convolutional Neural Networks (CNNs) are feed-forward architectures originally proposed to perform image processing tasks Li et al. (2021), but they offered such high versatility and capacity that they came to be used in many other tasks, including time series prediction, signal identification, and natural language processing. Inspired by biological visual perception with its local receptive fields Goodfellow et al. (2016), a CNN uses learnable kernels to extract the relevant features at each processing level. This provides the advantage of local processing of information and weight sharing. Additionally, the use of the pooling mechanism allows the data to be down-sampled and the features to be invariant representations of previous features. Recently, a great deal of attention has been paid to obtaining and training compact CNNs through pruning Wen et al. (2016); Zhou et al. (2016).
Generally, pruning techniques differ from each other in the choice of structure, evaluation, scheduling, and fine-tuning Blalock et al. (2020); these choices must be tuned according to the costs and requirements of the application before the network is pruned. Pruning is a systematic reduction of neural network parameters while aiming to maintain a performance profile comparable to the original architecture Blalock et al. (2020). Pruning of parameters in neural networks has been used to reduce the complexity of the models and prevent overfitting. Different methods of pruning deep neural networks have been proposed in the literature, such as Biased Weight Decay Hanson and Pratt (1988), Optimal Brain Damage LeCun et al. (1989), and Optimal Brain Surgeon Hassibi and Stork (1992). These aim to identify redundant parameters, represented by connections within architectures, and perform pruning by removing such redundancies Srinivas and Babu (2015); Han et al. (2016). In another study, a new layer-based pruning technique was proposed for deep neural networks, where the parameters of each layer are pruned independently according to the second-order derivatives of a layer-based error function with respect to the relevant parameters Dong et al. (2017); the drop in estimation performance after pruning is limited to a linear combination of the reconstruction errors occurring in each layer Dong et al. (2017). Some pruning techniques, such as unstructured pruning, work on individual parameters, which induces sparsity and reduces the memory footprint of the architecture, although there is no speed advantage. Other techniques work on groups of parameters, where a neuron, filter, or layer is considered a separate target for the algorithm, thus providing the ability to take advantage of existing computational hardware Li et al. (2016). Evaluation of pruned parameters can be based on relevance to training metrics, absolute value, activation, and contribution of gradients Blalock et al. (2020). The architecture can be considered and evaluated as a whole Lee et al. (2019); Frankle and Carbin (2019), while parameters can be scored locally with reference to a small group of closely related parts of the network Han et al. (2016). Iterative pruning is performed by repeatedly considering a subset of the architecture Han et al. (2016), whereas other pruning algorithms operate in a single-step process Liu et al. (2019). Changing the pruning rate according to a certain rule has also been tried and implemented in the literature Gale et al. (2019). After applying the pruning algorithm, some techniques continue to use the same trained weights, while in other studies the reduced architecture is retrained to a certain state Frankle and Carbin (2019) or reinitialized and restarted from the initial state Liu et al. (2019). Deep ensembling techniques increase the reliability of DNNs, as multiple diverse models are combined into a stronger model Fort et al. (2019). It was shown that members of deep ensemble models provide different predictions on the same input, increasing diversity Garipov et al. (2018) and providing better calibrated probabilities Lakshminarayanan et al. (2017). In one recent study, an ensemble technique is chosen to increase the complexity by training two EfficientNet-b0 models end-to-end with bagging. An adaptive ensemble technique is used by fine-tuning through a trainable combination layer, which outperforms other studies on widely known datasets such as CIFAR-10 and CIFAR-100 Bruno et al. (2022).
Since pre-trained models boost efficiency while simplifying hyperparameter tuning, performance increases on these datasets have been achieved with the help of transfer learning and pre-training Kolesnikov et al. (2020); Sun et al. (2017). Researchers have accelerated CNNs but obtained lower performance metrics on CIFAR-100. One of these techniques is the FATNET algorithm, where high-resolution kernels are used for classification while the number of trainable parameters is reduced based on the Fourier transform Ibadulla et al. (2022). Ensemble pruning methods are used to reduce the computational complexity of ensemble models and remove the duplicate models existing in the ensemble. Finding a small subset of classifiers that performs equivalently to a boosted ensemble is an NP-hard problem Tamon & Xiang (2000). Search-based methods can be used to select an ensemble as an alternative. These methods conduct a search in the ensemble space and evaluate the performance of various subsets of classifiers Sagi & Rokach (2018). Deep learning can actually be thought of as multi-layer artificial neural networks, which constitute an example of ensemble learning. The overall success of ensemble learning is proportional to the average accuracy of the ensemble and the diversity of each learner within the ensemble. However, accuracy and diversity involve a trade-off: an increase in accuracy within the ensemble causes a decrease in diversity, which implies a redundant merging of similar learners. It has been shown that a pruning problem can be reframed as a quadratic integer problem for classification to seek a subset that optimizes the accuracy-diversity trade-off using a semidefinite programming technique Zhang et al. (2006). In the vein of that research, multiple optimization models have been proposed to utilize the quadratic nature of pruning problems. Sparse problem minimization and low-order approximation techniques have recently been used in numerous fields such as computer vision, machine learning, telecommunications, and more Quach et al. (2017). In such problems, including the zero-norm enforces sparsity, which makes the objective function of the optimization problem non-convex. For this reason, different relaxation techniques have been proposed in the optimization literature for non-convex objective functions. For a class of non-convex quadratically constrained problems, SDP relaxation techniques have been developed based on a difference-of-convex (DC) decomposition strategy Zheng et al. (2011); Ozogur-Akyuz et al. (2020). However, since SDP algorithms for high-dimensional problems are slow and occupy a lot of memory, such problems have been relaxed into quadratic conic problems. Second-Order Cone Programming (SOCP) is a problem class that lies between linear (LP) or quadratic programming (QP) and semidefinite programming (SDP). Like LP and SDP, SOCPs can be solved efficiently with primal-dual interior-point methods Potra & Wright (2000). Also, various engineering problems can be formulated as quadratic conic problems Lobo et al. (1998). In the literature, new convex relaxations are suggested for non-convex quadratically constrained quadratic programming (QCQP) problems related to this issue. Since the basic semidefinite programming relaxation is often too loose for general QCQP, recent studies have focused on enhancing convex relaxations using valid linear or SOC inequalities.
Valid second-order cone constraints were created for the non-convex QCQP, and the duality gap was reduced by using these valid constraints Jiang & Li (2019). In addition, a branch and bound algorithm has been developed in the literature to solve continuous optimization problems in which a non-convex objective function is minimized under non-convex inequality constraints that satisfy certain solvability assumptions Beck & Pan (2017). In this study, we propose a novel optimization model to prune an ensemble of CNNs with a Second-Order Cone Programming (SOCP) model. The proposed optimization model selects the best subset of models in the ensemble of CNNs by optimizing the accuracy-diversity trade-off while reducing the number of models and the computational complexity. The remainder of the paper is organized as follows: the loss function and the relaxations leading to the SOCP model are introduced in Section 2; the construction of the ensemble of CNNs, hardware and software specifications, datasets, and results are given in Section 3; and conclusions and future work are presented in Section 4. ### Contribution of The Study The contributions of the proposed study are as follows: * One of the unique values this study adds to the literature is that the pruning percentage, a parameter needed throughout the deep learning literature, is obtained directly from the proposed second-order cone optimization model. * Another important original value is a remedy for the common neglect of the diversity criterion in the literature, where deep learning networks are pruned on the basis of accuracy performance alone. The objective function in the proposed optimization model simultaneously optimizes the accuracy and diversity criteria while pruning the ensemble of CNNs. * SOCP models give more accurate and faster results than other sparse optimization models. Hence, the number of models in the ensemble of CNNs can be reduced significantly by using the proposed SOCP model. ## 2 Methods and Materials Suppose a system consists of \(M\) binary classifiers, where the \(i^{th}\) classifier outputs \(f_{i}^{1}\), the probability that a selected sample belongs to Class 1. In this system, to find a consensus over all classifiers, a weighted majority within the ensemble can be defined as Polikar (2009): \[f_{ens}=\sum_{i=0}^{M-1}\omega_{i}f_{i}^{1}. \tag{1}\] In Equation (1), the weights \(\omega_{i}\) express the success of each classifier; these \(\omega_{i}\) values are determined through the optimization model that will be introduced next in Equation (2). If we denote the output distribution of the \(i^{th}\) classifier by \(p_{i}\) and the ground truth by \(f_{gt}\in\{0,1\}\), we can define the loss function of the ensemble as follows: \[L_{ens}\left(f\right)=\alpha\left(f_{ens}-f_{gt}\right)^{2}+\left(1-\alpha \right)\left(1-\left(H\left(\sum_{i=0}^{M-1}\omega_{i}p_{i}\right)-\sum_{i=0} ^{M-1}\omega_{i}H(p_{i})\right)\right). \tag{2}\] In Equation (2), the first term defines the error, and the second term represents \(\left(1-\text{diversity}\right)\). Minimizing \(L_{ens}\) therefore maximizes accuracy and diversity simultaneously, while the trade-off between these two terms is determined by \(\alpha\). 
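To make the objective concrete, the following minimal Python sketch evaluates the loss of Equation (2) for a single sample; the function and variable names (e.g. `ensemble_loss`, `probs`) are ours and are not part of the original formulation.

```python
import numpy as np

def shannon_entropy(p, eps=1e-12):
    """Elementwise Shannon entropy term H(p) = -p * log(p)."""
    p = np.clip(p, eps, 1.0)
    return -p * np.log(p)

def ensemble_loss(w, probs, f_gt, alpha):
    """Loss of Eq. (2) for M binary classifiers on one sample.

    w     : (M,) candidate classifier weights
    probs : (M,) probabilities f_i^1 assigned to Class 1 by each model
    f_gt  : ground-truth label in {0, 1}
    alpha : accuracy/diversity trade-off hyperparameter
    """
    f_ens = np.dot(w, probs)                 # weighted majority, Eq. (1)
    accuracy_term = (f_ens - f_gt) ** 2
    # diversity: entropy of the mixture minus the mixture of entropies
    diversity = shannon_entropy(f_ens) - np.dot(w, shannon_entropy(probs))
    return alpha * accuracy_term + (1.0 - alpha) * (1.0 - diversity)

# example: three classifiers with equal weights
print(ensemble_loss(np.ones(3) / 3, np.array([0.9, 0.6, 0.8]), 1.0, alpha=0.3))
```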
Equation (2) can be rewritten for multi-class classification problems, where \(C\) is the number of classes and \(f_{gt}\in\{0,1\}^{C}\) represents the correct classification vector: \[\begin{split} L_{ens}=\alpha\sum_{j=0}^{C-1}\frac{\left(f_{ens,j}-f _{gt,j}\right)^{2}}{C}+\\ (1-\alpha)\left(1-\frac{\sum_{j=0}^{C-1}\left(H\left(\sum_{i=0}^{M -1}\omega_{i}p_{i,j}\right)-\sum_{i=0}^{M-1}\omega_{i}H\left(p_{i,j}\right) \right)}{C}\right).\end{split} \tag{3}\] Here, the first term refers to the accuracy (quality), while the second term refers to \((1-\text{diversity})\), defined via the Jensen-Shannon construction built from the Shannon entropy function (\(H(x)=-x\log x\)). The hyperparameter \(\alpha\) provides the trade-off between the accuracy and diversity terms. Additional information on the convex form of the Shannon entropy can be found in (A). Thus, the ensemble pruning model which optimizes accuracy and diversity can be written as \[\min_{\omega}\quad L(\omega;\alpha) \tag{4}\] where \(\alpha\) is determined through cross validation. The solution of Equation (4) determines the weight of each model in the ensemble. In order to obtain sparse weights, we introduce a regularization term: \[\min_{\omega}\quad L(\omega;\alpha)+\lambda\|\omega\|_{0}. \tag{5}\] Equation (5) can be written in a more generalized form: \[\min_{x}f(x)+\lambda\|x\|_{0}. \tag{6}\] The function \(f(x)\) in Equation (6) is a convex quadratic function, obtained by using the Shannon entropy in the convex form whose conceptualization is given in A. In Equation (6), \(\|x\|_{0}\) is the zero-norm enforcing sparsity and \(\lambda\) is the regularization parameter. Since the proposed model will be applied to highly correlated and large ensembles of CNNs, the slightest change within the dataset could cause a large error, which is called overfitting. In order to avoid this problem, memory usage and model size are reduced by applying the regularization term \(\lambda\) to the problem. The zero-norm \(\|x\|_{0}\) is preferred to ensure sparsity and to penalize the number of non-zero entries of the coefficient vector. However, the zero-norm is not a valid norm to use in practice, since it is not differentiable; the best convex approximation of \(\|x\|_{0}\) is \(\|x\|_{1}\), which is convex Ramirez et al. (2013). Thus, the \(\|x\|_{0}\) norm is relaxed by using the \(\|x\|_{1}\) norm as the best approximation, and Equation (6) can be rewritten as \[\min_{x}f(x)+\lambda\|x\|_{1}, \tag{7}\] where the minimization problem becomes a convex quadratic problem. Equation (7) can be written as an SOCP problem by algebraic reformulations as follows: \[\|x\|_{1}=\sum_{i=1}^{n}|x_{i}|\leq u,\] where \(x_{i}\in K_{q}^{2}\), \(i=1,\ldots,n\), and \(u\) is defined to be an additional independent variable. Thus, by restricting the variable \(x\) into the cone and adding the additional constraint \(x\in K\), Equation (7) becomes: \[\begin{array}{rl}\min&f(x)+\lambda u\\ \text{s.t.}&\sum_{i=1}^{n}|x_{i}|\leq u,\\ &x\in K.\end{array}\] A convex cone \(K\) has the form \(K=K^{n_{1}}\times\ldots\times K^{n_{r}}\times\mathbb{R}^{n_{l}}\), where each \(K^{n_{i}}\) is a quadratic cone of dimension \(n_{i}\): \[K^{n_{i}}=\left\{x_{i}=\begin{bmatrix}x_{i_{1}}\\ x_{i_{0}}\end{bmatrix}\in\mathbb{R}^{n_{i}-1}\times\mathbb{R}:\|x_{i_{1}}\|_{2}\leq x_{i_{0}}\right\},\] where \(\|x_{i_{1}}\|_{2}\) is the standard Euclidean norm. Based on the relaxation techniques given in B, an additional term \(t=x^{T}Qx\) is introduced as \[\begin{array}{rl}\min&t+c^{T}x\\ \text{s.t.}&x^{T}Qx\leq t,\end{array}\] 
and the constraint is transformed as follows: \[t\geq x^{T}Qx,\] \[0\geq x^{T}Qx-t,\] \[0\geq 4x^{T}Qx-4t,\] \[0\geq 4x^{T}Qx+(1-t)^{2}-(1+t)^{2},\] where \(t\geq 0\). Hence, \(1+t\geq 0\) can be added to the problem as an additional constraint, where \(t\geq 0\), \(x^{T}Qx\geq 0\), \(t\geq x^{T}Qx\), and, since \(Q\) is positive definite, the Cholesky factorization \(Q=LL^{T}\) can be applied Higham et al. (1990): \[1+t\geq\sqrt{4x^{T}Qx+(1-t)^{2}},\quad 1+t\geq 0,\] \[1+t\geq\left\|\begin{bmatrix}2L^{T}x\\ 1-t\end{bmatrix}\right\|_{2},\quad 1+t\geq 0,\] \[\begin{bmatrix}1+t\\ 2L^{T}x\\ 1-t\end{bmatrix}\in K.\] Accordingly, Equation (6) can be reformulated as a Second-Order Cone problem: \[\begin{array}{rl}\min&\alpha t+(1-\alpha)c^{T}x+\lambda u\\ \text{s.t.}&\begin{bmatrix}1+t\\ 2L^{T}x\\ 1-t\end{bmatrix}\in K,\\ &\sum_{i=1}^{n}|x_{i}|\leq u,\\ &x\in K,\end{array} \tag{8}\] where \(K=K^{n_{1}}\times\ldots\times K^{n_{r}}\times\mathbb{R}^{n_{l}}\), each \(K^{n_{i}}\) being either a second-order cone or a rotated second-order cone \(K^{n_{i}}_{r}\), while \(x_{i}\in K^{2}_{q}\). ## 3 Experiments and Results Traditionally, the generation of ensembles uses bootstrap aggregation (bagging) techniques; however, it has been highlighted that random initialization of the deep models in the ensemble is more effective than bagging approaches Lee et al. (2015). These ensembles of randomly initialized DNNs explore the function space of the architectures more extensively, covering a wider span of the accuracy-diversity plane and strongly competing with methods such as weight averaging Izmailov et al. (2018) and local Gaussian approximations Nguyen-Tuong et al. (2009) that produce models with low diversity in the function space. In this study, to achieve a statistical measure for the ensemble with the desired diversity and accuracy, we chose to generate 300 CNN models by random selection over the following design specifications (a generator sketch follows the hardware description below): * Convolution Blocks: Each convolution block consists of two convolution layers, one batch normalization layer, a ReLU (Rectified Linear Unit) non-linearity activation function, and one pooling layer. The number of blocks was randomly selected from 3 to 5. * Convolution Filters: For each convolution layer, the number of convolution filters is randomized in the range of 32 to 256 with a step size of 32. * Pooling Mechanism: The pooling mechanism is randomly chosen between average pooling and maximum pooling. * Classification Head (Fully Connected layer) size: The fully connected (Dense) layer has its size (number of neurons) randomized in the range of 20 to 100 with a step of 10. * Drop Out Layer Percentage: The drop out regularization layer has its percentage randomized in the range of 0 (no Drop Out) to 0.5 with a step of 0.1. * Learning Rate: The optimizer learning rate was randomized in a range of \(1e^{-4}\) to \(1e^{-2}\) with logarithmic sampling. * All CNN models are trained using the ADAM optimizer and the Sparse Categorical Cross-Entropy loss function. ### Hardware, Software and Data Sets The available hardware system is a desktop personal computer with an Intel i9-9900X CPU, 64 Gigabytes of system RAM, and an NVIDIA RTX 2080ti GPU with 12 Gigabytes of RAM, running an _UBUNTU_ 20 LTS operating system. The utilized software platform is _Tensorflow-GPU_ (ver. 2.4.1) supported by the CUDA toolkit (ver. 10.1.243) and cuDNN (ver. 7.6.5). All implementations were performed in a _Python_ environment (ver. 3.9.5). 
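As a concrete illustration of the randomized design specifications listed above, the sketch below builds one ensemble member with Keras, matching the Tensorflow toolchain just described; the helper name `random_cnn` and the exact layer ordering inside a block are our assumptions rather than the authors' released code.

```python
import random
import tensorflow as tf
from tensorflow.keras import layers, models

def random_cnn(input_shape=(32, 32, 3), n_classes=10):
    """Builds one randomized CNN member following the design ranges above."""
    pool = random.choice([layers.MaxPooling2D, layers.AveragePooling2D])
    model = models.Sequential([layers.InputLayer(input_shape=input_shape)])
    for _ in range(random.randint(3, 5)):              # 3 to 5 convolution blocks
        filters = random.randrange(32, 257, 32)        # 32..256, step 32
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.Conv2D(filters, 3, padding="same", activation="relu"))
        model.add(layers.BatchNormalization())
        model.add(pool(pool_size=2, padding="same"))
    model.add(layers.Flatten())
    model.add(layers.Dense(random.randrange(20, 101, 10), activation="relu"))
    model.add(layers.Dropout(random.choice([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])))
    model.add(layers.Dense(n_classes, activation="softmax"))
    lr = 10 ** random.uniform(-4, -2)                  # logarithmic sampling
    model.compile(optimizer=tf.keras.optimizers.Adam(lr),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

ensemble = [random_cnn() for _ in range(300)]          # 300 randomized members
```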
The utilized data sets are image classification tasks; three were chosen with differences in input size and number of classes to verify the robustness of the proposed method to changes in data distribution. The chosen datasets are as follows: * **CIFAR-10:** This data set consists of 60000 32x32 resolution colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images Krizhevsky et al. (a). * **CIFAR-100:** As with CIFAR-10, the data set consists of 60000 32x32 resolution colour images, in 100 classes, resulting in 600 images per class. Similarly to CIFAR-10, there are 50000 training images and 10000 test images Krizhevsky et al. (b). * **MNIST:** This data set contains 70000 28x28 resolution grey scale handwritten digits in 10 classes. There are 60000 training images and 10000 test images LeCun & Cortes (2010). In the generation of the ensemble CNN models, the chosen data sets were utilized in the standard train, validation and test splits. The prediction classes and probabilities were recorded to constitute the data for the SOCP process. The samples of CIFAR-10 and CIFAR-100 were split as 40k for training and 10k for validation, while the MNIST data set was divided into 40k samples for training and 20k for validation. All test splits comprised 10k samples. The pseudo code of the proposed ensemble pruning approach by SOCP modeling is given below in Algorithm 1. In Equation (8), \(\alpha\) and \(\lambda\) are hyperparameters that need to be tuned by cross-validation. The ranges considered for each dataset were \(\alpha\in(0.1,0.2,0.3,0.4,0.5)\) and \(\lambda\in(0.1,0.3,0.5,0.7,0.9)\), where the best values are determined by cross-validation. The solution of the SOCP model given by Equation (8) produces the weight of each model. After obtaining the sparse weights, the CNN models corresponding to non-zero weights are chosen within the ensemble, which leads to the subensemble. The final classification decision is performed by voting among the models in the subensemble. Several thresholds were experimented with during validation, leading to different pruning extents (numbers of remaining models), and favorable results were maintained, as indicated in Table (1). Threshold values were decided on the training set for weights approaching zero in the sparse solutions of Equation (8). The predictions of the selected models are then combined through a voting aggregation to obtain the final predictions that are used to compute the test scores. In the sparse solution of Equation (8), the weight values closest to zero were discarded, and voting is performed among the CNN models retained in the corresponding subensemble. The voting corresponds to the weighted sum given in Equation (1), since the target values are class labels in the classification problem; in a regression problem, the weighted sum can be taken directly. Table 1 reports the prediction values (test accuracy) of the models on the test set, together with the numbers of models and the threshold values, for the pruned and full ensemble models with voting. Table 2 shows the maximum accuracy estimate (Max. Accuracy score on the test set), average accuracy estimate (Avg. Accuracy score on the test set), and minimum accuracy estimate of the pruned ensemble models and the full ensemble models without using the voting method. 
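A hedged sketch of how the pruning-and-voting pipeline described above could be reproduced is given below, using the `cvxpy` modeling library, which compiles the convex program to a cone solver internally. The matrix `Q` and vector `c` (the quadratic and linear parts of the loss assembled from the recorded validation predictions), the nonnegativity constraint, and the helper names are our assumptions, not the authors' Algorithm 1.

```python
import numpy as np
import cvxpy as cp

def prune_ensemble(Q, c, alpha, lam, threshold=1e-3):
    """Solves the relaxed sparse problem (7) and returns the kept model indices.

    Q : (M, M) symmetric positive-definite matrix collecting the quadratic
        part of the accuracy/diversity loss (assembled from validation data)
    c : (M,) linear part of the loss
    """
    M = c.shape[0]
    w = cp.Variable(M, nonneg=True)               # nonnegativity is our assumption
    objective = cp.Minimize(alpha * cp.quad_form(w, Q)
                            + (1.0 - alpha) * (c @ w)
                            + lam * cp.norm1(w))
    cp.Problem(objective).solve(solver=cp.ECOS)   # compiled to a cone program
    weights = np.asarray(w.value).ravel()
    kept = np.flatnonzero(weights > threshold)    # drop weights approaching zero
    return kept, weights

def vote(member_probs, kept):
    """Final decision by voting among the subensemble, cf. Eq. (1).

    member_probs : (M, n_samples, n_classes) softmax outputs of all members
    """
    return np.argmax(member_probs[kept].sum(axis=0), axis=1)
```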
The resulting 56 pruned models for CIFAR-10, 98 pruned models for CIFAR-100, and 170 pruned models for MNIST performed close to the performance of the full ensembles of 300 models, and the pruned models are more advantageous in terms of time and complexity than the unpruned ones. ## 4 Conclusions and Future Work This study develops a mathematical model that prunes an ensemble of CNNs with a sparse SOCP formulation, in response to a need in the literature. The strength of this paper is improving the performance and decreasing the complexity of the ensemble of CNNs through pruning. An important aspect to be considered in modeling the pruning algorithm is the accuracy performance of the randomly generated networks at different depths and the richness of diversity among them. However, there is a trade-off between these two terms; in other words, diversity is compromised as the accuracy increases. The mathematical model developed in this study is a pruning algorithm that keeps this trade-off at the optimum level. The proposed pruning algorithm is generic, being independent of the domain of the data set. The success rates of pruned and full ensemble models were compared by applying them to data sets from different areas. It was observed that the difference between experiments is around 2%, so the system can be considered robust. Since the number of models is reduced significantly, similar or better accuracy scores are obtained on all data sets. Recent studies in the literature Bruno et al. (2022); Kolesnikov et al. (2020); Sun et al. (2017) lead to the conclusion that the use of pre-trained models and transfer learning increases model performance and that ensembling techniques outperform other state-of-the-art models. For this reason, as future work, we plan to test the SOCP model on an ensemble created by using the models in the literature instead of the randomly generated ensemble used here.
2302.13854
A Deep Neural Network Based Reverse Radio Spectrogram Search Algorithm
Modern radio astronomy instruments generate vast amounts of data, and the increasingly challenging radio frequency interference (RFI) environment necessitates ever-more sophisticated RFI rejection algorithms. The "needle in a haystack" nature of searches for transients and technosignatures requires us to develop methods that can determine whether a signal of interest has unique properties, or is a part of some larger set of pernicious RFI. In the past, this vetting has required onerous manual inspection of very large numbers of signals. In this paper we present a fast and modular deep learning algorithm to search for lookalike signals of interest in radio spectrogram data. First, we trained a β-Variational Autoencoder on signals returned by an energy detection algorithm. We then adapted a positional embedding layer from the classical Transformer architecture to embed additional metadata, which we demonstrate using a frequency-based embedding. Next we used the encoder component of the β-Variational Autoencoder to extract features from small (~715 Hz, with a resolution of 2.79 Hz per frequency bin) windows in the radio spectrogram. We used our algorithm to conduct a search for a given query (encoded signal of interest) on a set of signals (encoded features of searched items) to produce the top candidates with similar features. We successfully demonstrate that the algorithm retrieves signals with similar appearance, given only the original radio spectrogram data. This algorithm can be used to improve the efficiency of vetting signals of interest in technosignature searches, but could also be applied to a wider variety of searches for "lookalike" signals in large astronomical datasets.
Peter Xiangyuan Ma, Steve Croft, Chris Lintott, Andrew P. V. Siemion
2023-02-24T04:28:46Z
http://arxiv.org/abs/2302.13854v2
# A Deep Neural Network Based Reverse Radio Spectrogram Search Algorithm ###### Abstract We developed a fast and modular deep learning algorithm to search for lookalike signals of interest in radio spectrogram data. First, we trained an autoencoder on filtered data returned by an energy detection algorithm. We then adapted a positional embedding layer from the classical Transformer architecture to a frequency-based embedding. Next we used the encoder component of the autoencoder to extract features from small (\(\sim 715\,\mathrm{Hz}\) with a resolution of \(2.79\,\mathrm{Hz}\) per frequency bin) windows in the radio spectrogram. We used our algorithm to conduct a search for a given query (encoded signal of interest) on a set of signals (encoded features of searched items) to produce the top candidates with similar features. We successfully demonstrate that the algorithm retrieves signals with similar appearance, given only the original radio spectrogram data. keywords: Deep Learning - Machine Learning - Signal Processing - Radio Astronomy - Technosignatures ## 1 Introduction ### Interference Rejection in Radio Astronomy The rejection of radio frequency interference (RFI) is a perennial challenge for radio astronomy, particularly given the increase in satellite constellations that transmit at a range of radio frequencies and are detectable even at remote observing sites. RFI rejection is traditionally performed using statistical or machine learning techniques that risk rejecting a potential signal of interest (SOI) in the process of flagging RFI signals (Pinchuk and Margot, 2022). This is a particular problem when searching for astrophysical transients or for technosignatures, since the signal morphology may not be known ahead of time and the signal may be rejected before a human has the opportunity to review it. Novel deep learning algorithms have been successfully used to address these kinds of problems (Ma et al., 2023). An alternative approach, which is relatively unexplored, is to start with the SOI, search through some database of signals (including RFI), and, if the signals match, then decide whether to keep or discard the SOI. This is of particular interest in technosignature searches, where the vetting of candidate signals (e.g. BLC1; Sheikh et al., 2021) is rather onerous using existing methods. The ability to find additional examples of a given SOI at different times or in different parts of the observed band can be invaluable in determining the nature of the signal. The focus of this paper is to develop an algorithm that allows us to reverse search a given SOI and return similar signals. We take inspiration from well-known reverse image search algorithms. ### Classical Reverse Image Search Algorithms Reverse image searches are commonly employed in search engines and social media. These algorithms take an input image as a search term and return the locations of the same or similar images from across the web. These sites use algorithms like Scale-Invariant Feature Transforms (SIFT; Lowe, 1999), Maximally Stable Extremal Regions (MSER; Matas et al., 2004) or Bag of Words/Vocabulary Tree (Csurka et al., 2004) to power their search. Here we will briefly review some of these approaches and extend them to our work. 
#### 1.2.1 Scale-Invariant Feature Transforms A Scale-Invariant Feature Transform (SIFT) is an algorithm that takes an image, locates special "locally distinct points", and describes the features around such points using measurements such as the gradient in intensity or maximum intensity, etc. (Lowe, 1999). This process produces pairs of vectors: the coordinates of the locally distinct point, and the corresponding descriptor vector that contains the features of that point. The locally distinct points are produced by a difference-of-Gaussians process performed at varying resolutions of the image. To compute the descriptor vector, we measure the gradients in pixel intensity over a region about the locally distinct points. Together this builds a feature extractor. Finally, to perform a search, we match the key points between images and check the similarity between the descriptor vectors. More concretely, we can use a technique called a Vocabulary Tree, described in section 1.2.3. #### 1.2.2 Maximally stable extremal regions The maximally stable extremal regions (MSER) algorithm, like SIFT, attempts to find key points in the image. Specifically, it looks for objects called "blobs", defined as areas of an image that have connected elements, contrasting backgrounds, and close-to-uniform pixel intensities (Matas et al., 2004). MSER works by taking various thresholds in the range (0, 255) and blacking or whitening out pixels depending on this threshold. The blacking and whitening of pixels creates the "blobs". We perform a connected component analysis and track how the blobs evolve as we adjust the threshold. MSER helps pull out small regions of images that contain distinctive boundaries, which we can use to label distinct points and compute descriptor vectors for each blob. These can then be used to compare and match. #### 1.2.3 Bag of Words / Vocabulary Tree The Vocabulary Tree approach in computer vision effectively implements a Bag of Words (BoW) model to compare images to each other (Csurka et al., 2004). We first run a local feature extractor like SIFT (section 1.2.1). We then construct a "codebook" containing codewords that describe several similar patches identified by SIFT. One simple means of determining codewords is to use K-Means clustering to produce centroids, which we denote as codewords. We can then compare and contrast images by comparing the corresponding codebooks using a variety of algorithms. However, a key limitation of this approach is that SIFT ignores the spatial relationships between local patches, which we know are incredibly important in describing an image accurately. This leaves room for improvement. ### Deep Learning for Computer Vision Deep learning overcomes some of the shortcomings of the aforementioned algorithms and has been proven to effectively solve a wide range of image problems with the advent of Convolutional Neural Networks (CNNs; LeCun et al., 1999). CNNs are simple in that they are traditional neural networks with the addition of convolutional layers. These layers operate by performing convolution operations between the input data and a kernel. These operations carry a built-in inductive bias: they are equivariant to translations, a fundamental property of images that the algorithm exploits. More concretely, this is because, in images, small translations often do not change the prediction outcome (Goodfellow et al., 2016). 
For example, if we take a picture of a dog and move the dog 5 pixels to the right, the image is still that of a dog. By using a convolutional layer we are baking this assumption into the model without requiring the model to learn it from scratch. More formally, this allows us to restrict the priors on the weights, thus reducing the size of the model and allowing vast improvements in both performance and scale. More specifically, CNNs have been proven to be efficient extractors of both local and global features, which addresses the shortcomings described in Section 1.2.3. Due to their exceptional performance on a wide range of image tasks, CNNs have been widely adopted in industry for a variety of computer vision challenges (Krizhevsky et al., 2017; Koul et al., 2019), including for reverse image search algorithms (Singh and Gowdar, 2021). #### 1.3.1 Autoencoders Autoencoders are special cases of CNNs, employing a symmetrical CNN architecture that takes in an input and outputs a reconstruction of the original input. However, these models have a bottleneck that constricts the flow of data (Baldi, 2012). This bottleneck divides the network into two parts: an encoder and a decoder. The encoder takes an input image and compresses the data through the bottleneck, and the decoder attempts to reverse that process. Intuitively, this builds an efficient feature extractor, since the encoder is attempting to reduce the input to the most important features such that the decoder can reconstruct the original input (Goodfellow et al., 2016). This kind of automatic feature extraction technique will serve as the core of our approach. ### Problem Statement Despite these algorithms being industry-standard, out-of-the-box solutions fail to satisfy our needs. Firstly, these traditional (not deep learning) techniques disregard large-scale spatial features, as described in section 1.2.3. Traditional algorithms fail to capture complex structures within the data. This is partially the reason why traditional computer vision techniques are being superseded by deep learning in tackling computer vision problems. Secondly, under certain circumstances, we want to bias our model towards particular features, for example, frequency. Currently, there is no encoding of information on the radio frequency of the signal in any of these methods. During our search process, sometimes we wish to match signals that not only appear similar to each other, but are also located in similar regions of the frequency band. On the surface, this appears to be a trivial problem: simply filter out the candidates after the search is completed. However, there exists a trade-off between visually similar signals and signals that are just close by in frequency. Thus, if one wants to search for both similar-appearing signals and signals close by in frequency, we need to "bake in" the information about the frequency into the search process rather than apply it as a secondary filter. Classical search algorithms do not have an out-of-the-box solution for embedding complex features such as signal frequency. The central questions we want to answer are: _Given a signal in a radio spectrogram, can we find all lookalike signals? Can we also find similar signals in similar regions of the frequency band? Can we make these approaches modular?_ ## 2 Methods We begin by outlining the structure of our approach. First, we build a feature extractor using an autoencoder. 
To do so we describe our data source in section 2.1, the cleaning of the data in section 2.2, and data preprocessing in section 2.3. We then move to building the model in 2.4. We train and test the model in section 2.5. We extend the model capabilities using a _modular_ frequency embedding strategy discussed in 2.6, and finally assemble the algorithm in section 2.7. ### Data Source The training, validation, and testing datasets are derived from real observations from Breakthrough Listen's Green Bank Telescope \(1.8-2.9\,\mathrm{GHz}\) dataset, sourced from several different observational campaigns (Enriquez et al., 2017; Price et al., 2018; Price et al., 2020; Price et al., 2021). We used the high-frequency resolution data product where each frequency bin is \(2.79\,\mathrm{Hz}\) wide (Lebofsky et al., 2019), since our goal is to tackle data that contains RFI, and RFI can have fine resolution in the frequency domain. The data is open source and available from the Breakthrough Listen Open Data Archive1. Footnote 1: [http://seti.berkeley.edu/opendata](http://seti.berkeley.edu/opendata) ### Data Filtering Energy Detection Since we are working with real data, we need to balance the training features. Large regions of the spectrograms consist of Gaussian noise, so if we did not filter the data to extract signals, then the resulting model would be biased towards generating noise rather than real signals. To perform this filtering we apply a simple energy detection process2. Footnote 2: [https://github.com/FXi96/SEII-Energy-Detection](https://github.com/FXi96/SEII-Energy-Detection) First, we perform simple bandpass filtering on the entire observation. This removes the polyphase filterbank shape imposed by the first of a two-stage channelization procedure (Lebofsky et al., 2019). This is done by collapsing the spectrogram in the time dimension and fitting a piece-wise polynomial spline to each \(\sim 3\,\mathrm{MHz}\)-wide coarse channel. We subtract the fitted polynomial from the data. Finally, we iterate through windows of size \(715\,\mathrm{Hz}\) and search for excess energy above the expectations of Gaussian random noise. We chose an S-score threshold of \(512\) (D'Agostino and Pearson, 1973). We also perform a complementary energy detection process where we select regions consistent with Gaussian noise by inverting the threshold condition. When constructing the training set we use an equal number of "signal" and "noise" regions, thus balancing the dataset. ### Data and Preprocessing To create the training set we randomly draw 30 observations from a total of \(12{,}000\), and draw an additional six observations for the test set (excluding those already selected as part of the training set). We chose \(30\) and \(6\) to keep computing time reasonable: running energy detection required 30 minutes per file, and 30 observations already provided more than 2 million training samples, which is more than enough to demonstrate our algorithm's capabilities. The spectrograms have a time resolution of \(18.25\,\mathrm{s}\) and a frequency resolution of \(2.79\,\mathrm{Hz}\), giving the dataset \(3\times 10^{8}\) frequency channels. We then split the band into \(\sim 715\,\mathrm{Hz}\) "snippets". This resulted in a training set of approximately 2 million snippets and a test set of 300,000 snippets. We then log-normalize the data, add a constant to make all the values positive, and scale the data to have a final range between 0 and 1. Examples of the resulting snippets are shown in Figure 1. 
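As an illustration of this preprocessing, the following numpy sketch splits a spectrogram into snippets and applies the described normalization; the window width of 256 frequency bins (≈715 Hz at 2.79 Hz per bin) is inferred from the stated numbers, and the function name is ours.

```python
import numpy as np

def preprocess_snippets(spectrogram, snippet_bins=256):
    """Split a spectrogram into fixed-width snippets, normalized per snippet.

    spectrogram  : (n_time, n_freq) array of power values
    snippet_bins : channels per snippet; 715 Hz / 2.79 Hz per bin ~ 256 bins
    """
    n_freq = spectrogram.shape[1]
    snippets = []
    for start in range(0, n_freq - snippet_bins + 1, snippet_bins):
        s = np.log(np.maximum(spectrogram[:, start:start + snippet_bins], 1e-30))
        s = s - s.min()                      # shift so every value is positive
        s = s / max(s.max(), 1e-12)          # scale into the range [0, 1]
        snippets.append(s)
    return np.stack(snippets)                # inputs (and targets) for training
```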
Note that the normalization is done per snippet, independently of the other samples. These snippets are used as inputs to the autoencoder, and the targets are the same snippets. ### Autoencoder As described in section 1.3.1, our autoencoder consists of an encoder and a decoder (Baldi, 2012). The encoder consists of five convolutional layers (LeCun et al., 1999), with filter sizes of \(16\), \(32\), \(32\), \(32\), and \(32\) respectively. In between each layer is a 2-D Maxpool layer (Krizhevsky et al., 2017) of size (1,2). A batch normalization layer (Ioffe and Szegedy, 2015) is included between the final maxpool layer and the convolutional layer. This is followed by three dense layers of size \(32\), \(16\), and \(5\) respectively. All the activations used are ReLU (Fukushima, 1975) activations. The model was built using Tensorflow (Abadi et al., 2015) and Keras (Chollet et al., 2015). The decoder is similar but in reverse order. We once again have dense layers of size \(16\) and \(32\), whose output is then reshaped and fed into convolutional transpose (Baldi, 2012) layers which upscale the image back to the same dimensions as the input. Between the five convolutional layers, we have batch normalization and a maxpool size of (2,1). The model architecture is shown in Figure 2. ### Training and Validation The training scheme is standard. We fitted the model using the ADAM (Kingma and Ba, 2014) optimizer with a learning rate of \(1\times 10^{-4}\), utilizing an early-stopping routine with a patience of 10 epochs, training in batches of 16 samples for up to 100 epochs. We then visually evaluated how well the model reconstructed the original spectrogram. Figure 1 shows a few randomly drawn examples. The reconstruction achieves our desired level of accuracy in that, visually, the reconstructions appear similar to the inputs. We proceed with this model (Figure 3). ### Frequency Embedding Signal morphology varies with frequency, given that transmitters of different types occupy different regions of the band. For example, WiFi and Bluetooth signals are common in our spectra around \(2.4\,\mathrm{GHz}\). However, during our training phase and in the construction of our model, this frequency information is not preserved in the feature extractor. Should a user choose to search for signals that are also similar in frequency, this information would be lost. However, we can add frequency information to the feature vector. Initially, this appears trivial: one can simply extend the feature vector by a dimension and add the frequency information there. However, this would mean that frequency is a feature orthogonal to all the other extracted features. We know, however, that signal morphology is correlated to some extent with frequency. Here we borrow techniques from Natural Language Processing (specifically transformer architectures) to encode frequency information in the feature vectors using so-called Positional Embeddings (Vaswani et al., 2017). Positional embeddings work by taking encoded feature vectors and perturbing these vectors by small offsets based on their position, aiming for a balance between a meaningful adjustment and an over-adjustment that could lead to confusion with other signals. One can imagine this as trying to build miniature clusters in some high-dimensional feature space, where each element of a cluster is unique but the elements of one cluster do not intersect with other clusters. 
The adjustment vector is found using Equation 1, adapted from the paper "Attention Is All You Need" (Vaswani et al., 2017), \[P(k,i)=\begin{cases}\sin\left(\frac{k}{n^{i/d}}\right),&i\ \mathrm{even}\\ \cos\left(\frac{k}{n^{(i-1)/d}}\right),&i\ \mathrm{odd}\end{cases} \tag{1}\] The variable \(k\) is the index in position, so if we have a sequence of length \(L\), we pick an integer \(k\in[0,L]\). The index in the feature vector is denoted \(i\), and \(i\in[0,d]\) where \(d\) is the dimension of the embedding space. The variable \(n\) is a tunable hyperparameter, where Vaswani et al. choose \(n=10{,}000\). These equations ought to satisfy two conditions: 1. The adjustments are unique for a given frequency index \(k\). 2. The adjustments are bounded, in order to prevent the adjustment vector from over-adjusting. The functions are all bounded between \([-1,1]\), satisfying condition (ii). The sinusoidal functions are unique to each position, satisfying condition (i). We can measure similarity in positions using metrics such as cosine similarity or Euclidean distance. The embeddings are visualized in Figure 4. The choice of sequence length determines how many frequency chunks the band is broken into. Our choice of 1000 bins (giving \(\sim 1\,\mathrm{MHz}\) chunks) is a compromise between small chunks (which increase computational resource intensity) and large chunks (which reduce accuracy).
Figure 1: A random sample of training examples. A variety of RFI signals are seen in these snippets.
Figure 2: A representation of our autoencoder model. The encoder shows a progressive compression of data, forcing the network to decide the most important features of the original spectrogram.
Figure 3: Eight randomly drawn real observations (top row) and their corresponding autoencoder reconstructions (bottom row). The autoencoder can reconstruct the signals in the input data. We see that the reconstruction of the signal appears good, whereas the reconstruction of the noise is slightly poorer. This isn't a concern since the main focus should be on the signal.
### The Search Algorithm We now combine the components of our algorithm and use them to perform a search. The search has three elements. First, the feature extractor, using the encoder of the autoencoder, extracts features from the spectrogram. Second, we index and construct the frequency embedding for each of the extracted features and add those to the encoded vectors. We repeat this process with the SOI. Finally, we compute similarity scores between the SOI and the images in the search list. This similarity score is the cosine similarity metric and is computed efficiently by performing a matrix multiplication and renormalizing by the respective vector norms. The procedure is visualized in Figure 5. ## 3 Results We perform three sets of tests. First, we apply the classical SIFT + BoW algorithm to the problem discussed in Section 1.2. Second, we apply our algorithm to the problem but without the frequency embedding step. Lastly, we apply our algorithm with frequency embeddings to the problem and assess the results, in order to demonstrate the modularity of our design. To assess the ability of the algorithm to return similar signals to those used as input, we select 10 filtered Energy Detection snippets at random and perform a search using the same observation set as used for the SOI observations. Images of the spectrograms used can be found in our GitHub repository3. Footnote 3: [https://github.com/PetchMa/Reverse_Radio_Search](https://github.com/PetchMa/Reverse_Radio_Search)
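To make the search step concrete, here is a minimal numpy sketch of the scoring pipeline under our reading of Equation 1; the function names, array shapes, and the default embedding dimension are illustrative assumptions, not taken from the released code.

```python
import numpy as np

def frequency_embedding(k, d=4, n=10_000):
    """Adjustment vector of Equation 1 for frequency-chunk index k."""
    i = np.arange(d)
    angle = k / n ** (2 * (i // 2) / d)      # sin/cos pairs share one wavelength
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))

def reverse_search(query_vec, query_k, feats, ks):
    """Rank candidate snippets by cosine similarity to the query.

    query_vec : (d,)   encoder features of the signal of interest
    query_k   : int    frequency-chunk index of the query
    feats     : (N, d) encoder features of the searched snippets
    ks        : (N,)   frequency-chunk indices of those snippets
    """
    d = feats.shape[1]
    q = query_vec + frequency_embedding(query_k, d=d)
    F = feats + np.stack([frequency_embedding(k, d=d) for k in ks])
    scores = (F @ q) / (np.linalg.norm(F, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(scores)[::-1]          # best matching indices first
```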
In the first method, we use the classical SIFT + Bag of Words algorithm described in section 1.2. We run SIFT through each window of the spectrogram and generate a set of descriptor vectors for each image. Then we use K-Means clustering with 800 centroids to build a codebook. The codebook is built by taking the fitted centroids as the codewords and then looping back over the descriptor vectors and assigning each a codeword. Then, for each image, we created a histogram of codewords, hence treating it as a BoW model. Finally, we performed a match between the histogram of the SOI and the histograms of each candidate signal. The matches use the K-Nearest Neighbours approach (Cover and Hart, 1967). Results are shown in Figure 6. One downside to this approach is that the algorithm is very slow. For each observation, we need to generate millions of descriptor vectors and repeat the SIFT operation millions of times. Unlike in deep learning-based approaches, these feature vectors cannot be generated in parallel on the GPU using much more efficient methods. Next, we applied our algorithm _without_ using frequency embedding techniques (Figure 7). We see that this approach was able to retrieve more convincing candidates than the classical SIFT + BoW approach. When we investigate _where_ the candidates lie in frequency space, we see that many are close to the frequency of the true signal (Figure 7). Finally, we applied our algorithm _with_ frequency embedding (Figure 8). The resulting signals show even higher visual similarity to the input signal. Additionally, the frequency distribution of the matches is closer to the frequency of the input signal, showing that the frequency embedding was successful. ## 4 Discussion Our algorithm is more successful than classical approaches in returning similar candidates. The algorithm also runs approximately 20 times faster than our classical implementations at our data center, because our algorithm can take advantage of GPUs with deep learning hardware acceleration such as tensor cores. Traditional matching techniques suffer from an inability to generalize large-scale features, as discussed in Section 1.2. Furthermore, the addition of frequency embedding results in candidates that have a better match in frequency to the input signal, which is desirable in certain situations. In addition, it appears that SIFT-based algorithms tend to match local features and thus have a tendency to match noise (see Figure 6), unlike the deep learning based approaches, where the models were able to learn more global features, resulting in better matches overall. Looking forward, we believe this algorithm has a wide range of practical uses in transient radio astronomy. One particularly exciting use case is to help automate signal verification steps for candidate technosignature signals such as BLC1 (Sheikh et al., 2021). By automatically searching for potential lookalikes based on signal morphology, rather than relying on more low-level parameters of a signal, such as drift rate, signal width, or signal-to-noise ratio, we can do a better job of distinguishing between true anomalies and RFI. More generally, this technique can be used to build an RFI database. Many RFI databases consist simply of frequency ranges that are excluded (for example, the frequency ranges corresponding to GPS satellites). Our algorithm enables a finer-grained approach, comparing signal morphologies without necessitating the exclusion of large ranges in frequency, thus helping to make more efficient use of the spectrum and expanding the power of the search. Another use case is to perform template searches. For example, one can build theoretical models that simulate some desired signal, and search the recorded observations for signals that match. The next step in the development of our algorithm is to modify it to handle a wider range of data products. For example, developing a model to handle arbitrary _collections_ of spectrogram data (data from different receivers or even telescopes). Or perhaps building a model that can deal with cases where the spectral resolution varies (e.g. combining data products with 3 Hz resolution and 1 Hz resolution). We also plan to extend our approach to series (or cadences) of multiple observations, which intersperse scans of the target star with comparison scans of neighboring targets. Signals such as BLC1, which appear only in scans of the primary target, are consistent with being spatially localized on the sky. However, the dataset in which BLC1 was discovered also contained "lookalike" signals at other frequencies, indicating that they were likely due to particularly pernicious RFI. By applying our new methodology to cadences of data, we can much more easily locate lookalike signals at other frequencies, or in scans of other targets, providing an additional powerful means of screening technosignature candidates.
Figure 4: A visualization of the patterns in the embedding confirms that they are unique for each position we encode. We used a dimension of 512 and a sequence length of 100 for demonstration purposes; in the actual algorithm, we used a dimension of 4 and a sequence of 1000.
Figure 5: Visualization of the search process and the flow of data. We first extract features from the SOI and the set of possible candidates. Then we apply frequency embedding, and finally, we produce the similarity scores by matrix multiplication.
Figure 6: (a) The top 10 candidates with the closest similarity to the target [shown as the top left box], using the SIFT + BoW algorithm. The best candidate is shown in the top right box. Note: this method did not find itself because it uses a KNN matching algorithm that excludes the trivial case. (b) Frequency distribution of the top 10,000 most similar hits.
Figure 7: (a) The top 10 candidates with the closest similarity to the target [shown as the top left box] for our deep learning algorithm with no frequency embedding. The best match is, unsurprisingly, the input signal [top right box]. (b) Frequency distribution of the top 10,000 most similar hits.
## 5 Data Release and Code Availability All source code is released here and the data is publicly released here. ## Acknowledgements Breakthrough Listen is managed by the Breakthrough Initiatives, sponsored by the Breakthrough Prize Foundation4. We are grateful to the staff of the Green Bank Observatory for their help with the installation and commissioning of the Breakthrough Listen backend instrument and for extensive support during Breakthrough Listen observations. We thank Yuhong Chen for his helpful discussion on the Energy Detection Algorithm. Footnote 4: [http://www.breakthroughinitiatives.org](http://www.breakthroughinitiatives.org)
2304.11315
Unmatched uncertainty mitigation through neural network supported model predictive control
This paper presents a deep learning based model predictive control (MPC) algorithm for systems with unmatched and bounded state-action dependent uncertainties of unknown structure. We utilize a deep neural network (DNN) as an oracle in the underlying optimization problem of learning based MPC (LBMPC) to estimate unmatched uncertainties. Generally, non-parametric oracles such as DNNs are considered difficult to employ with LBMPC due to the technical difficulties associated with the estimation of their coefficients in real time. We employ a dual-timescale adaptation mechanism, where the weights of the last layer of the neural network are updated in real time while the inner layers are trained on a slower timescale using the training data collected online and selectively stored in a buffer. Our results are validated through a numerical experiment on the compression system model of a jet engine. These results indicate that the proposed approach is implementable in real time and carries the theoretical guarantees of LBMPC.
Mateus V. Gasparino, Prabhat K. Mishra, Girish Chowdhary
2023-04-22T04:49:48Z
http://arxiv.org/abs/2304.11315v1
# Unmatched uncertainty mitigation through neural network supported model predictive control ###### Abstract This paper presents a deep learning based model predictive control (MPC) algorithm for systems with unmatched and bounded state-action dependent uncertainties of unknown structure. We utilize a deep neural network (DNN) as an oracle in the underlying optimization problem of learning based MPC (LBMPC) to estimate unmatched uncertainties. Generally, non-parametric oracles such as DNNs are considered difficult to employ with LBMPC due to the technical difficulties associated with the estimation of their coefficients in real time. We employ a dual-timescale adaptation mechanism, where the weights of the last layer of the neural network are updated in real time while the inner layers are trained on a slower timescale using the training data collected online and selectively stored in a buffer. Our results are validated through a numerical experiment on the compression system model of a jet engine. These results indicate that the proposed approach is implementable in real time and carries the theoretical guarantees of LBMPC. Keywords: safety critical systems, deep learning, learning based MPC ## I Introduction Machine learning and Model Predictive Control (MPC) complement each other by compensating for each other's drawbacks, making their combination useful for safety critical applications. We refer readers to [1, 2] for excellent surveys on safe learning and robotics. Learning based Model Predictive Control (LBMPC) [3] became popular due to its improvement over linear MPC in terms of transient response and overshoot, at a slight expense in processing time [4]. Several interesting applications such as autonomous driving [5, 6], heating, ventilation and air-conditioning systems [7], quad-copters [4], formation control [8], atmospheric pressure plasma jets [9], air-borne wind energy systems [10], etc., have contributed to theoretical and practical advancements. Due to the significant success of and progress in deep learning techniques, it is tempting to use a neural network as an oracle in LBMPC. However, boundedness and differentiability of the oracle are required for the results of [3] to hold. Even though a bounded and differentiable neural network can be constructed, the estimation of its weights in real time is generally difficult [11]. Since the training of a DNN is a time-demanding process, real-time implementation of DNN supported LBMPC is challenging. In this article, we demonstrate that the real-time implementation of a neural network based oracle is not only possible but also computationally efficient via two-timescale training. In [12, 13], the training of the output layer and the hidden layers is separated: the output layer is trained online through adaptation, by considering the recently trained hidden layers as a feature basis function, while the hidden layers are trained on a parallel machine by keeping the recently updated weights of the output layer fixed. We refer readers to excellent surveys on transfer learning [14, 15]. This method of two-timescale training allows us to try different methods of training the output layer and the hidden layers independently. In addition, different architectures such as Recurrent Neural Networks and Convolutional Neural Networks are also supported for the hidden layers. The theoretical guarantees mostly depend on the last activation layer and the training mechanism of the linear output layer. 
The above methods [12, 13] are limited only to those uncertainties that enter into the dynamics through the control channel. The present article extends the results of [12, 13] by generalizing the class of uncertainties. Our approach improves the performance of [3] by replacing the L2NW estimator by a DNN. Since PyTorch is popular for DNN implementation and CasADi for MPC, we also address some non-trivial issues related to their interface. This article is organized as follows. The problem statement is given in §II. The architecture of the neural network and its training mechanism are explained in §III. LBMPC and its properties are presented in §IV and §V, respectively. We validate our results in §VI and conclude in §VII.
Fig. 1: The neural network on the main loop with fixed hidden layers is connected with MPC and transmits the weights of the output layer at the time sequence \((T_{k})_{k\in\mathds{Z}_{+}}\) to the neural network on the training thread. This neural network (on the training thread) transmits the weights of the hidden layers to the neural network (on the main loop) at time instants \((t_{j})_{j\in\mathds{Z}_{+}}\) after its \(j^{\text{th}}\) training.
We let \(\mathds{R}\) denote the set of real numbers, \(\mathds{N}\) the set of non-negative integers and \(\mathds{Z}_{+}\) the set of positive integers. For a given vector \(v\) and positive (semi-)definite matrix \(M\succeq 0\), \(\left\|v\right\|_{M}^{2}\) is used to denote \(v^{\top}Mv\). For a given matrix \(A\), the trace, the largest eigenvalue and the pseudo-inverse are denoted by \(\operatorname{tr}(A)\), \(\lambda_{\max}(A)\) and \(A^{\dagger}\), respectively. By the notation \(\left\|A\right\|\) and \(\left\|A\right\|_{\infty}\), we mean the \(2\)-norm and \(\infty\)-norm when \(A\) is a vector, and the induced \(2\)-norm and \(\infty\)-norm when \(A\) is a matrix, respectively. A vector or a matrix with all entries \(0\) is represented by \(\mathbf{0}\), and \(I\) is the identity matrix of appropriate dimensions. We let \(M^{(i)}\) denote the \(i^{\text{th}}\) column of a given matrix \(M\). ## II Problem setup Let us consider a discrete time dynamical system \[x_{t+1}=Ax_{t}+Bu_{t}+h(x_{t},u_{t});\quad t\in\mathds{N}, \tag{1}\] where (1-a) \(x_{t}\in\mathcal{X}\subset\mathds{R}^{d}\), \(u_{t}\in\mathds{U}\subset\mathds{R}^{m}\); \(d,m\in\mathds{Z}_{+}\), (1-b) \(h:\mathcal{X}\times\mathds{U}\rightarrow\mathds{R}^{d}\) is a continuous and (possibly) non-linear function, which represents state-action dependent unmatched uncertainty (or modeling error), (1-c) \(h(x,u)\in\mathds{W}\) for every \(x\in\mathcal{X}\) and \(u\in\mathds{U}\), (1-d) \(\mathcal{X},\mathds{U}\) and \(\mathds{W}\) are polytopes, (1-e) the matrix pair \((A,B)\) is stabilizable. The matrices \(A\) and \(B\) represent our domain knowledge or prior knowledge about the system dynamics, and the continuous function \(h\) represents the unknown component of the system dynamics1. LBMPC [3] has been developed to improve the closed-loop performance of tube-based robust MPC by modifying the cost function in the underlying optimization problem. The cost function is modified by learning (or estimating) the unknown function \(h\) with the help of data. In this article, we address issues associated with the use of a DNN as an estimator of \(h\). Our main focus is to use the DNN in such a way that it can be implemented in real time on hardware with limited computational power. 
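As a toy illustration of the setup, the Python snippet below simulates the dynamics (1) under plain state feedback; the matrices and the placeholder nonlinearity `h` are invented for illustration only and play no role in the paper's development.

```python
import numpy as np

# Toy instance of the dynamics (1). A and B encode the known linear model;
# h is a placeholder bounded nonlinearity standing in for the unknown term.
A = np.array([[1.0, 0.1],
              [0.0, 0.9]])
B = np.array([[0.0],
              [0.1]])
K = np.array([[-0.5, -0.2]])            # any stabilizing feedback gain

def h(x, u):
    # bounded, state-action dependent "unmatched uncertainty" (illustrative)
    return 0.05 * np.tanh(x + (B @ u).ravel())

x = np.array([1.0, 0.0])
for t in range(50):
    u = K @ x                            # plain state feedback, no MPC yet
    x = A @ x + (B @ u).ravel() + h(x, u)
print(x)  # the closed loop settles near the origin despite the model error
```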
The general problem description of this article is as follows: Footnote 1: For example, when a non-linear dynamics is linearized by the matrix pair \((A,B)\), \(h\) represents the linearization error. **Problem Statement 1.** Present a stabilizing, robust and real-time implementable control framework for (1) which respects the physical constraints (1-a), optimizes a given performance index, and reduces the effect of unmatched uncertainties by using a trainable DNN. ## III Deep neural network Any continuous function \(h\) on a compact set \(\mathcal{X}\times\mathds{U}\) can be approximated by a multi-layer network with a number of layers \(L\geq 2\) such that \[h(x,u)=W_{L}^{\top}\psi_{L}\left[W_{L-1}^{\top}\psi_{L-1}\left[\cdots[\psi_{ 1}(x,u)]\right]\right]+\varepsilon^{*}(x,u), \tag{2}\] where \(x\in\mathcal{X}\), \(u\in\mathds{U}\), and \(\psi_{i},W_{i}\), for \(i=1,\ldots,L\), are the activation functions in the \(i^{\text{th}}\) layer and the corresponding ideal weights, respectively [16, §7.1]. We can represent \(h(x_{t},u_{t})\) with the help of a neural network to a desired accuracy. Let us define \(\phi^{*}(x,u)\coloneqq\psi_{L}\left[W_{L-1}^{\top}\psi_{L-1}\left[\cdots[\psi_ {1}(x,u)]\right]\right]\) and \(W^{*}\coloneqq W_{L}\); then \[h(x_{t},u_{t})=W^{*\top}\phi^{*}(x_{t},u_{t})+\varepsilon^{*}(x_{t},u_{t}), \tag{3}\] where \(W^{*}\in\mathds{R}^{(n_{L}+1)\times d}\) denotes the weights of the output layer. There are \(n_{L}\) neurons in the last hidden layer. The first row of \(W^{*}\) represents the bias term in the output layer, and the first element of \(\phi^{*}\in\mathds{R}^{n_{L}+1}\) is \(1\). The major challenge is associated with the real-time implementation of the DNN, because its training takes much more time than the sampling interval of fast hardware like a quad-copter. Therefore, we address this issue with the help of two DNNs, as shown in Fig. 1. Both DNNs have the same architecture, shown in Fig. 2. The DNN in the main loop is located on the main machine and the other DNN is located on some secondary (or remote) machine. We update the weights of the output layer on the main machine in real time at each time instant with the help of a weight update law, while keeping the weights of the hidden layers fixed. The hidden layers are trained on a secondary machine by using the approach of [17], in which the weights of the output layer are copied from the main machine at the start of the training and remain fixed during the training. Once the training of the DNN on the secondary machine is complete, the new weights of the hidden layers are updated on the main machine and remain fixed until a new set of weights is again obtained from the secondary machine. Training details for the output layer and the hidden layers are provided in §III-A and §III-B, respectively. Let \((t_{j})_{j\in\mathds{Z}_{+}}\) represent an increasing sequence of time instants at which the weights of the hidden layers are updated on the main machine. Therefore, we get the following expression: \[h(x_{t},u_{t})=W^{*\top}\phi_{j}(x_{t},u_{t})+\varepsilon_{j}(x_{t},u_{t}), \tag{4}\] where \(\varepsilon_{j}(x_{t},u_{t})=W^{*\top}\left(\phi^{*}(x_{t},u_{t})-\phi_{j}(x_{t },u_{t})\right)+\varepsilon^{*}(x_{t},u_{t})\). We can assume that \(\left\|\phi_{j}(x,u)\right\|\) will be bounded for each \(j\), \(u\in\mathds{R}^{m}\) and \(x\in\mathds{R}^{d}\), due to the presence of the bounded activation layer in Fig. 2, consisting of bounded neurons, i.e. sigmoidal, tanh, etc.
Fig. 2: The neural network architecture can use an arbitrary number of layers and neurons. The input layer has the same number of neurons as the experience data. The output layer has as many neurons as system states. The hidden layers can have any architecture provided suitable adjustments are made to facilitate their training.
Since \(h\) is bounded due to (1-c), we can assume that the ideal weights \(W^{*}\) in the output layer are also bounded. We make the following assumption: **Assumption 1**.: There exist \(\bar{W}_{i}>0\) for \(i=1,\ldots,d\), and \(\sigma,\bar{\varepsilon}>0\) such that \(\left\|W^{*(i)}\right\|\leqslant\bar{W}_{i}\) for \(i=1,\ldots,d\), and \(\left\|\phi_{j}(x,u)\right\|\leqslant\sigma\) for every \(x\in\mathcal{X}\), \(u\in\mathds{U}\) and \(j\in\mathds{N}\). The above assumption is standard in the literature [19, 12, 18]. If the neural network is not minimal, then the ideal weights may not be unique. However, for the neural-adaptive controller design, only the existence of ideal weights is assumed, which is always guaranteed when \(h\) is a continuous function on a compact set [16, §7.1]. A priori knowledge about the bounds on the ideal weights \(W^{*}\) of the output layer is useful to avoid the parameter drift phenomenon. At \(t_{0}=0\), the weights of the DNNs on both machines are randomly initialized with the desired bound on the output layer weights as per Assumption 1. Therefore, for \(j\in\mathds{N}\), we have \[\hat{h}_{t}(x_{t},u_{t})\coloneqq K_{t}^{\top}\phi_{j}(x_{t},u_{t})\text{ for }t\in\{t_{j},t_{j}+1,\ldots,t_{j+1}-1\}. \tag{5}\] We update \(K_{t}\) in an unsupervised manner on the main machine while collecting the training data for the DNN on the secondary machine. In the next subsections we provide the relevant details of the training of the DNN. ### _Adaptive learning of \(W^{*}\) on the main machine_ For \(t\in\{t_{j},t_{j}+1,\ldots,t_{j+1}-1\}\), the output of the DNN is given by (5) and the bounded features are given by \(\phi_{j}(x_{t},u_{t})\) at time \(t+1\)2. We get the estimated state Footnote 2: It is important to note for implementation that \(u_{t}\) is computed at time \(t\), but we need to use \(\phi_{j}(x_{t},u_{t})\) and \(\hat{h}_{t}(x_{t},u_{t})\) at time \(t+1\). \[\hat{x}_{t+1}=Ax_{t}+Bu_{t}+\hat{h}_{t}(x_{t},u_{t}). \tag{6}\] We can compute the error \(\tilde{x}_{t+1}=\hat{x}_{t+1}-x_{t+1}\). For the online training of the output layer, we employ the projection based robust weight update law, and refer readers to [20, Chapter 10] for other methods. For a given learning rate \(0<\gamma<1\), we first modify \(K_{t}\) by taking \(\tilde{x}_{t+1}\) into account to get \(\bar{K}_{t+1}\), and then project \(\bar{K}_{t+1}\) to ensure its boundedness, as follows: \[\begin{split}\bar{K}_{t+1}&=K_{t}-\gamma\frac{\phi_{j}(x_{t},u _{t})}{\left\|\phi_{j}(x_{t},u_{t})\right\|^{2}}\tilde{x}_{t+1}^{\top},\\ K_{t+1}^{(i)}&=\text{Proj}\,\bar{K}_{t+1}^{(i)}=\begin{cases}\bar{K}_{t+1}^{(i)}& \text{if }\left\|\bar{K}_{t+1}^{(i)}\right\|\leqslant\bar{W}_{i}\\ \frac{\bar{W}_{i}}{\left\|\bar{K}_{t+1}^{(i)}\right\|}\bar{K}_{t+1}^{(i)}& \text{otherwise}.\end{cases}\end{split} \tag{7}\] The implications of (7) are discussed in §V. ### _Supervised learning of \(\phi^{*}\) on a secondary machine_ In this section, the training data and the loss function required for the supervised training of the DNN on the secondary machine are explained. For a given state-action pair \((x_{t},u_{t})\) as input at time \(t\), the label \(h(x_{t},u_{t})\) is computed at time \(t+1\) by the relation: \[h(x_{t},u_{t})=x_{t+1}-Ax_{t}-Bu_{t}. \tag{8}\]
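For illustration, the update law (7) and the label computation (8) translate directly into code; the numpy sketch below uses our own naming, with a plain FIFO buffer standing in for the richer replay-buffer designs cited next.

```python
import numpy as np

def adapt_output_layer(K, phi, x_err, gamma, w_bar):
    """Projection-based update law (7) for the output-layer weights.

    K     : (n_L + 1, d) current output-layer weights K_t
    phi   : (n_L + 1,)   features phi_j(x_t, u_t) (first entry is 1)
    x_err : (d,)         one-step prediction error x_hat_{t+1} - x_{t+1}
    gamma : learning rate in (0, 1)
    w_bar : (d,) column-wise norm bounds W_bar_i from Assumption 1
    """
    K_bar = K - gamma * np.outer(phi, x_err) / np.dot(phi, phi)
    norms = np.linalg.norm(K_bar, axis=0)
    scale = np.minimum(1.0, w_bar / np.maximum(norms, 1e-12))
    return K_bar * scale                   # column-wise projection onto the bound

def label(x_next, x, u, A, B):
    """Training label h(x_t, u_t) computed via Eq. (8)."""
    return x_next - A @ x - B @ u

buffer = []                                # simple FIFO replay buffer
def store(sample, p_max=10_000):
    buffer.append(sample)
    if len(buffer) > p_max:
        buffer.pop(0)                      # replace the oldest data when full
```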
### _Supervised learning of \(\phi^{*}\) on a secondary machine_

In this section, the training data and the loss function required for the supervised training of the DNN on the secondary machine are explained. For a given state-action pair \((x_{t},u_{t})\) as input at time \(t\), the label \(h(x_{t},u_{t})\) is computed at time \(t+1\) by the relation:

\[h(x_{t},u_{t})=x_{t+1}-Ax_{t}-Bu_{t}. \tag{8}\]

These pairs \((x,u)\) and the corresponding labels \(h(x,u)\) are stored in a training buffer until the buffer is full. In particular, a full buffer consists of \(\left((x^{i},u^{i}),h(x^{i},u^{i})\right)\) for \(i=1,\ldots,p_{\max}\). Once the buffer is full, new data is stored by replacing some old data. Several methods have been proposed in the literature to increase the richness of the training buffer; see [21, 22, 23] and the references therein. We refer readers to [24] for different methods of designing the replay buffer. Let \((T_{k})_{k\in\mathds{Z}_{+}}\) be an increasing time sequence. At time \(T_{k}\), the weights of the output layer of the primary neural network are copied into the secondary neural network. During the training, the output-layer weights in the secondary network remain fixed. Therefore, we are interested in finding the weights \(W_{1:L-1}\coloneqq W_{1},\ldots,W_{L-1}\) which minimize the following cost for a given input \((x,u)\) and label \(h(x,u)\):

\[\ell\left((x,u),W_{1:L-1}\right)\coloneqq\left\|h(x,u)-K_{T_{k}}^{\top}\psi_{L}\left[W_{L-1}^{\top}\psi_{L-1}\left[\cdots\left[\psi_{1}(x,u)\right]\right]\right]\right\|^{2}.\]

Let \(M\) represent the number of training samples and \(\mathcal{D}_{k}\coloneqq\left((x^{i},u^{i}),h(x^{i},u^{i})\right)_{i=1}^{M}\) be the training data consisting of \(M\) data points randomly sampled from the buffer for the \(k^{\text{th}}\) training. The following loss function is considered for the training of the DNN:

\[\mathcal{L}(\mathcal{D}_{k},W_{1:L-1})=\frac{1}{M}\sum_{i=1}^{M}\ell\left((x^{i},u^{i}),W_{1:L-1}\right).\]

## IV Model predictive controller

We first fix an optimization horizon \(N\in\mathds{Z}_{+}\). Let \((x^{r},u^{r})\) be a reference (or equilibrium) state-action pair. For given positive definite matrices \(Q,R>0\), \(P>0\) is the solution of the following Lyapunov equation:

\[(A+BK)^{\top}P(A+BK)-P=-(Q+K^{\top}RK), \tag{9}\]

where \(K\) is such that \(A+BK\) is Schur stable. We define the following cost:

\[\psi\left(z_{0:N+1},v_{0:N}\right)\coloneqq\left\|z_{N}-x^{r}\right\|_{P}^{2}+\sum_{i=0}^{N-1}\left\|z_{i}-x^{r}\right\|_{Q}^{2}+\left\|v_{i}-u^{r}\right\|_{R}^{2}.\]

Let us define \(R_{i+1}=(A+BK)R_{i}\oplus\mathds{W}\) with \(R_{0}=\{0\}\). For \(i=0,\ldots,N-1\), we impose the following constraints:

\[\begin{split}\bar{z}_{i+1}&=A\bar{z}_{i}+Bv_{i},\\ \bar{z}_{i}&\in\mathcal{X}\ominus R_{i},\quad v_{i}\in\mathds{U}\ominus KR_{i},\\ \bar{z}_{N}&\in\Omega\ominus R_{N},\end{split} \tag{10}\]

where \(\Omega\) is a disturbance invariant set. At each time \(t\), we measure the state \(x_{t}\) of the system (1) and solve the following optimization problem:

\[\begin{split}\min_{(c_{i})_{i=0}^{N-1}}\quad&\psi\left(z_{0:N+1},v_{0:N}\right)\\ \text{s.\,t.}\quad& z_{0}=\bar{z}_{0}=x_{t},\\ & v_{i}=K\bar{z}_{i}+c_{i}\quad\text{for }i\in\mathds{Z}_{[0,N-1]},\\ & z_{i+1}=Az_{i}+Bv_{i}+\hat{h}_{t}(z_{i},v_{i})\quad\text{for }i\in\mathds{Z}_{[0,N-1]},\\ &\text{and the constraints (10)}.\end{split} \tag{11}\]

By solving the above problem, we get \(v_{0}=Kx_{t}+c_{0}\). We set \(u_{t}=v_{0}\) and apply \(u_{t}\) to the system (1).
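The terminal cost in (9) is a discrete Lyapunov equation and can be solved directly. The sketch below, assuming generic NumPy arrays for \(A\), \(B\), \(K\), \(Q\), \(R\), also evaluates the cost \(\psi\); it only illustrates these two equations and is not the paper's CasADi implementation.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def terminal_cost(A, B, K, Q, R):
    """Solve (A+BK)^T P (A+BK) - P = -(Q + K^T R K) for P, cf. (9).

    scipy's solve_discrete_lyapunov(a, q) solves a X a^T - X = -q,
    so we pass a = (A+BK)^T.  K must render A + BK Schur stable.
    """
    Acl = A + B @ K
    return solve_discrete_lyapunov(Acl.T, Q + K.T @ R @ K)

def psi(z, v, x_ref, u_ref, P, Q, R):
    """Cost psi(z_{0:N+1}, v_{0:N}): terminal term plus running penalties."""
    cost = (z[-1] - x_ref) @ P @ (z[-1] - x_ref)
    for zi, vi in zip(z[:-1], v):
        cost += (zi - x_ref) @ Q @ (zi - x_ref) + (vi - u_ref) @ R @ (vi - u_ref)
    return cost
```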
## V Properties of LBMPC

For the purpose of analysis, we define \(\tilde{K}_{t}\coloneqq K_{t}-W^{*}\), \(\tilde{x}_{t+1}\coloneqq\hat{x}_{t+1}-x_{t+1}=\hat{h}_{t}(x_{t},u_{t})-h(x_{t},u_{t})=\tilde{K}_{t}^{\top}\phi_{j}(x_{t},u_{t})-\varepsilon_{j}(x_{t},u_{t})\), and \(\bar{W}\coloneqq\sum_{i=1}^{d}\bar{W}_{i}^{2}\). We recall the following definition:

**Definition 1** ([25], page 117).: The vector sequence \((s_{t})_{t\in\mathds{N}}\) is called \(\mu\)-small in the mean square sense if it satisfies \(\sum_{t=k}^{k+N-1}\|s_{t}\|^{2}\leqslant Nc_{0}\mu+c_{0}^{\prime}\) for all \(k\in\mathds{Z}_{+}\), a given constant \(\mu\geqslant 0\) and some \(N\in\mathds{Z}_{+}\), where \(c_{0},c_{0}^{\prime}\geqslant 0\).

We make the following assumption:

**Assumption 2**.: There exists \(\bar{\varepsilon}>0\) such that

\[\left\|\varepsilon_{j}(x,u)\right\|\leqslant\bar{\varepsilon}\ \text{ for each }(x,u)\in\mathcal{X}\times\mathds{U}\ \text{and}\ j\in\mathds{N}.\]

Since \(h\) is bounded due to (1-c), \(W^{*}\) and \(\phi_{j}\) due to Assumption 1, and \(K_{t}\) due to (7), the above assumption is trivially satisfied. We have the following result, which says that the estimation error \(\tilde{x}_{t+1}=\hat{h}_{t}(x_{t},u_{t})-h(x_{t},u_{t})\) is small in the mean square sense.

**Lemma 1**.: _Consider the dynamical system (1) and the weight update law (7), and let Assumptions 1 and 2 hold. Define \(V_{a}(K_{t})\coloneqq\frac{1}{\gamma}\operatorname{tr}(\tilde{K}_{t}^{\top}\tilde{K}_{t})\). Then for all \(t\),_

1. \(V_{a}(K_{t})\leqslant\frac{4}{\gamma}\bar{W}\);
2. \(V_{a}(K_{t+1})-V_{a}(K_{t})\leqslant-\frac{1-\gamma}{\sigma^{2}}\left\|\tilde{x}_{t+1}\right\|^{2}+\left\|\varepsilon_{j}(x_{t},u_{t})\right\|^{2}\);
3. \(\tilde{x}_{t}\) _is_ \(\bar{\varepsilon}^{2}\)_-small in the mean square sense with_ \(c_{0}=\frac{\sigma^{2}}{1-\gamma}\) _and_ \(c_{0}^{\prime}=\frac{4c_{0}}{\gamma}\bar{W}\), _as per Definition_ 1_._

Proof.: 1. Since \(V_{a}(K_{t})=\frac{1}{\gamma}\operatorname{tr}(\tilde{K}_{t}^{\top}\tilde{K}_{t})=\frac{1}{\gamma}\sum_{i=1}^{d}\left\|K_{t}^{(i)}-W^{*(i)}\right\|^{2}\leqslant\frac{2}{\gamma}\sum_{i=1}^{d}\left(\left\|K_{t}^{(i)}\right\|^{2}+\left\|W^{*(i)}\right\|^{2}\right)\leqslant\frac{4}{\gamma}\sum_{i=1}^{d}\bar{W}_{i}^{2}=\frac{4}{\gamma}\bar{W}\).

2. By substituting \(\tilde{K}_{t+1}=(\bar{K}_{t+1}-W^{*})+(K_{t+1}-\bar{K}_{t+1})\) in \(V_{a}(K_{t+1})\) and defining

\[\begin{split}\alpha_{t}&\coloneqq(K_{t+1}-\bar{K}_{t+1})^{\top}(K_{t+1}-\bar{K}_{t+1})+2(K_{t+1}-\bar{K}_{t+1})^{\top}(\bar{K}_{t+1}-W^{*})\\ &=-(K_{t+1}-\bar{K}_{t+1})^{\top}(K_{t+1}-\bar{K}_{t+1})+2(K_{t+1}-\bar{K}_{t+1})^{\top}(K_{t+1}-W^{*}),\end{split}\]

we get

\[V_{a}(K_{t+1})=\frac{1}{\gamma}\operatorname{tr}(\tilde{K}_{t+1}^{\top}\tilde{K}_{t+1})=\frac{1}{\gamma}\operatorname{tr}\left((\bar{K}_{t+1}-W^{*})^{\top}(\bar{K}_{t+1}-W^{*})\right)+\frac{1}{\gamma}\operatorname{tr}\left(\alpha_{t}\right).\]

One important property of the projection is the following [25, (4.61)]:

\[(W^{*(i)}-K_{t+1}^{(i)})^{\top}(\bar{K}_{t+1}^{(i)}-K_{t+1}^{(i)})\leqslant 0\ \text{ for each }i=1,\ldots,d. \tag{12}\]

Since \((K_{t+1}^{(i)}-\bar{K}_{t+1}^{(i)})^{\top}(K_{t+1}^{(i)}-W^{*(i)})\leqslant 0\) due to (12), we can ensure \(\operatorname{tr}(\alpha_{t})\leqslant 0\).
Therefore,

\[\begin{split} V_{a}(K_{t+1})&\leqslant\frac{1}{\gamma}\operatorname{tr}\left((\bar{K}_{t+1}-W^{*})^{\top}(\bar{K}_{t+1}-W^{*})\right)\\ &=V_{a}(K_{t})+\frac{\gamma}{\left\|\phi_{j}(x_{t},u_{t})\right\|^{2}}\operatorname{tr}\left(\tilde{x}_{t+1}\tilde{x}_{t+1}^{\top}\right)-\frac{1}{\left\|\phi_{j}(x_{t},u_{t})\right\|^{2}}\operatorname{tr}\left(\tilde{K}_{t}^{\top}\phi_{j}(x_{t},u_{t})\tilde{x}_{t+1}^{\top}+\tilde{x}_{t+1}\phi_{j}(x_{t},u_{t})^{\top}\tilde{K}_{t}\right).\end{split}\]

By substituting \(\tilde{K}_{t}^{\top}\phi_{j}(x_{t},u_{t})=\tilde{x}_{t+1}+\varepsilon_{j}(x_{t},u_{t})\) in the above inequality, we get

\[\begin{split} V_{a}(K_{t+1})&\leqslant V_{a}(K_{t})+\frac{1}{\left\|\phi_{j}(x_{t},u_{t})\right\|^{2}}\left(\gamma\left\|\tilde{x}_{t+1}\right\|^{2}-2\operatorname{tr}\left((\tilde{x}_{t+1}+\varepsilon_{j}(x_{t},u_{t}))\tilde{x}_{t+1}^{\top}\right)\right)\\ &=V_{a}(K_{t})+\frac{1}{\left\|\phi_{j}(x_{t},u_{t})\right\|^{2}}\left((\gamma-2)\left\|\tilde{x}_{t+1}\right\|^{2}-2\tilde{x}_{t+1}^{\top}\varepsilon_{j}(x_{t},u_{t})\right)\\ &\leqslant V_{a}(K_{t})+\frac{1}{\left\|\phi_{j}(x_{t},u_{t})\right\|^{2}}\left((\gamma-1)\left\|\tilde{x}_{t+1}\right\|^{2}+\left\|\varepsilon_{j}(x_{t},u_{t})\right\|^{2}\right)\\ &\leqslant V_{a}(K_{t})-\frac{1-\gamma}{\sigma^{2}}\left\|\tilde{x}_{t+1}\right\|^{2}+\left\|\varepsilon_{j}(x_{t},u_{t})\right\|^{2},\end{split}\]

where the last inequality is due to \(1\leqslant\left\|\phi_{j}(x_{t},u_{t})\right\|^{2}\leqslant\sigma^{2}\). Therefore,

\[V_{a}(K_{t+1})-V_{a}(K_{t})\leqslant-\frac{1-\gamma}{\sigma^{2}}\left\|\tilde{x}_{t+1}\right\|^{2}+\left\|\varepsilon_{j}(x_{t},u_{t})\right\|^{2}.\]

3. Consider Lemma 1-(ii) to get

\[\frac{1-\gamma}{\sigma^{2}}\left\|\tilde{x}_{t+1}\right\|^{2}\leqslant-V_{a}(K_{t+1})+V_{a}(K_{t})+\left\|\varepsilon_{j}(x_{t},u_{t})\right\|^{2}\leqslant-V_{a}(K_{t+1})+V_{a}(K_{t})+\bar{\varepsilon}^{2}.\]

By summing both sides from \(t=k\) to \(k+N-1\), we get

\[\frac{1-\gamma}{\sigma^{2}}\sum_{t=k}^{k+N-1}\|\tilde{x}_{t+1}\|^{2}\leqslant V_{a}(K_{k})+N\bar{\varepsilon}^{2}\leqslant\frac{4}{\gamma}\bar{W}+N\bar{\varepsilon}^{2}.\]

Therefore, \(\tilde{x}_{t}\) is \(\bar{\varepsilon}^{2}\)-small in the mean square sense with \(c_{0}=\frac{\sigma^{2}}{1-\gamma}\) and \(c_{0}^{\prime}=\frac{4c_{0}}{\gamma}\bar{W}\), as per Definition 1. ∎

The second result is a direct implication of [27, Lemma 3.5]. The recursive feasibility of (11) is due to [3, Theorem 1]. In particular, if \((c_{0},c_{1},\ldots,c_{N-1})\) is an optimizer of (11) at some time \(t\), then \((c_{1},\ldots,c_{N-1},0)\) is a feasible solution at time \(t+1\).

## VI Numerical experiment

We consider the compression system of a jet engine, which exhibits instabilities due to rotating stall and surge. The Moore-Greitzer compressor model is given by the following nonlinear dynamics:

\[\begin{split}\dot{z}&=-y+z_{c}+1+\frac{3}{2}z-\frac{1}{2}z^{3},\\ \dot{y}&=\frac{1}{\beta^{2}}\left(z+1-r\sqrt{y}\right),\end{split} \tag{13}\]

where \(z\) is the mass flow, \(y\) is the pressure rise, \(\beta>0\) is a constant, and \(r\) is the throttle opening. Similar to [3], we assume that \(r\) is controlled by a second-order actuator with transfer function \(r(s)=\frac{\omega_{n}^{2}}{s^{2}+2\zeta\omega_{n}s+\omega_{n}^{2}}u(s)\), where \(u\) is the input. Our simulation parameters are \(\beta=1\), \(z_{c}=0\), \(\zeta=\frac{1}{\sqrt{2}}\), \(\omega_{n}=10\sqrt{10}\).
We have the following constraints:

\[\begin{split} z&\in[0,1],\qquad y\in[1.1875,2.1875],\\ r&\in[0.1547,2.1547],\qquad\dot{r}\in[-20,20],\\ u&\in[0.1547,2.1547].\end{split} \tag{14}\]

The state of the system is \(x=\begin{bmatrix}z&y&r&\dot{r}\end{bmatrix}^{\top}\in\mathds{R}^{4}\). The nonlinear model (13) is linearized around the equilibrium point \(x_{e}=\begin{bmatrix}0.5&1.6857&1.1547&0\end{bmatrix}^{\top}\) and discretized with sampling time \(T=0.05\) seconds. We chose \(T=0.05\) to make it slightly larger than the solver time of one optimization problem of the linear MPC, for its online implementation. We use a learning rate of \(0.001\) to train the deep neural network in parallel. We employed CasADi for the symbolic representation of the underlying optimization problem of the MPC (11). This symbolic representation is done offline and is therefore helpful for the online implementation of the MPC, since only the measured state needs to be updated. Since PyTorch has in-built tools for the training of neural networks, we use it along with CasADi: we design the neural network on the main machine with the help of CasADi, and that on the secondary machine with PyTorch. Another difficulty is associated with the computation of the terminal set \(\Omega\) and the reachable sets \(R_{i}\). The computation of \(R_{i}\) is very hard for large \(i\). MATLAB-based tools like MPT are not available in Python for polytopic manipulation, and the Python library pytope has very limited capability and is under development. Therefore, we followed the approach of [28]. Let the sets \(\mathcal{X}\ominus\mathds{W}\), \(\mathcal{X}\) and \(\mathds{U}\) be given by

\[\mathcal{X}\ominus\mathds{W}\coloneqq\{x\in\mathds{R}^{d}\mid F_{p}x\leqslant h_{p}\},\qquad\mathcal{X}\coloneqq\{x\in\mathds{R}^{d}\mid F_{x}x\leqslant h_{x}\},\qquad\mathds{U}\coloneqq\{u\in\mathds{R}^{m}\mid F_{u}u\leqslant h_{u}\},\]

where \(F_{p},F_{x},F_{u}\) are suitable matrices and \(h_{p},h_{x},h_{u}\) are suitable vectors that provide the half-space representations of the above sets. The sets \(\Omega\) and \(R_{i}\) are approximated as in [28]. We are interested in comparing our proposed approach with the state of the art [3], which employs L2NW as an estimator. We implemented L2NW with the help of CasADi in terms of symbolic variables. One of the inputs for L2NW is a data buffer of fixed size, used as function parameters to learn the uncertainties. In an online implementation, this data buffer is empty at the beginning and its size increases with time. Therefore, we initialize a fixed-size buffer in such a way that the default values do not affect the outcome of the L2NW estimator. Our simulation parameters are the same as in [3], except for the sampling time \(T\). Figures 3 and 4 demonstrate that the proposed approach has a faster transient response than linear MPC and a smaller overshoot than [3]. Figure 5 shows that the proposed approach and linear MPC have comparable solver times, whereas the solver time of [3] is larger than the sampling time at the beginning. The only drawback of [3] is the use of the L2NW estimator, which makes the underlying optimization problem of LBMPC computationally demanding and therefore hard to implement on small machines. Since the approach of [3] allows any estimator, the authors used linear estimators in [4, 28] instead of the L2NW estimator. Our approach of using a DNN as the estimator is not only computationally less demanding due to the parallel processing, but the underlying optimization problems also have solver times similar to linear MPC (Fig. 5).
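As a hedged illustration of this simulation setup, the sketch below implements the Moore-Greitzer dynamics (13) with the second-order throttle actuation written in state-space form, and obtains a discrete-time linearization by numerical differentiation and forward-Euler discretization. The paper does not state its discretization method, so the Euler step and the numerical Jacobians are assumptions.

```python
import numpy as np

beta, zc, zeta, wn = 1.0, 0.0, 1 / np.sqrt(2), 10 * np.sqrt(10)
T = 0.05  # sampling time [s]

def f(x, u):
    """Moore-Greitzer dynamics (13) plus second-order throttle actuation."""
    z, y, r, rdot = x
    dz = -y + zc + 1 + 1.5 * z - 0.5 * z**3
    dy = (z + 1 - r * np.sqrt(y)) / beta**2
    drdot = wn**2 * (u - r) - 2 * zeta * wn * rdot   # r(s)/u(s) realized in state space
    return np.array([dz, dy, rdot, drdot])

def discretized_linearization(xe, ue, eps=1e-6):
    """Numerical Jacobians (A, B) around (xe, ue), then forward-Euler discretization."""
    A = np.column_stack([(f(xe + eps * np.eye(4)[:, i], ue) - f(xe, ue)) / eps
                         for i in range(4)])
    B = ((f(xe, ue + eps) - f(xe, ue)) / eps).reshape(4, 1)
    return np.eye(4) + T * A, T * B

xe = np.array([0.5, 1.6857, 1.1547, 0.0])   # equilibrium point from the text
Ad, Bd = discretized_linearization(xe, ue=1.1547)
```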
Owing to this comparable solver time, the proposed approach can be implemented on small machines.

## VII Conclusion

This article demonstrates that DNNs can be used in the framework of LBMPC by dual-time-scale training and by using bounded neurons in the last activation layer. The proposed approach is able to provide a faster solution because the DNN can be trained on a separate machine. Since LBMPC allows any estimator as an oracle, different DNN architectures can be investigated for different classes of problems by following the proposed approach. Some interesting extensions may be possible along the lines of vision-based navigation [29], stochastic MPC [30, 31], Bayesian neural networks [32] and crystallization processes [33, 34].

Fig. 3: Proposed approach and [3] both have faster transient response than that of linear MPC, but the proposed approach has smaller overshoot than that of [3].
2305.09101
Automatic learning algorithm selection for classification via convolutional neural networks
As in any other task, the process of building machine learning models can benefit from prior experience. Meta-learning for classifier selection gains knowledge from characteristics of different datasets and/or previous performance of machine learning techniques to make better decisions for the current modeling process. Meta-learning approaches first collect meta-data that describe this prior experience and then use it as input for an algorithm selection model. In this paper, however, we propose an automatic learning scheme in which we train convolutional networks directly with the information of tabular datasets for binary classification. The goal of this study is to learn the inherent structure of the data without identifying meta-features. Experiments with simulated datasets show that the proposed approach achieves nearly perfect performance in identifying linear and nonlinear patterns, outperforming the traditional two-step method based on meta-features. The proposed method is then applied to real-world datasets, making suggestions about the best classifiers that can be considered based on the structure of the data.
Sebastian Maldonado, Carla Vairetti, Ignacio Figueroa
2023-05-16T01:57:01Z
http://arxiv.org/abs/2305.09101v1
# Automatic learning algorithm selection for classification via convolutional neural networks ###### Abstract As in any other task, the process of building machine learning models can benefit from prior experience. Meta-learning for classifier selection gains knowledge from characteristics of different datasets and/or previous performance of machine learning techniques to make better decisions for the current modeling process. Meta-learning approaches first collect meta-data that describe this prior experience and then use it as input for an algorithm selection model. In this paper, however, we propose an automatic learning scheme in which we train convolutional networks directly with the information of tabular datasets for binary classification. The goal of this study is to learn the inherent structure of the data without identifying meta-features. Experiments with simulated datasets show that the proposed approach achieves nearly perfect performance in identifying linear and nonlinear patterns, outperforming the traditional two-step method based on meta-features. The proposed method is then applied to real-world datasets, making suggestions about the best classifiers that can be considered based on the structure of the data. Meta-learning, Meta-features, Algorithm selection, Convolutional networks. ## 1 Introduction Deep neural networks (DNNs) have shown extraordinary advances in recent years due to their ability to collect and process large volumes of data [12, 33]. Their well-deserved popularity has led to remarkable methodological developments and a broad spectrum of new applications [4, 12]. In this paper, we propose the use of deep neural networks for meta-learning. This task is typically described as "learning to learn" in the sense that knowledge is extracted from datasets and machine learning algorithms, and then used to alleviate the challenges that face the current learning process [3, 19, 25]. Meta-learning is a broad field of machine learning with its origin some decades before the rise of deep learning. Some primary studies in this field learned from task properties, constructing "meta-features" that represent the datasets. The goal is then to transfer insights from the most similar tasks to a new task [3]. The reasoning behind meta-learning has been extended to other relevant challenges associated with deep neural networks, such as transfer learning or multitask learning [30]. These challenges, however, are not related to this paper, which focuses on learning algorithm selection for classification [3, 10]. This paper proposes the automatic extraction of meta-features using convolutional neural networks (CNNs). These techniques are well-known DNN architectures that are designed to learn from images, among other data sources [4, 12]. Instead of inputting images to a CNN, we consider simulated tabular datasets with distinguishable linear and nonlinear patterns, defining a multiclass task. The proposed strategy represents a novel approach to meta-learning and algorithm selection. Specifically, it involves leveraging deep learning techniques to learn meta-features from simulated patterns, which can then be applied in the algorithm selection process. To the best of our knowledge, this is the first time such an approach has been proposed. We are confident that this methodology constitutes a significant contribution to the field, opening an interesting research line.
This study seeks to answer the following questions: (1) can we learn linear and nonlinear data structures from raw tabular datasets using deep learning? and (2) if yes, can we leverage this knowledge to make suggestions on the most suitable machine learning method or methods for a specific task? Our study is consistent with the reasoning behind the "no free lunch" (NFL) theorem in the sense that it may not be effective to gain knowledge from tasks and apply it to a completely unrelated dataset. However, existing meta-learning studies have shown the advantages of learning from prior experience [27]. This paper is organized as follows. Section 2 provides an overview of relevant meta-learning studies. The proposed meta-learning framework is presented in Section 3. Section 4 discusses the results obtained using simulated and benchmark datasets. Finally, Section 5 summarizes the primary conclusions and presents possible directions for future developments. ## 2 Prior Work on Meta-learning for Classifier Selection Meta-learning is an umbrella term that encompasses several approaches that gain experience across tasks. The goal of meta-learning is to avoid time-consuming "trial and error" processes and make better choices when estimating machine learning models. Some examples of these approaches include the following (see [27] for a comprehensive literature review): * _Learning from previous model estimations and evaluations:_ Given a set of possible configurations \(\Theta\), the goal of this approach is to train a meta-learner based on previous model evaluations to make recommendations on a new task. In a hyperparameter search task, for example, the accuracy of a classifier can be estimated on different datasets (or variants of the same datasets using bootstrapping or cross-validation), and a meta-learner can be used to suggest an optimal hyperparameter configuration (or a set of candidate configurations, called a _portfolio_ [27]). Note that a configuration can be a set of hyperparameters and network architectures or pipeline components [27]. * _Transfer learning:_ The reasoning behind this approach is to consider models that are trained with data from one or more sources as a starting point for developing a new model on a similar task. Although this idea has been applied to traditional machine learning methods, this approach has been particularly successful in deep learning (DL) [27]. In tasks such as object recognition and natural language processing (NLP), DL requires a large amount of data to achieve superior performance in relation to other methods. A _pretrained model_ can be constructed with publicly available data (e.g., Wikipedia or books in the case of NLP tasks [20], or ImageNet in the case of visual object recognition [18]), which is then adapted to the new task via _fine-tuning_. * _Learning from meta-features:_ The goal of this approach is to make better decisions during machine learning by learning from meta-features, i.e., properties that describe the datasets. These characterizations can be used for hyperparameter selection [16, 19] or classifier selection [3, 10], among other tasks. A task-similarity metric can be defined to transfer knowledge from one task to a new similar task, or a meta-learner can be constructed [2]. For example, in [3], a decision tree learner is trained on several meta-features that were constructed on more than 100 datasets from the UCI Repository [5] and other sources.
A multiclass classification problem with five classes related to five different classification algorithms is defined. Alternatively, information on dataset characteristics can be useful to infer the performance of feature selection methods [21]. The method proposed in this study is related to the latter approach in the sense that we learn from characterizations of other tasks to make recommendations of suitable classifiers. However, instead of using a two-step approach by first constructing meta-features and then estimating a meta-learner, the proposed meta-learner is fed with tabular datasets directly. We can distinguish different purposes in the creation of meta-features in the sense that specific measures aim to identify specific patterns, such as feature interdependence, class overlap, or task similarity: * _Feature normality and dispersion:_ Statistical measures used to describe a distribution can be considered to assess normality in the covariates. Some common examples are skewness \(\left(\frac{E(X-\mu)^{3}}{\sigma^{3}}\right)\) and kurtosis \(\left(\frac{E(X-\mu)^{4}}{\sigma^{4}}\right)\), with \(\mu\) and \(\sigma\) being the mean and standard deviation of the variable, respectively [28]. Other measures that can be related to feature normality are the interquartile range (IQR, the difference between quartile 3 and quartile 1) and the value of the 90th quantile [3]. Other statistics that can assess tendency and dispersion are the arithmetic mean, the geometric mean, the harmonic mean, the trimmed mean, the standard deviation, and the mean absolute deviation (MAD, \(\frac{\sum|x_{i}-\overline{x}|}{n}\)) [3]. Finally, the Index of Dispersion (ID) indicates whether the data points are scattered or clustered [3]. * _Complexity:_ Simple measures can be an indicator of the complexity of the learning task, such as the size of the dataset (number of rows and columns) and the number of classes. These measures can be related to the expected training times [17]. Alternatively, the percentage of missing values and outliers can be related to the complexity of the preprocessing step [3, 24]. * _Feature interdependence:_ The level of redundancy in a dataset can be assessed using the Pearson correlation (\(\rho=\frac{\sigma_{xy}}{\sigma_{x}\sigma_{y}}\)) or by analyzing the eigenvalues that result from applying the principal component analysis (PCA) method for feature extraction [17]. Some PCA-based meta-features are the canonical correlations (square roots of the eigenvalues), the first and last PC [3], and the skewness and kurtosis of the first PC [6]. * _Class overlap and feature relevance:_ The meta-learning literature has reported different metrics to evaluate overall feature relevancy, such as the center of gravity (Euclidean norm between minority and majority classes) [3], the entropy of classes (\(H(C)=-\sum_{i}\pi_{i}\log\pi_{i}\)), the mean mutual information (MMI), the equivalent number of variables (ENV, the ratio between the class entropy and the MMI), and the noise-signal ratio (\(NSR=\frac{H(X)-M(C,X)}{M(C,X)}\)) [3]. A large value for the latter metric suggests that the dataset contains several irrelevant noisy variables, and therefore its dimensionality can be reduced without affecting the classification accuracy [3]. * _Landmarks:_ This strategy computes measures designed to assess task similarity, using the learners themselves to describe the dataset. The idea is to compare differences in terms of performance between configurations and/or datasets using simple classifiers, such as 1-NN or naive Bayes [7].
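As a reference for the meta-feature-based baseline (MF+DT) used later in the experiments, the following sketch computes a handful of the meta-features listed above for a single tabular dataset \((X,y)\); the names and the selection of measures are illustrative and do not reproduce the full catalogue of [3].

```python
import numpy as np
from scipy import stats

def meta_features(X, y):
    """A few of the meta-features above for one tabular dataset (X, y)."""
    feats = {
        "n_rows": X.shape[0],
        "n_cols": X.shape[1],
        "skewness_mean": float(np.mean(stats.skew(X, axis=0))),
        "kurtosis_mean": float(np.mean(stats.kurtosis(X, axis=0, fisher=False))),
        "iqr_mean": float(np.mean(stats.iqr(X, axis=0))),
        "mad_mean": float(np.mean(np.mean(np.abs(X - X.mean(axis=0)), axis=0))),
        # mean absolute pairwise Pearson correlation (feature interdependence)
        "mean_abs_corr": float(np.mean(np.abs(
            np.corrcoef(X, rowvar=False)[np.triu_indices(X.shape[1], k=1)]))),
    }
    # class entropy H(C) = -sum_i pi_i log(pi_i)
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    feats["class_entropy"] = float(-np.sum(p * np.log(p)))
    return feats
```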
Although meta-learning has usually been applied for algorithm recommendation in classification [29, 35], other machine learning tasks in which it has shown good results include regression [15], time series analysis [13], data stream mining [23], and clustering [7, 11, 22]. ## 3 Proposed Classifier Selection Framework via CNNs This paper proposes a novel meta-learning framework in which tabular datasets are introduced into a learning algorithm directly, which differs from the traditional two-step approach that involves the construction of meta-features. In this sense, each tabular dataset is treated as an "image", using CNNs for model training. The reasoning behind this approach is to avoid the loss of information that occurs when meta-features are constructed and subsequently used as inputs for a traditional classifier, such as decision trees [3]. Our framework consists of four steps: 1. **Simulation of patterns:** We define different training patterns that can be related to the performance of learning algorithms. These patterns can be simulated under different noise and class-overlap conditions and then used to feed a CNN in a supervised manner. For example, a logistic regression or a linear support vector machine (SVM) can be recommended when dealing with a tabular dataset with a linear pattern, for which the most suitable classifier is a hyperplane, while a kernel-based SVM, NN, or random forests can be recommended when the dataset at hand follows a nonlinear pattern. 2. **Dealing with inputs of different sizes:** Because the proposed goal is to make recommendations for real-world tabular datasets, we expect the datasets to be of different sizes. Therefore, we must reshape the training and test samples to a homogeneous size. We propose simple approaches for dealing with images of different sizes with CNNs, such as padding and principal component analysis (PCA). 3. **CNN training:** Often used for image-related tasks such as pattern recognition, segmentation, or object detection, CNNs are among the most popular deep learning variants [31]. CNNs are used in this framework because they outperform other methods for image recognition due to their ability to manage tensor data without the need for an additional feature extraction step [9, 34]. 4. **Application to real-world datasets:** The final step consists of applying the CNN model to real-world datasets to identify one of the simulated patterns that was considered during training. We can link the pattern found to one or more suitable classifiers for this pattern. It is important to assess the confidence of the classifier in its choice, and therefore we must analyze the probabilistic output of the network using a softmax function. For example, consider a learning task with five simulated patterns. If the largest predicted probability for a given real-world dataset is 0.3, then the model is undecided, and no recommendation can be made because this probability is too close to a random choice (0.2 for a 5-class task). In contrast, the model is certain when the probability of a given class is near 1, leading to a trustworthy recommendation. The first step is arguably the most challenging, because only a comprehensive set of patterns under different noise conditions would lead to an adequate application on real-world datasets.
The success of the approach strongly relies on this first step. As a first attempt, we discuss five different patterns that are well known in the machine-learning literature, such as the two-moons or XOR patterns [32]. We believe that this paper opens an exciting new line of research in which different sets of simulated patterns can be designed. Regarding the second step, padding is used to resize tabular datasets that are smaller than a target size by extending the area across which a CNN processes. This is done by introducing additional rows or columns of zeros [1]. In contrast, PCA can reduce tabular datasets to a target size by finding (orthogonal) linear combinations of the original variables in such a way that the variance is maximized. Thus, we can shrink the datasets while keeping their inherent structure [14]. For the third step, we consider standard CNN architectures with Conv2D layers, which compute two-dimensional convolutions between the inputs and a matrix of weight vectors called kernels. The rectified linear unit (ReLU) is used as the activation function, which allows adequate convergence of the weight-updating process, reducing the risk of the vanishing/exploding gradient problem [1]. Additionally, dropout layers are used in the network to prevent overfitting by removing hidden units from a specific layer section, setting them to zero [1]. After the convolutional and dropout layers, a flatten layer is considered to collapse the two-dimensional inputs into a one-dimensional vector. Dense (fully connected) layers are subsequently included, finishing with the softmax output layer [31]. The optimization process considers \(l_{1}\) regularization for the weights together with the minimization of the loss function (multiclass cross-entropy loss). We emphasize that the approach provides an **automatic learning algorithm selection model for classification** in the sense that, once the CNN model is constructed, it can be applied directly to any tabular dataset to suggest a classifier based on its characteristics. The first three steps are performed only once and are not required for the application of the model. We plan to make the CNN model publicly available so that the community can use it and improve it, similar to other pre-trained deep learning models. ## 4 Experimental Results To validate the proposed meta-learning framework, we first trained CNN models with simulated data that had different linear and nonlinear patterns and compared the performances with the traditional approach based on meta-features proposed in [3]. The results of these experiments are shown in Section 4.1. The application of CNNs to real-world tabular datasets is reported in Section 4.2. ### Construction of the Classifier Selection Model We simulated a total of 50,000 tabular datasets with five different patterns (10,000 datasets each). Each dataset consists of tuples in the form \(\{(\mathbf{x}_{1},y_{1}),\ldots,(\mathbf{x}_{m},y_{m})\}\), where \(\mathbf{x}_{i}\in\mathbb{R}^{n}\) and \(y_{i}\in\{-1,1\}\) for \(i=1,\ldots,m\). The supervised learning task consists of predicting the right pattern of the datasets (i.e., a multiclass problem), and the inputs are two-class datasets of size \(m=1000\), in which the two classes are perfectly balanced. Five two-dimensional patterns that act as classes for the meta-learner were defined: a linear pattern (C1), an XOR pattern (C2), a two-moons pattern (C3), a "sandwich" pattern (C4), and a quadratic pattern (C5).
These patterns were generated as follows (a simulation sketch for several of them is given below, after the list of classifiers):

* **Linear pattern:** We combine two Gaussian functions, one for each class, varying the mean and standard deviation of each Gaussian to create different overlap and noise conditions.
* **XOR pattern:** We combine four Gaussian functions, two for each class, varying the mean and standard deviation of each Gaussian to create different overlap and noise conditions. The four Gaussians shape the well-known XOR pattern related to the exclusive disjunction logical operation. This nonlinear pattern is also known as "checkerboard".
* **"Two moons" pattern:** This well-known synthetic dataset constructs a swirl pattern shaped like two moons. We vary the conditions in terms of noise and overlap.
* **"Sandwich" pattern:** We simulate samples using one Gaussian function interspersed with two other Gaussians related to a second class. This results in a nonlinear pattern in the form of a "sandwich". We vary the mean and standard deviation of each Gaussian distribution to create different overlap and noise conditions.
* **Quadratic pattern:** We simulate samples based on two quadratic functions, one for each class, with a marginal overlap. We vary the conditions in terms of noise and overlap.

Figure 1 shows the five simulated patterns.

Figure 1: Simulated patterns for the meta-learning framework.

To highlight the performance differences on the various patterns, we used several well-known classifiers, evaluating performance on a test set of simulated tabular datasets. The following classification methods and their respective hyperparameter values were considered:

* _Logistic regression (Logit):_ This statistical method is suitable for a linear pattern because it constructs a separating hyperplane. The slopes of this function are given by a vector of coefficients \(\beta\).
* _Decision tree (DT):_ This method performs a branching process in a hierarchical manner until a pruning criterion is met. The Gini index was considered for splitting, while three different values were considered for the complexity parameter (default pruning parameter values of the 'rpart' R implementation): \(cp=\{0,0.03,0.81\}\).
* _\(k\)-nearest neighbor (\(k\)-NN):_ This classifier predicts new samples according to the labels of a neighborhood of size \(k\) obtained from the training set, with \(k=\{5,7,9\}\).
* _Random forest (RF):_ This method constructs an ensemble of decision trees via bagging. The same criteria as for the DT classifier were considered, constructing 500 trees with two randomly selected variables.
* _Artificial neural network (ANN):_ A shallow ANN classifier with one hidden layer was constructed. We explored 1, 3, and 5 hidden units and the following values for the weight decay parameter: \(\epsilon=\{0,0.0001,0.1\}\).
* _Support vector machine (SVM):_ We considered the standard kernel-based SVM model with Tikhonov regularization and hinge loss. We explored the following values for the regularization parameter: \(C=\{0.25,0.5,1,2,4\}\), while the kernel width \(\sigma\) was held constant at a value of 1.004454.

Table 1 shows the area under the curve (AUC) measure on the test set for all the methods and simulated patterns (C1 to C5). The largest AUC value is highlighted in bold type, while the second-largest AUC appears underlined. Because all the patterns are binary classification problems, the AUC is a suitable performance metric that does not rely on a threshold and balances type-I and type-II errors adequately [8].
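Before examining the results in Table 1, the sketch below illustrates how pattern datasets of this kind can be simulated for three of the five patterns; the exact means and variances used in the paper are not reported, so the values here are assumptions.

```python
import numpy as np
from sklearn.datasets import make_moons

rng = np.random.default_rng(0)
m = 1000  # samples per dataset, balanced classes

def linear_pattern(noise=1.0):           # C1: one Gaussian per class
    X0 = rng.normal([-1, -1], noise, size=(m // 2, 2))
    X1 = rng.normal([+1, +1], noise, size=(m // 2, 2))
    return np.vstack([X0, X1]), np.r_[-np.ones(m // 2), np.ones(m // 2)]

def xor_pattern(noise=0.5):              # C2: two Gaussians per class ("checkerboard")
    c0 = np.vstack([rng.normal([-1, -1], noise, size=(m // 4, 2)),
                    rng.normal([+1, +1], noise, size=(m // 4, 2))])
    c1 = np.vstack([rng.normal([-1, +1], noise, size=(m // 4, 2)),
                    rng.normal([+1, -1], noise, size=(m // 4, 2))])
    return np.vstack([c0, c1]), np.r_[-np.ones(m // 2), np.ones(m // 2)]

def two_moons_pattern(noise=0.15):       # C3
    X, y = make_moons(n_samples=m, noise=noise, random_state=0)
    return X, 2 * y - 1                  # map labels {0,1} -> {-1,+1}
```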
Table 1 shows that the top classification approaches vary depending on the simulated pattern: * _C1:_ Logit is the best classifier, followed by ANN. * _C2:_ \(k\)-NN is the best classifier, followed by RF. * _C3:_ SVM is the best classifier, followed by RF. * _C4:_ SVM is the best classifier, followed by \(k\)-NN. * _C5:_ SVM is the best classifier, followed by RF. At this stage, the proposed approach can match the expected classifiers in the sense that the logistic regression performs best for the sole linear pattern (C1), while nonlinear methods (kernel-based SVM, random forest, and \(k\)-NN) perform best with the remaining four nonlinear patterns. For these patterns, the logit classifier does not perform well.

\begin{table} \begin{tabular}{l c c c c c} \hline \hline Classifier & C1 & C2 & C3 & C4 & C5 \\ \hline Logit & **0.938** & 0.590 & 0.954 & 0.609 & 0.904 \\ DT & 0.859 & 0.803 & 0.890 & 0.845 & 0.903 \\ \(k\)-NN & 0.909 & **0.885** & 0.965 & 0.966 & 0.970 \\ RF & 0.910 & 0.884 & 0.966 & 0.962 & 0.981 \\ ANN & 0.926 & 0.577 & 0.953 & 0.691 & 0.867 \\ SVM & 0.920 & 0.882 & **0.968** & **0.981** & **0.985** \\ \hline \hline \end{tabular} \end{table} Table 1: Predictive performance (AUC) for the five simulated patterns.

Although it is clear that "there is no free lunch", we can still make recommendations for classifiers if the patterns are identified correctly. We define two different sets of experiments based on the five patterns for model training. We first consider two-dimensional matrices of identical size (i.e., \(m=1000\) and \(n=2\)). We refer to this meta-learning approach as CNN1. We also consider heterogeneous datasets in terms of the number of samples and variables: we generated the patterns using a random number of samples between 800 and 1400 and introduced 0 to 4 irrelevant variables (Gaussian noise), adding them to the two-dimensional patterns (i.e., \(m\in[800,1400]\) and \(n\in[2,6]\)). We refer to this approach as CNN2. Training CNNs with a dataset that is heterogeneous in size reflects the fact that the final model is meant to be applied to real-world data, which are clearly heterogeneous. Training with tabular datasets of different sizes (CNN2) is therefore a more realistic approach to meta-learning. However, transformations are required before model training. As mentioned in the previous section, we combined two simple strategies to manage "images" of different sizes (simulated tabular datasets in the proposed case): (1) PCA to shrink the datasets that are larger than a target average sample size (1099 observations for each tabular dataset, where a total of 1099 components are selected); and (2) padding to create additional space within a dataset smaller than the target average size. Padding is also used to achieve the target size of 7 columns, which is the maximum size of a tabular input \(\{\mathbf{X},\mathbf{y}\}\) with the two-dimensional pattern, four irrelevant variables, and the label vector \(\mathbf{y}\).
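A minimal sketch of this resizing step is shown below. The paper's PCA-based reduction along the sample axis is not fully specified, so the shrink step here substitutes plain random subsampling (named as such in the comments), while the zero-padding steps follow the description above.

```python
import numpy as np

def resize_dataset(D, target_rows=1099, target_cols=7, seed=0):
    """D is the stacked tabular input [X | y] of shape (m, n)."""
    rng = np.random.default_rng(seed)
    m, n = D.shape
    if m > target_rows:
        # paper: PCA along the sample axis (details unreported);
        # stand-in here: plain random subsampling of rows
        keep = np.sort(rng.choice(m, target_rows, replace=False))
        D = D[keep]
    elif m < target_rows:
        D = np.vstack([D, np.zeros((target_rows - m, n))])      # zero padding (rows)
    if D.shape[1] < target_cols:
        D = np.hstack([D, np.zeros((D.shape[0], target_cols - D.shape[1]))])  # zero padding (cols)
    return D   # shape (target_rows, target_cols)
```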
Combinations of kernel sizes in CNNs have yielded successful results in object recognition tasks [34, 9]. The second stage performs the classification task and begins with a flattened layer, followed by a dense layer with 256 neurons, a dropout layer with a rate of 0.5, another dense layer with 5 neurons, and finally a softmax layer with five outputs for the 5 simulated patterns proposed in this experimental setting. The network was trained with a batch size of 1000 tabular datasets and 10 epochs. We used Adam as the optimizer, which has shown excellent empirical results in terms of efficiency, scalability, and performance (i.e., faster convergence) [1]. Adam is a first-order gradient-based solver stochastic optimization with adaptive learning rates, allowing a more effective and smooth learning process. The following values were considered for the optimization process with Adam: \(\alpha=1e-03\) (initial learning rate), \(\beta_{1}=0.9\) and \(\beta_{2}=0.999\) (parameters that control the exponential decay rates of the moving averages), and \(\epsilon=1e-07\) (conditioning parameter). These parameter values were selected according to the original Adam paper [1]. The proposed methodology was implemented in the TensorFlow Python library. Table 2 shows the performance of five different meta-learning strategies by considering various multiclass metrics: micro and macro averages for the precision, recall, and f1 (harmonic mean of the precision and recall). We consider the proposed four CNN variants and the meta-learning approach suggested in [3] (MF+DT), which consists of constructing several meta-features and using a decision tree for model training with the meta-features as inputs. This latter strategy includes several meta-learning studies given the set of meta-features considered in this study (see [3, 6, 17, 24]). We consider the heterogeneous dataset used to train CNN2 to construct the meta-features. Table 2 shows that the two proposed CNN variants achieved similar performances with nearly perfect classification. We can conclude that the proposed approach can learn from various simulated patterns and identify them in other unseen tabular datasets with marginal variations. In contrast, the two-step approach that constructs meta-features and then implements a classifier is clearly inferior in terms of its predictive capabilities, with an accuracy of approximately 75% on average. ### Application on benchmark datasets Next, we explore the ability of the proposed model to make suggestions for suitable classifiers. We compare the proposed CNN2 approach with the alternative strategy based on meta feta-features (MF+DT) on well-known binary classification datasets from the UCI Repository [5]. CNN2 is chosen because its construction is designed to manage heterogeneous datasets, although CNN1 achieved marginally better classification results. Also, CNN2 and MF+DT consider the same input information and are therefore comparable. Table 3 shows relevant metadata for all the benchmark datasets, including the number of examples \(m\), the number of variables \(n\), and the percentage of examples in each class (min.,maj. ), and the imbalance ratio (IR), computed as the fraction of the majority class and minority class samples. The tabular datasets exhibit important differences in terms of size and imbalance ratio, which is important to assess meta-learning approaches properly (see Table 3). Note that these datasets have no "class" in the sense that they have no known linear/nonlinear pattern. 
Therefore, multiclass classification performance cannot be computed. However, we can first compute the binary classification performance for each tabular dataset and then compare it with the patterns suggested with both CNN and MF+DT. \begin{table} \begin{tabular}{l l l l} \hline Perf. Measure & CNN1 & CNN2 & MF+DT \\ \hline precision (macro) & 0.996 & 0.986 & 0.790 \\ recall (macro) & 0.996 & 0.985 & 0.752 \\ f1-score (macro) & 0.996 & 0.985 & 0.745 \\ f1-score (micro) & 0.996 & 0.985 & 0.752 \\ \hline \end{tabular} \end{table} Table 2: Predictive performance for the various meta-learning approaches. The results of the various classifiers on the 23 datasets are shown in Table 4. The best classifier for each dataset is highlighted in bold type. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline ID & Logit & DT & \(k\)-NN & RF & ANN & SVM \\ \hline ds1 & 0.777 & 0.82 & **0.892** & 0.864 & 0.793 & 0.822 \\ ds2 & 0.865 & 0.863 & 0.844 & 0.865 & 0.858 & **0.866** \\ ds3 & 0.982 & 0.977 & 0.997 & 0.986 & **1** & **1** \\ ds4 & 0.544 & **0.644** & 0.56 & 0.613 & 0.604 & 0.5 \\ ds5 & 0.51 & 0.594 & **0.719** & 0.671 & 0.676 & 0.702 \\ ds6 & 0.647 & 0.514 & 0.616 & **0.666** & 0.527 & 0.617 \\ ds7 & 0.701 & 0.755 & 0.594 & **0.79** & 0.759 & 0.775 \\ ds8 & 0.5 & 0.519 & 0.625 & **0.745** & 0.5 & 0.5 \\ ds9 & 0.919 & 0.96 & **0.994** & 0.987 & 0.978 & 0.981 \\ ds10 & 0.504 & 0.556 & **0.65** & 0.611 & 0.559 & 0.536 \\ ds11 & 0.761 & 0.786 & 0.886 & **0.935** & 0.912 & 0.932 \\ ds12 & 0.771 & 0.822 & 0.93 & **0.941** & 0.906 & 0.899 \\ ds13 & 0.5 & 0.5 & **0.577** & 0.5 & 0.5 & 0.5 \\ ds14 & 0.667 & 0.753 & **0.842** & 0.804 & 0.745 & 0.792 \\ ds15 & 0.538 & 0.498 & 0.591 & **0.634** & 0.599 & 0.585 \\ ds16 & **0.824** & 0.82 & 0.813 & 0.821 & 0.821 & 0.821 \\ ds17 & 0.769 & 0.817 & **0.952** & 0.929 & 0.793 & 0.826 \\ ds18 & 0.68 & 0.681 & 0.85 & **0.771** & 0.757 & 0.68 \\ ds19 & 0.743 & 0.723 & 0.787 & 0.783 & **0.827** & 0.807 \\ ds20 & 0.692 & 0.675 & **0.697** & 0.677 & 0.695 & 0.695 \\ ds21 & **0.867** & 0.803 & 0.84 & 0.862 & 0.866 & **0.867** \\ ds22 & 0.876 & 0.93 & 0.941 & **0.946** & 0.865 & 0.913 \\ ds23 & 0.957 & 0.976 & 0.979 & **0.981** & 0.975 & 0.972 \\ \hline \hline \end{tabular} \end{table} Table 4: Predictive performance for the various benchmark datasets. 
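Before discussing the benchmark results reported below, the following Keras sketch assembles the CNN of Section 4.1 with the stated hyperparameters; the padding mode ("same") and the \(l_{1}\) factor are assumptions, since they are not reported in the text.

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_cnn(input_shape=(1099, 7, 1), n_classes=5, l1=1e-4):
    reg = regularizers.l1(l1)                       # l1 weight regularization
    model = tf.keras.Sequential([
        layers.Input(shape=input_shape),
        # stage 1: feature extraction (2x Conv2D 32@5x5, dropout, 2x Conv2D 64@3x3)
        layers.Conv2D(32, (5, 5), activation="relu", padding="same", kernel_regularizer=reg),
        layers.Conv2D(32, (5, 5), activation="relu", padding="same", kernel_regularizer=reg),
        layers.Dropout(0.25),
        layers.Conv2D(64, (3, 3), activation="relu", padding="same", kernel_regularizer=reg),
        layers.Conv2D(64, (3, 3), activation="relu", padding="same", kernel_regularizer=reg),
        # stage 2: classification (flatten, dense 256, dropout, dense 5, softmax)
        layers.Flatten(),
        layers.Dense(256, activation="relu", kernel_regularizer=reg),
        layers.Dropout(0.5),
        layers.Dense(n_classes),
        layers.Softmax(),
    ])
    opt = tf.keras.optimizers.Adam(learning_rate=1e-3, beta_1=0.9,
                                   beta_2=0.999, epsilon=1e-7)
    model.compile(optimizer=opt, loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```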
\begin{table} \begin{tabular}{c l c c c c} \hline \hline ID & dataset & \(m\) & \(n\) & \%class(min,maj) & IR \\ \hline ds1 & abalone7 & 4177 & 8 & (9.3, 90.7) & 9.7 \\ ds2 & australian-credit & 690 & 14 & (44.5,55.5) & 1.2 \\ ds3 & banknote-auth & 1372 & 4 & (44.5,55.5) & 1.2 \\ ds4 & breast-cancer & 569 & 30 & (37.3,62.7) & 1.7 \\ ds5 & bupa-liver & 345 & 6 & (42, 58) & 1.4 \\ ds6 & german-credit & 1000 & 24 & (30.0,70.0) & 2.3 \\ ds7 & heart-statlog & 270 & 13 & (44.4,55.6) & 1.3 \\ ds8 & horse-colic & 300 & 27 & (33.0,67.0) & 2.0 \\ ds9 & image-1 & 2310 & 19 & (14.3, 85.7) & 6 \\ ds10 & image-5 & 2310 & 19 & (38.1,61.9) & 1.6 \\ ds11 & ionosphere & 351 & 34 & (35.9,64.1) & 1.8 \\ ds12 & monk-2 & 432 & 6 & (47.2,52.8) & 1.1 \\ ds13 & oil-spill & 937 & 49 & (4.4,95.6) & 21.9 \\ ds14 & phoneme & 5404 & 19 & (29.3,70.7) & 2.4 \\ ds15 & pima-diabetes & 768 & 8 & (34.9,65.1) & 1.9 \\ ds16 & ring & 7400 & 20 & (49.5,50.5) & 1.0 \\ ds17 & solar-flares-M & 1389 & 10 & (4.9,95.1) & 19.4 \\ ds18 & sonar & 208 & 60 & (46.6,53.4) & 1.4 \\ ds19 & splice & 1000 & 60 & (48.3,51.7) & 1.1 \\ ds20 & titanic & 2201 & 3 & (32.3,67.7) & 2.1 \\ ds21 & waveform & 5000 & 21 & (33.1,66.9) & 2.0 \\ ds22 & yeast02579vs368 & 1004 & 8 & (9.9, 90.1) & 9.1 \\ ds23 & yeast5 & 1484 & 8 & (3.0, 97.0) & 32.78 \\ \hline \hline \end{tabular} \end{table} Table 3: Descriptive statistics for all the benchmark datasets.

Table 4 shows that no method outperforms all others. In contrast to the performance of these classifiers on simulated data (see Table 1), random forest and \(k\)-NN achieved better performance than SVM. These results confirm the advantages of RF when facing noisy mixed-type data (numerical and dummy variables). Next, the results of the proposed CNN approach for meta-learning are reported in Table 5. For each dataset, the predicted probability of belonging to a given training pattern is shown. The largest predicted probability for a given dataset is highlighted in bold type. The best classifier (bc) found in Table 4 is also reported, along with whether the recommendation is successful or not (column "hit"). A "hit" occurs when one of the two classifiers recommended by the predicted class coincides with the best classifier bc. For ds1, for example, C2 is the predicted class with a probability of 0.978, and therefore the recommendations for this dataset based on the simulated data are first \(k\)-NN and then RF (see Table 1). The best classifier is \(k\)-NN, and therefore the recommendation is valid. A hit is highlighted with the symbol ✓, including the relative position of the recommended classifier (1 or 2). In contrast, for dataset ds4, the recommendations are also \(k\)-NN and RF (class C2, with a confidence of 0.947), but the best classifier is DT, and therefore the recommendation is not successful (hit=✗). Note that there is a tie for bc in three of the 23 datasets. In such cases, the best classifier that is considered a hit appears underlined when one of the two best classifiers is indeed recommended by the CNN. Table 5 demonstrates the advantages of the CNN: recommendations are successful 78.2% of the time (18 of the 23 datasets). Because a random recommender would achieve a 33.3% success rate at suggesting two of the six classifiers, the proposed meta-learning approach clearly outperforms a random model. Of the 18 hits, 13 correspond to the second-best model, while only five correspond to the best model. However, at least two classifiers should be recommended, because the performance of the top method is typically similar to that of the second-best method.
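The recommendation logic behind the "hit" column can be summarized in a few lines. The mapping below follows Table 1; the abstention threshold is an assumption based on the 0.3 example given in Section 3.

```python
import numpy as np

# pattern index -> two best classifiers from Table 1
PATTERN_TO_CLASSIFIERS = {
    0: ["logit", "ANN"],   # C1
    1: ["k-NN", "RF"],     # C2
    2: ["SVM", "RF"],      # C3
    3: ["SVM", "k-NN"],    # C4
    4: ["SVM", "RF"],      # C5
}

def recommend(softmax_probs, threshold=0.3):
    """Return the top-2 classifier recommendation, or None when undecided."""
    c = int(np.argmax(softmax_probs))
    if softmax_probs[c] < threshold:
        return None        # too close to a random guess (0.2 for 5 classes)
    return PATTERN_TO_CLASSIFIERS[c]
```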
Table 5 also shows the confidence of the CNN in predicting the training pattern: the largest probability is below 0.8 in only three of the 23 datasets. This result demonstrates that the method indeed identifies one of the various patterns in the datasets. Finally, we compare the performance of the proposed method with the traditional two-step approach. Table 6 shows the results when a DT is applied to the meta-features. The identified training pattern is highlighted with the symbol \(\clubsuit\). In the case of ties, the best classifier (bc) that is recommended by the meta-learning method appears underlined. Note that landmarks are not included because they consider information that is not available to the proposed approach (e.g., the performance of the various traditional classifiers).

\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline ID & C1 & C2 & C3 & C4 & C5 & bc & hit \\ \hline ds1 & 0.022 & **0.978** & 0 & 0 & 0 & k-NN & ✓(1) \\ ds2 & **0.965** & 0.026 & 0 & 0 & 0.009 & SVM & ✗ \\ ds3 & 0.044 & 0 & 0 & 0.057 & **0.898** & ANN/SVM & ✓(1) \\ ds4 & 0.041 & **0.947** & 0.001 & 0.001 & 0.01 & DT & ✗ \\ ds5 & 0.163 & **0.679** & 0.02 & 0.026 & 0.111 & k-NN & ✓(1) \\ ds6 & 0.028 & 0.007 & 0.008 & 0.142 & **0.815** & RF & ✓(2) \\ ds7 & 0.09 & **0.869** & 0.009 & 0.008 & 0.025 & RF & ✓(2) \\ ds8 & 0 & 0 & 0.006 & 0.078 & **0.916** & RF & ✓(2) \\ ds9 & 0.002 & 0.15 & 0 & **0.848** & 0 & k-NN & ✓(2) \\ ds10 & 0 & 0.004 & 0 & **0.996** & 0 & k-NN & ✓(2) \\ ds11 & 0.007 & **0.992** & 0 & 0 & 0.001 & RF & ✓(2) \\ ds12 & 0.002 & **0.998** & 0 & 0 & 0 & RF & ✓(2) \\ ds13 & 0 & 0 & 0 & **0.994** & 0.006 & k-NN & ✓(2) \\ ds14 & 0 & **1** & 0 & 0 & 0 & k-NN & ✓(1) \\ ds15 & 0 & 0 & 0 & 0 & **1** & RF & ✓(2) \\ ds16 & 0 & **0.98** & 0.02 & 0 & 0 & logit & ✗ \\ ds17 & 0.028 & 0.444 & 0 & **0.528** & 0 & k-NN & ✓(2) \\ ds18 & 0.126 & **0.559** & 0.075 & 0.143 & 0.097 & RF & ✓(2) \\ ds19 & 0.038 & 0.003 & 0.004 & 0.016 & **0.939** & ANN & ✗ \\ ds20 & 0.008 & 0.147 & 0 & **0.845** & 0 & k-NN & ✓(2) \\ ds21 & 0 & 0.069 & **0.931** & 0 & 0 & logit/SVM & ✓(1) \\ ds22 & 0.019 & 0.01 & **0.895** & 0.064 & 0.012 & RF & ✓(2) \\ ds23 & **0.99** & 0.008 & 0 & 0.002 & 0 & RF & ✗ \\ \hline \hline \end{tabular} \end{table} Table 5: Predictions for the proposed CNN approach.

Table 6 shows that the alternative meta-learning approach is not as successful as the proposed CNN: its recommendations are successful 65.2% of the time (15 of the 23 datasets), compared to 78.2% for the proposed method. The two-step approach nevertheless clearly outperforms a random recommender. Of its 15 hits, 11 correspond to the second-best model; in this sense, the alternative method is also worse than the proposed CNN at identifying the best-performing classifier. ## 5 Discussion and Conclusions This paper proposes a novel meta-learning approach in which CNNs are used to learn directly from tabular datasets, without the need to construct meta-features.
The goal is to avoid the loss of information that results from a two-step approach. The proposed model can make accurate classifier recommendations for this type of dataset, outperforming a two-step approach based on meta-features. The primary challenge is to define a comprehensive set of simulated patterns that can be extrapolated to real-world datasets. The results show that CNNs used in a supervised manner can achieve nearly perfect identification of simulated patterns; however, the capacity of the proposed model to extrapolate this knowledge to real-world datasets requires further research. Fortunately, the experiments in this study show promising results. There are some methodological aspects that can be improved. For example, we could use more sophisticated strategies to match datasets of different sizes in a CNN, such as spatial pyramid pooling (SPP). This method consists of a special type of pooling layer that allows the inclusion of images of heterogeneous sizes: an SPP layer pools the features in an image and generates outputs that feed the dense layers [26]. Also, different CNN architectures could be explored. These changes, however, would have a minor impact on the proposed framework because the classification accuracy is already near 100% on the simulated datasets. This study should be seen as a first attempt at the development of a recommender system that can defy the NFL theorem. Therefore, this study opens interesting lines for future research. Apart from the identification of new patterns that can be simulated and subsequently linked to classifiers, the proposed method can be used to learn from more sophisticated data structures, such as text or images. The proposed model can also be tailored to the class-imbalance problem, in which classifiers and resampling strategies can be recommended. Finally, generative models can be considered to provide a wider diversity of training patterns.

\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline ID & C1 & C2 & C3 & C4 & C5 & bc & hit \\ \hline ds1 & & & & & \(\clubsuit\) & k-NN & ✗ \\ ds2 & & & & & & SVM & ✓(1) \\ ds3 & \(\clubsuit\) & & & & & ANN/SVM & ✓(2) \\ ds4 & & & & & & DT & ✗ \\ ds5 & & \(\clubsuit\) & & & & k-NN & ✓(1) \\ ds6 & & & & & \(\clubsuit\) & RF & ✓(2) \\ ds7 & & & & & \(\clubsuit\) & RF & ✓(2) \\ ds8 & & & & & \(\clubsuit\) & RF & ✓(2) \\ ds9 & & & & & \(\clubsuit\) & k-NN & ✗ \\ ds10 & & & & & & k-NN & ✗ \\ ds11 & \(\clubsuit\) & & & & & RF & ✗ \\ ds12 & & \(\clubsuit\) & & & & RF & ✓(2) \\ ds13 & & & & & & k-NN & ✗ \\ ds14 & & & & \(\clubsuit\) & & k-NN & ✓(2) \\ ds15 & & & & & & RF & ✓(2) \\ ds16 & \(\clubsuit\) & & & & & logit & ✓(1) \\ ds17 & & & & & & k-NN & ✗ \\ ds18 & & & & & & RF & ✓(2) \\ ds19 & \(\clubsuit\) & & & & & ANN & ✓(2) \\ ds20 & \(\clubsuit\) & & & & & k-NN & ✓(1) \\ ds21 & \(\clubsuit\) & & & & & logit/SVM & ✗ \\ ds22 & & & & & & RF & ✓(2) \\ ds23 & & & & & & RF & ✓(2) \\ \hline \hline \end{tabular} \end{table} Table 6: Predictions for the tree-based meta-learning approach trained on meta-features.

## Acknowledgements The authors gratefully acknowledge financial support from ANID, PIA/PUENTE AFB220003 and FONDECYT-Chile, grants 1200221 and 11200007.
2307.08466
Generalizable Classification of UHF Partial Discharge Signals in Gas-Insulated HVDC Systems Using Neural Networks
Undetected partial discharges (PDs) are a safety critical issue in high voltage (HV) gas insulated systems (GIS). While the diagnosis of PDs under AC voltage is well-established, the analysis of PDs under DC voltage remains an active research field. A key focus of these investigations is the classification of different PD sources to enable subsequent sophisticated analysis. In this paper, we propose and analyze a neural network-based approach for classifying PD signals caused by metallic protrusions and conductive particles on the insulator of HVDC GIS, without relying on pulse sequence analysis features. In contrast to previous approaches, our proposed model can discriminate the studied PD signals obtained at negative and positive potentials, while also generalizing to unseen operating voltage multiples. Additionally, we compare the performance of time- and frequency-domain input signals and explore the impact of different normalization schemes to mitigate the influence of free-space path loss between the sensor and defect location.
Steffen Seitz, Thomas Götz, Christopher Lindenberg, Ronald Tetzlaff, Stephan Schlegel
2023-07-17T13:21:02Z
http://arxiv.org/abs/2307.08466v2
Generalizable Classification of UHF Partial Discharge Signals in Gas-Insulated HVDC Systems Using Neural Networks ###### Abstract Undetected partial discharges (PDs) are a safety critical issue in high voltage (HV) gas insulated systems (GIS). While the diagnosis of PDs under AC voltage is well-established, the analysis of PDs under DC voltage remains an active research field. A key focus of these investigations is the classification of different PD sources to enable subsequent sophisticated analysis. In this paper, we propose and analyze a neural network-based approach for classifying PD signals caused by metallic protrusions and conductive particles on the insulator of HVDC GIS, without relying on pulse sequence analysis features. In contrast to previous approaches, our proposed model can discriminate the studied PD signals obtained at negative and positive potentials, while also generalizing to unseen operating voltage multiples. Additionally, we compare the performance of time- and frequency-domain input signals and explore the impact of different normalization schemes to mitigate the influence of free-space path loss between the sensor and defect location. Fault diagnosis, HVDC, partial discharge, neural networks, machine learning. ## I Introduction The increasing integration of renewable energy sources into the existing high-voltage grid requires the use of high-voltage direct current (HVDC) systems. This technology is superior to conventional AC technology for transmitting large amounts of power over long distances because of lower losses and the elimination of reactive power. Apart from high efficiency, the compact installation of high-voltage equipment is also a critical consideration. Both requirements, high-efficiency power transmission and space-saving installation, are met by gas-insulated systems (GIS), which have been developed for use in transmission systems under AC voltage stress since the 1960s. A crucial aspect of ensuring fault-free operation in HVDC GIS is the automatic classification of PD-generating defects prior to interpreting the measurement results [1]. Figure 1 provides a visual representation of an HVDC GIS and illustrates common sources of PDs, including solid metallic particles on the insulation, on free potential or freely moving in the insulating gas, and conducting protrusions on the encapsulation or conductor [2]. In contrast to conventional AC GIS, the number of these devices in operation under DC stress is rather low. Thus, the measurement [3], classification [4], and physical interpretation [5] of DC PDs continue to be active areas of research. Recently, studies investigating PD development in HVDC GIS indicate that the physical processes responsible for PD formation are the same as those observed under AC voltage stress [6]. However, due to the constant electric field, continuous directed movement of charge carriers, and the generation of space and surface charges, the behavior of DC PD events, such as amplitude and repetition rate, differs significantly from AC PDs [2]. As a result, the methods and findings related to AC PD classification cannot be directly applied to HVDC GIS. For example, the well-established AC GIS PD detection method, which relies on measurements in the ultra-high frequency (UHF) range and human expert evaluation of phase-resolved partial discharge (PRPD) plots [7, 8], is not suitable for distinguishing DC PD source signals due to the lack of necessary phase information. 
Consequently, the development and testing of novel DC-specific PD classification methods are essential for ensuring the safety of HVDC GIS. The most advanced technique for evaluating and identifying PDs under DC voltage stress is pulse sequence analysis (PSA) followed by the assessment of patterns by human experts [9]. This method leverages the amplitude and time information of individual PD events in the UHF signal to identify the underlying defect [10]. However, this approach relies on time-consuming human judgment and it is limited to scenarios with a single PD source. Fig. 1: Schematic representation of an HVDC gas-insulated system and related typical partial-discharge-generating faults. The presented work aims to classify UHF PD signals caused by particle-based and protrusion-based defects. [2] In situations where multiple sources are active, the PD signals overlap, hindering clear identification and necessitating the use of complex techniques for source separation [11, 12]. To circumvent the time-intensive process of human evaluation, Schober and Schichler [13] proposed a machine learning-based approach for autonomously classifying DC PD signals in GIS. Their novel approach employed Support Vector Machines (SVMs) and multi-layer perceptrons (MLPs) to assess PSA features derived from UHF measurements. Despite its promising results, this method inherits the limitations associated with hand-crafted PSA features, such as challenges in classifying signals of multiple active PD sources. Furthermore, evaluation methods based on neural networks have shown promising results in effectively distinguishing multiple active signal sources in similar domains when directly applied to time-domain measurements [14, 15]. Consequently, when these techniques are used for DC GIS PD classification, they could potentially eliminate the need for PSA feature evaluation and source separation techniques. Recently, Beura et al. [16] introduced a first neural network-based method for autonomous feature extraction from UHF time-domain measurements. Their study showed results comparable to the pre-existing PSA methods [13], demonstrating the efficiency of this new approach. However, several critical questions that are essential for the practical application of this method remain open. First, as already illustrated in Figure 1, DC PDs in GIS originate from multiple sources. Among these source types, PDs generated by particles on the insulator surface are particularly significant [10]. However, the existing literature on PD classification using neural networks based on UHF measurements does not provide sufficient evidence regarding the capability to classify this specific fault type. Moreover, there is a lack of information regarding the classification accuracy of UHF signal-based neural network models specifically for protrusions and insulator particles at both negative and positive polarities of the inception voltage. This information is crucial for a risk assessment of the asset and a successful transfer of the laboratory experiment results to on-site installations [2]. In addition, previous studies under AC stress have demonstrated that utilizing fast Fourier transform (FFT) coefficients extracted from UHF signals as training data yields improved classification results in similar AC classification tasks [17]. Therefore, it would be advantageous to explore the transferability of these findings to the domain of DC PD classification. 
Furthermore, all typical defects of HVDC GIS can experience stress from various multiples of the inception voltage \(U_{\mathrm{i}}\). These individual voltages lead to slightly different discharge patterns [2]. However, the specific \(U_{\mathrm{i}}\) at the on-site GIS is not known in practice, and creating a suitable amount of data for every combination of defect type and \(U_{\mathrm{i}}\) multiple is impractical for experts. Thus, it is essential to investigate the ability of possible DC PD classification models to generalize from laboratory-based signals to measurements based on unseen \(U_{\mathrm{i}}\) multiples of the GIS. Another challenge in HVDC GIS arises from the amplitude of UHF PD signals, which is influenced by free-space path loss, particularly when the signal has to cross barrier insulators. Thus, the measured amplitude depends on the distance between the defect location and the sensor. As this distance varies in applications outside laboratory tests, it might be advantageous to generally exclude amplitude-related information during model training by applying different normalization methods. In summary, this study contributes to HVDC GIS PD classification in three significant aspects: * First, we extend the approach of Beura et al. [16] by providing the missing evidence that a neural network-based architecture can effectively classify DC PD time-domain UHF signals originating from particles on an insulator and fixed metallic protrusions at both negative and positive DC voltage stress. Furthermore, we aim to improve the classification performance by incorporating frequency-domain signals and exploring different layer configurations, including the number, ordering, and hyperparameters (stride, kernel size) of our network. * Second, we present the first investigation of an HVDC PD classification model in terms of its ability to classify measurements obtained at multiples of the inception voltage \(U_{\mathrm{i}}\) that have never been included in the model training. By doing so, we seek to gain insights into the transfer learning capabilities of the model, allowing for generalization from laboratory data to on-site installations under unknown DC voltage stress levels. * Third, we analyze the impact of different normalization methods on the performance of an HVDC PD classification model to mitigate potential influences of free-space path losses between the sensor and defect locations. This analysis aims to enhance the robustness of the model in real-world scenarios where the distance between the sensor and defect location may vary. ## II Experimental Setup & Methods This study is conducted using UHF DC-PD signals measured under DC voltage stress, as documented in Götz [2] and Götz et al. [6]. The measurements were performed in a gas-insulated test setup shown in Figure 2 (top). Fig. 2: Picture of the experimental setup (top) and three-dimensional schematics of the electrode arrangements used for the emulation of DC PDs at the gas-solid interface (bottom, left) and at a fixed metallic protrusion (bottom, right) [2]. Similar to Beura et al. [16], model electrode arrangements are installed inside a test vessel to simulate the typical behaviour of the defects. The tests were performed using sulfur hexafluoride (SF\({}_{6}\)) as the insulation gas at an absolute pressure of 0.5 MPa. 
In the protrusion arrangement (Figure 2; bottom, right), the metallic protruding needle has a length of 5 mm and the needle tip is installed at a distance of \(55\) mm from the opposing high-voltage electrode. To simulate PD at the gas-solid interface (Figure 2; bottom, left), a \(13\) mm needle is placed on the surface of an epoxy insulator, typically used in gas-insulated systems. The distance between the needle tip and the opposing high-voltage electrode is 26.7 mm. The high DC voltage of positive or negative polarity, up to 250 kV, is applied via the air-SF\({}_{6}\) bushing installed in the centre of the test vessel. PD signals are detected using a standard UHF sensor [18] installed in the test vessel. A 30 dB amplifier is used to increase the signal-to-noise ratio. The time-dependent PD signals are sampled at \(10\) GS/s using a Teledyne LeCroy WavePro 735 ZiA digital oscilloscope. Each measurement \(M_{j}=\{s_{j,1},\ldots,s_{j,u},\ldots,s_{j,l_{\text{s}}}\}\), \(j\in\{1,\ldots,N\}\), has a length of \(l_{\text{s}}=20002\) individual samples \(s_{j,u}\in\mathbb{R}\), taken at time steps \(u\). The resulting data set contains a total of \(N=33000\) individual PD measurements taken at different multiples of the inception voltage \(U_{\text{i}}\). A so-called source class label is assigned to each \(M_{j}\) based on the defect type, the polarity of the needle and the \(U_{\text{i}}\) multiple. Note that the \(U_{\text{i}}\) information is only utilized to investigate the generalization capability of the model (Section III-B). In the classification task the model discards this information and learns to predict the so-called output class labels depicted in Figure 3 based on the given source class measurement. Thus, the output class labels are based only on the defect type and needle polarity. The number of available source class measurements is shown in Table I. In each of our experiments, the data is randomly drawn from these recorded measurements and divided into a training and a test set by allocating 80 % of the measurements of each class to the training set \(D_{\text{train}}\) and the remaining 20 % to the test set \(D_{\text{test}}\), which are then normalized using one of three different methods. ### _Normalization methods_ The objective of data normalization or min-max scaling is to standardize features to a consistent scale. This typically leads to improved performance and training stability of the model. However, depending on the choice of the method, its application may affect the information contained in the signal. Normalization can be applied in three different ways: #### II-A1 Trainset normalization (Tr) In trainset normalization, every sample in each measurement \(M_{j}\) within \(D_{\text{train}}\) and \(D_{\text{test}}\) is min-max normalized according to: \[\bar{s}_{j,u}=\frac{s_{j,u}-\text{Min}(D_{\text{train}})}{\text{Max}(D_{\text{train}})-\text{Min}(D_{\text{train}})}. \tag{1}\] Min/Max\((\cdot)\) are operations that return the minimum/maximum sample value over all individual measurements within the dataset. As shown in equation (1), trainset normalization scales each sample in each measurement with respect to the maximum and minimum sample value of all individual measurements across all classes in the dataset. Thus, this method preserves the amplitude information for each measurement in the dataset. However, due to the previously stated free-space path loss problem, it might be advantageous to generally exclude amplitude-related information during model training. 
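To make equation (1) concrete, a minimal NumPy sketch follows; the class and measurement normalization variants introduced next differ only in the set over which Min/Max\((\cdot)\) are evaluated. The array contents and shapes below are placeholder assumptions for illustration, not the actual dataset.

```python
import numpy as np

# Placeholder UHF measurements, shape (n_measurements, l_s); illustrative only.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 20002))
X_test = rng.normal(size=(25, 20002))

def min_max_scale(signals, reference):
    """Equation (1): scale samples by the min/max of a reference set."""
    lo, hi = reference.min(), reference.max()
    return (signals - lo) / (hi - lo)

# Trainset normalization (Tr): one global min/max taken from D_train only.
X_train_tr = min_max_scale(X_train, X_train)
X_test_tr = min_max_scale(X_test, X_train)

# Measurement normalization (Me): per-measurement min/max, which removes all
# amplitude information (the class variant would use per-class min/max instead).
lo = X_test.min(axis=1, keepdims=True)
hi = X_test.max(axis=1, keepdims=True)
X_test_me = (X_test - lo) / (hi - lo)
```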
#### II-A2 Class normalization (Cl) Class normalization scales each sample within a measurement relative to the maximum and minimum sample values of all measurements belonging to the corresponding source class from Table I. In this method, the amplitude-related information is preserved within samples of one class, while it is hidden between measurements of different classes. #### II-A3 Measurement normalization (Me) In measurement normalization, each sample in a measurement \(M_{j}\) is scaled based on the maximum and minimum sample values within that specific measurement. In this case, the model can no longer rely on amplitude-related features to distinguish samples of different classes in the dataset. Fig. 3: Time-domain signal examples of the considered defects. Each PD measurement is based on either particles at the gas-solid interface (Pa\({}^{-}\), Pa\({}^{+}\)) or protrusions (Pr\({}^{-}\), Pr\({}^{+}\)) at negative and positive potential of the needle electrode. \begin{table} \begin{tabular}{c c c|c c c} \hline \hline \multicolumn{3}{c|}{Negative Polarity} & \multicolumn{3}{c}{Positive Polarity} \\ \hline \(\text{Pa}^{-}_{1\cdot U_{\text{i}}}\) & \(\text{Pa}^{-}_{1\cdot U_{\text{i}}}\) & \(\text{Pa}^{-}_{3\cdot U_{\text{i}}}\) & \(\text{Pa}^{+}_{1\cdot U_{\text{i}}}\) & \(\text{Pa}^{+}_{1.2\cdot U_{\text{i}}}\) & \(\text{Pa}^{+}_{1.5\cdot U_{\text{i}}}\) \\ 3500 & 3500 & 3500 & 3500 & 3500 & 3500 \\ \hline \(\text{Pr}^{-}_{2\cdot U_{\text{i}}}\) & \(\text{Pr}^{-}_{3\cdot U_{\text{i}}}\) & - & \(\text{Pr}^{+}_{2\cdot U_{\text{i}}}\) & - & - \\ 4000 & 4000 & - & 4000 & - & - \\ \hline \hline \end{tabular} \end{table} TABLE I: Number of available measurements \(M_{j}\) of each source class in the dataset of [2]. ### _Classification model_ In our experiments, the normalized UHF amplitude signal measurements and their respective FFT coefficients are separately used to train the neural network-based model depicted in Figure 4. These networks use sample measurements and their assigned output class labels in \(D_{\mathrm{train}}\) to learn their weight and bias parameters and ultimately predict the correct output class label. The architecture in our work is based on 1D-convolutional [19] and perceptron layers [20]. We tested different numbers and arrangements of these layer types in conjunction with different activation functions such as ReLU, Sigmoid, and Tanh to achieve an improved classification result. For brevity, only the final model structure is given. This structure and the corresponding activation functions were empirically determined with respect to maximizing classification performance. In the final architecture, the 1D-convolutional layers perform sequential, discrete convolutions between a filter kernel and the data. The extracted features are processed by a ReLU activation function after each layer, followed by an average pooling layer and a flattening layer. The data is further processed by a perceptron layer with 512 neurons and a ReLU activation function, which maps the extracted information to the output classes depicted in Figure 3. The model parameters are adjusted using back-propagation, and the optimal parameter set is determined by minimizing a cross-entropy loss using Adam optimization [21]. The initial learning rate of 0.0001 and a batch size of 64 were determined through a random search. 
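A sketch of this architecture in PyTorch could look as follows. Only the layer types, the 512-neuron perceptron layer, the four outputs, the loss, and the optimizer settings are fixed by the text; the filter counts and kernel sizes below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PDClassifier(nn.Module):
    """Five 1D-CNN layers with ReLU, average pooling, flattening, and a
    512-neuron MLP head, as described above. Filter counts and kernel
    sizes are assumptions; only the overall structure follows the paper."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=9), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=9), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=9), nn.ReLU(),
            nn.AdaptiveAvgPool1d(64),   # average pooling to a fixed length
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(64 * 64, 512), nn.ReLU(),
            nn.Linear(512, n_classes),  # softmax is implicit in the loss
        )

    def forward(self, x):               # x: (batch, 1, 20002)
        return self.head(self.features(x))

model = PDClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # lr from the paper
criterion = nn.CrossEntropyLoss()       # trained with batch size 64
```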
In each of our experiments, the network is initialized with 15 different weight initialization seeds to account for the effects of a non-optimal start to the optimization process. The number of individual seeds was chosen to represent a reasonable trade-off between the statistical robustness of the result and the computational time of the model. The performance of each trained model is evaluated using unseen data from \(D_{\mathrm{test}}\). In addition, similar to k-fold cross-validation, a different set of measurements from the limited experimental data is randomly assigned to \(D_{\mathrm{train}}\) and \(D_{\mathrm{test}}\) in each of the 15 training and testing procedures. The obtained classification results are averaged to determine a more reliable true positive rate \(A\) of the proposed architecture for each defect class. Similarly, the true negative, false negative and false positive rates of the model are determined for all other output classes. ## III Experiments & Results ### _Time- and frequency-domain classification results_ In this section, we analyze the capability of the proposed architecture to classify unseen measurements of PD generated by a fixed protrusion or a particle adhering to the gas-solid interface. To accomplish this, we utilize UHF signals measured in either the time-domain or frequency-domain at a specific \(U_{\mathrm{i}}\) multiple for each output class. In this initial experiment, the baseline dataset \(D\) comprises measurements of \(\text{Pa}^{-}_{1\cdot U_{\mathrm{i}}}\), \(\text{Pa}^{+}_{1\cdot U_{\mathrm{i}}}\), \(\text{Pr}^{-}_{2\cdot U_{\mathrm{i}}}\), and \(\text{Pr}^{+}_{2\cdot U_{\mathrm{i}}}\), which are then divided into \(D_{\mathrm{train}}\) and \(D_{\mathrm{test}}\). However, while it would be beneficial to classify the output class based on data measured at the same constant multiple of \(U_{\mathrm{i}}\), for technical reasons the available dataset only contained an unbalanced measurement distribution [2]. As a result, there is an unequal distribution of measurements recorded at the same \(U_{\mathrm{i}}\) multiples for each output class, as shown in Table I. Figure 5 illustrates the confusion matrix of the experiment involving the output classes derived from \(\text{Pa}^{-}_{1\cdot U_{\mathrm{i}}}\), \(\text{Pa}^{+}_{1\cdot U_{\mathrm{i}}}\), \(\text{Pr}^{-}_{2\cdot U_{\mathrm{i}}}\) and \(\text{Pr}^{+}_{2\cdot U_{\mathrm{i}}}\). The model achieves a near-perfect true positive rate \(A\) for UHF signals resulting from positive particle (\(\text{Pa}^{+}\)) or positive protrusion (\(\text{Pr}^{+}\)) defects. For the remaining two output classes, the model obtains only a slightly worse result. In particular, it exhibits a nearly symmetrical confusion between measurements associated with negative particles (\(\text{Pa}^{-}\)) and negative protrusions (\(\text{Pr}^{-}\)). As mentioned in Section II-B, the model in this experiment was trained separately with 15 different weight initialization seeds. Thus, the presented classification rates of each class in the confusion matrix are the average across all individual runs. The overall performance of the architecture is represented by the average true positive rate \(\bar{A}\), determined by averaging the true positives of all output classes in the confusion matrix. Fig. 4: The proposed model consists of five consecutive 1D-CNN layers with ReLU activation, followed by an average pooling and a flatten layer. The features are fed into a fully connected multi-layer perceptron. This network has 512 ReLU neurons in the input layer and four output neurons with a softmax activation to assign one of the output classes (\(\text{Pa}^{-}\), \(\text{Pa}^{+}\), \(\text{Pr}^{-}\), \(\text{Pr}^{+}\)). 
In Figure 5, these values are indicated by the diagonal elements of the matrix, representing the true positive rate of correctly classified PD samples for each class, while the off-diagonal elements represent the corresponding misclassification rates when comparing the predicted class against the ground truth. As summarized in Table II, the trained model achieves \(\bar{A}_{\text{Tr}}=0.9653\) on the trainset-normalized measurements in \(D_{\text{test}}\). The performance on time-domain test data is generally better if the amplitude information is at least partially (\(\bar{A}_{\text{Cl, TD}}=0.9932\)) or completely (\(\bar{A}_{\text{Me, TD}}=0.9977\)) hidden after normalization. If the time-domain input data is converted to the frequency-domain instead, the classification based on the trainset-normalized FFT coefficients only achieves an average of \(\bar{A}_{\text{Tr, FFT}}=0.8048\). Thus, if trainset normalization is selected and model generalization to unseen \(U_{\text{i}}\) multiples is not required, time-domain data should be used to train the classifier. Analogous to the time-domain signal, the FFT-based true positive rate is higher for class and measurement normalization. Classifying PDs from the FFT coefficients of the UHF signal resulted in the highest classification performance in our experiments, with \(\bar{A}_{\text{Me, FFT}}=0.9983\). However, the minor difference in performance compared to using the time-domain data hardly justifies the computational cost of converting every measurement to the frequency-domain. Thus, the following experiment relies solely on the time-domain data. ### _Transfer learning experiment results (generalization to an unseen multiple of \(U_{\text{i}}\))_ As previously mentioned in Section I, it is important to examine PD classification models in terms of their generalization to measurements of defects recorded at unknown multiples of the GIS inception voltage (\(U_{\text{i}}\)). Therefore, each PD measurement is assigned a source class based on the defect type and the inception voltage determined at the time of recording. Note that the exact value of \(U_{\text{i}}\) does not need to be further specified, since the approach of this work is centered around the idea of using laboratory data at specific \(U_{\text{i}}\) multiples to train a model that generalizes to data from unknown \(U_{\text{i}}\) multiples at the on-site GIS. In the first part of our experiment, we evaluate the generalization performance based on a transfer learning framework, where we pre-train our model on the data of the same source classes used in Section III-A (\(\text{Pa}^{-}_{1\cdot U_{\text{i}}}\), \(\text{Pa}^{+}_{1\cdot U_{\text{i}}}\), \(\text{Pr}^{-}_{2\cdot U_{\text{i}}}\) and \(\text{Pr}^{+}_{2\cdot U_{\text{i}}}\)) to create a baseline for our further studies. Note that, analogous to the previous experiment, every measurement is normalized by either trainset, class, or measurement normalization. 
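Both the average true positive rate \(\bar{A}\) above and the generalization rate \(G\) used below reduce to diagonal entries of a row-normalized confusion matrix. A minimal sketch with illustrative counts (not the paper's actual results):

```python
import numpy as np

def true_positive_rates(cm):
    """Per-class TPR from a confusion matrix whose rows are ground truth."""
    cm = np.asarray(cm, dtype=float)
    return np.diag(cm) / cm.sum(axis=1)

# Hypothetical counts, classes ordered Pa-, Pa+, Pr-, Pr+:
cm = np.array([[660, 0, 35, 5],
               [0, 698, 0, 2],
               [30, 0, 665, 5],
               [0, 2, 0, 698]])
A_bar = true_positive_rates(cm).mean()  # average TPR over all output classes
# G is obtained the same way: the TPR of the withheld Pa+ 1.5*Ui source class.
```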
After the training process, the generalization performance of the baseline model is assessed by monitoring the model generalization rate \(G\), which is equal to the true positive rate of the model on the untrained \(\text{Pa}_{1.5\cdot U_{\text{i}}}^{+}\) measurements in the test set, i.e. \(G:=A(\text{Pa}_{1.5\cdot U_{\text{i}}}^{+})\). Note that the data of \(\text{Pa}_{1.5\cdot U_{\text{i}}}^{+}\) is explicitly excluded from the training set in all of the further experiments. Therefore, the described measurements are used only at inference time to test the model's generalization to an unseen source class. In this setting, the model can only rely on the knowledge learned from the source class data available in \(D_{\text{train}}\) to classify any of the \(\text{Pa}_{1.5\cdot U_{\text{i}}}^{+}\) measurements in \(D_{\text{test}}\). When trainset normalization (Tr) is used to normalize the baseline dataset (Base), the model achieved a generalization rate of \(G_{\text{Tr,Base}}=0.6801\). With class normalization (Cl), the model achieved a significantly lower result of \(G_{\text{Cl,Base}}=0.4010\), while with measurement normalization, the generalization rate of \(G_{\text{Me,Base}}=0.2085\) did not outperform a random classifier over our four output classes (\(G_{\text{r}}=0.25\)). A possible explanation could be the lack of amplitude-related information after data preprocessing with these normalization methods. To investigate this issue further, we refine our baseline model by gradually adding measurement data from a previously withheld source class to the trainset \(D_{\text{train}}\). The extended training set is used to train the model from scratch. In parallel, the generalization rate of the model is determined after each addition, analogous to the baseline experiment. The sequential training was performed for two different consecutive sequence orders ("Order 1" and "Order 2") to rule out any effects of the order in which individual source classes are added to \(D_{\text{train}}\). The results of this experiment shown in Figure 6 indicate that the model achieves a generalization rate reliably above the baseline results, regardless of the normalization type and sequential ordering, after data from the first additional \(U_{\text{i}}\) multiple is added to the baseline dataset. After training the model with data from all available source classes in Table I (except for the \(\text{Pa}_{1.5\cdot U_{\text{i}}}^{+}\) data), a generalization performance is achieved that far exceeds random guessing. \begin{table} \begin{tabular}{c|c c c} \hline \hline Input & \multicolumn{3}{c}{Normalization Method} \\ Datatype & Measurement (Me) & Class (Cl) & Trainset (Tr) \\ \hline Time-Domain (TD) & **0.9977** & 0.9932 & 0.9653 \\ Freq-Domain (FFT) & **0.9983** & 0.9975 & 0.8048 \\ \hline \hline \end{tabular} \end{table} TABLE II: Average true positive classification rates \(\bar{A}\) for different normalization strategies and input datatypes. Analogous to training with a single additional source class, this result is independent of the type of normalization and the sequential order. In contrast to the previous experiment from Table II, trainset normalization performs best in this transfer learning setting. The highest generalization rate with trainset normalization was achieved after training with order 1 (O1), resulting in \(G_{\text{Tr, O1}}=0.9982\). In comparison, the highest generalization rate of class normalization, at the end of order 2 (O2), was \(G_{\text{Cl, O2}}=0.9578\). 
The measurement-normalization-based approach achieves at most \(G_{\text{Me, O1}}=0.8672\), at the end of order 1. As for the baseline dataset, a possible explanation could be the lack of amplitude-related information after data preprocessing with these normalization methods. In practice, the model is expected to generalize from laboratory measurements to data of a monitored GIS with an unknown \(U_{\mathrm{i}}\) multiple. Therefore, assuming negligible effects of free-space path loss, it is recommended to select normalization methods that preserve the amplitude information in the data. Similar to Section III-A, it is crucial to evaluate model performance on the remaining source class measurements from Table I while assessing generalization on the withheld \(\text{Pa}^{+}_{1.5\cdot U_{\mathrm{i}}}\) data. When measurements of all source classes from Table I (except \(\text{Pa}^{+}_{1.5\cdot U_{\mathrm{i}}}\)) are included in \(D_{\mathrm{train}}\), the model classifies almost all test set measurements correctly. This is illustrated by the confusion matrix in Figure 7. For the sake of brevity, not all combinations of normalization methods and training routines are shown, as they achieved comparable results. In the depicted configuration, the average true positive rate over the available classes in the training set is \(\widetilde{A}_{\text{Tr, O1}}=0.9979\). When this score is combined with the associated generalization rate \((G_{\text{Tr, O1}}=0.9982)\), the model attains an average final true positive rate of \(\tilde{A}_{\text{Tr, O1}}=0.9977\) for all measurements across all classes in our experiment. ### _Comparison to other methods_ Compared to other methods, our model classifies HVDC PDs originating from metallic particles on the gas-solid interface (insulator) (Pa) of a GIS with an average accuracy of 99.67%, which has not been reported in any other non-PSA-based DC PD classification study. In addition, our refined CNN/MLP-based architecture achieves a 1.29% higher average performance than the previous best model in the protrusion (Pr) detection task [16]. Schober and Schichler [13] previously studied Pr and Pa classification; however, their PSA-based method did not report individual classification rates for these PD types. In addition, the number of measurements available for each class is very limited, making any comparative analysis of their work even more difficult. Furthermore, our model is the first to differentiate between negative and positive polarities of \(\text{Pr}^{\pm}\) and \(\text{Pa}^{\pm}\) measurements, while achieving previously unreported generalization capabilities to untrained \(U_{\mathrm{i}}\) multiples. Moreover, we are the first to investigate the effect of normalization on the classification performance. The main contributions of our work to the field are summarized in Table III. Note that we excluded approaches based on conventional PSA [9, 10] and PDs under AC [19, 22, 23, 24] from this comparison, as they either depend on time-consuming human expert evaluation or rely on measurements of related but different physical phenomena. ## IV Conclusion In this paper, we evaluate the performance and generalizability of a neural network-based method for the classification of DC partial discharges (PDs) at both negative and positive polarity. The ultra-high frequency (UHF) time- and frequency-domain PD signals are generated by conductive protrusions and particles on an insulator. 
Fig. 6: Model generalization rate \(G\) on the untrained \(U_{\mathrm{i}}\) multiple \(\text{Pa}^{+}_{1.5\cdot U_{\mathrm{i}}}\) when additional data from specific previously withheld source classes is included in \(D_{\mathrm{train}}\). The issue is investigated for trainset (Tr), class (Cl), and measurement (Me) normalization and two different sequential training orders. Fig. 7: Confusion matrix and generalization rate \(G_{\mathrm{Full}}\) (red) of the proposed model. The network was trained using the full trainset-normalized measurements of all available classes from Table I (except \(\text{Pa}^{+}_{1.5\cdot U_{\mathrm{i}}}\)), using the training sequence of order 1. Our study extends the literature on HVDC GIS PD classification in multiple aspects: * First, we report the first classification of UHF signals originating from particles adhering to the gas-solid interface (insulators) with an accuracy of \(99.67\)%, without the need for pulse sequence analysis (PSA). Additionally, our model outperforms all previously reported methods by \(1.29\)% on the UHF protrusion signal classification task, achieving an accuracy of \(99.99\)%. The use of frequency-domain signals achieved only a negligible average performance advantage of \(0.04\)% compared to the UHF time-domain signal classification model result, which hardly justifies the computational cost of converting each measurement to the frequency-domain. * Second, we are the first to investigate the transfer learning ability of a UHF signal-based neural network PD classification model to generalize to measurements recorded under unknown DC voltage stress levels. Regardless of the training order, our model correctly classifies \(99.87\)% of the \(U_{\mathrm{i}}\) multiple signals not included in the train set as well as \(99.77\)% of all available PD test set measurements. * Third, we analyze the effect of preprocessing, including measurement, class, and trainset normalization, on the performance of a PD classification model in HVDC GIS to mitigate the possible influence of free-space path losses between the sensor and defect location. Our results show that methods partially eliminating amplitude information perform better in the PD classification experiments, while amplitude-preserving methods outperform in the \(U_{\mathrm{i}}\) multiple transfer learning task. In future work, the proposed approach should be applied to measurements of the remaining PD types in HVDC GIS, such as insulator cavities, freely moving particles, and material on free potential. We also suggest considering different sensor positions, electrode geometries, and spacings to enhance the methodology. Furthermore, it would be important to study the transfer learning capability on data from additional operating voltages to facilitate the practical application of the method. ## Acknowledgments This research was partly funded by the German Research Foundation (DFG, project number 379542208) and the German Federal Ministry of Economic Affairs and Climate Action (BMWK, reference: KK5056101KA0). The authors acknowledge the computing time through the ZIH at TU Dresden and the feedback from Thomas Linde and Carsten Knoll.
2309.01850
Uncertainty in AI: Evaluating Deep Neural Networks on Out-of-Distribution Images
As AI models are increasingly deployed in critical applications, ensuring the consistent performance of models when exposed to unusual situations such as out-of-distribution (OOD) or perturbed data is important. Therefore, this paper investigates the uncertainty of various deep neural networks, including ResNet-50, VGG16, DenseNet121, AlexNet, and GoogleNet, when dealing with such data. Our approach includes three experiments. First, we used the pretrained models to classify OOD images generated via DALL-E to assess their performance. Second, we built an ensemble from the models' predictions using probabilistic averaging for consensus due to its advantages over plurality or majority voting. The ensemble's uncertainty was quantified using average probabilities, variance, and entropy metrics. Our results showed that while ResNet-50 was the most accurate single model for OOD images, the ensemble performed even better, correctly classifying all images. Third, we tested model robustness by adding perturbations (filters, rotations, etc.) to new epistemic images from DALL-E or real-world captures. ResNet-50, being the best-performing model, was chosen for this experiment. While it classified 4 out of 5 unperturbed images correctly, it misclassified all of them post-perturbation, indicating a significant vulnerability. These misclassifications, which are clear to human observers, highlight AI models' limitations. Using saliency maps, we identified regions of the images that the model considered important for its decisions.
Jamiu Idowu, Ahmed Almasoud
2023-09-04T22:46:59Z
http://arxiv.org/abs/2309.01850v1
# Uncertainty in AI: Evaluating Deep Neural Networks on Out-of-Distribution Images ###### Abstract As AI models are increasingly deployed in critical applications, ensuring the consistent performance of models when exposed to unusual situations such as out-of-distribution (OOD) or perturbed data is important. Therefore, this paper investigates the uncertainty of various deep neural networks, including ResNet-50, VGG16, DenseNet121, AlexNet, and GoogleNet, when dealing with such data. Our approach includes three experiments. First, we used the pretrained models to classify OOD images generated via DALL-E to assess their performance. Second, we built an ensemble from the models' predictions using probabilistic averaging for consensus due to its advantages over plurality or majority voting. The ensemble's uncertainty was quantified using average probabilities, variance, and entropy metrics. Our results showed that while ResNet-50 was the most accurate single model for OOD images, the ensemble performed even better, correctly classifying all images. Third, we tested model robustness by adding perturbations (filters, rotations, etc.) to new epistemic images from DALL-E or real-world captures. ResNet-50, being the best-performing model, was chosen for this experiment. While it classified 4 out of 5 unperturbed images correctly, it misclassified all of them post-perturbation, indicating a significant vulnerability. These misclassifications, which are clear to human observers, highlight AI models' limitations. Using saliency maps, we identified regions of the images that the model considered important for its decisions. Out-of-Distribution, epistemic uncertainty, image classifiers, uncertainty quantification ## 1 Introduction Artificial intelligence (AI), especially deep neural networks (DNNs), has recorded significant growth in recent years. These tools, which mimic how our brains work, are now used in many applications, from detecting tumors in medical images to analyzing satellite imagery for flood assessment and facial recognition for security systems. But as AI use grows, so do our questions about it. A major concern is how these AI systems behave when they face "unusual situations" - circumstances where data deviates from the norm or has been intentionally altered. Out-of-distribution (OOD) data challenges our understanding of model reliability. It encompasses data samples that, while possibly related to the training set, exhibit unexpected variations or appear in unfamiliar contexts. When confronted with OOD data, models can make high-confidence yet incorrect predictions, often resulting in potentially risky or misinformed decisions [1]. Moreover, as AI is deeply integrated into applications with substantial real-world implications, the stakes become higher. In addition to OOD data, perturbed data represents another vital aspect. Perturbations can arise from different sources, be it environmental changes, intentional adversarial attacks, or inherent variations in the data capturing mechanism. It is crucial to assess how models react, adapt, and sometimes fail under these conditions [2]. Therefore, this research investigates the robustness of DNN architectures when exposed to both OOD and perturbed data. Through this, we aim to understand the inherent limitations and potential improvement areas for AI models in dealing with unusual or modified data. 
Furthermore, we believe that understanding the uncertainty and robustness of these models can pave the way for safer and more reliable AI applications in the future. ### Related Work A common thread in recent literature is the centrality of trust in the outputs provided by AI models. As noted by [3], while in silico models have accelerated drug discovery, the predictions made by these models are largely confined to the limited chemical space covered by their training set. Anything beyond this domain can be risky. However, by quantifying uncertainty, researchers can understand the reliability and confidence level of predictions. Similarly, despite the high performance of machine learning algorithms for skin lesion classification, real-world applications remain scarce due to the lack of uncertainty quantification in predictions, which may lead to misinterpretations [4]. In another study, [1] offered a thorough survey on OOD detection and highlighted the importance of establishing a unified framework. In the same vein, [5] addressed the poor generalization performance of convolutional neural networks (CNNs) in medical image analysis. They found that CNNs often failed to detect adversarial or OOD samples. By employing a Mahalanobis distance-based confidence score, their work indicated improved model performance and robustness. Also, [2] critiqued the state of uncertainty quantification methods in deep learning. They argued that while some OOD inputs can be detected with reasonable accuracy, the current approaches are still not wholly reliable for robust OOD detection. This resonates with [6], who conducted a systematic review of uncertainty estimation in medical image classification. They identified Monte-Carlo Dropout and Deep Ensembles as prevalent methods, highlighting the potential of collaborative settings between AI systems and human experts. ### Methods ### Experiment 1: Classify OOD images with pre-trained models Images were generated via DALL-E using prompts like "snail wearing a graduation cap, holding a diploma", "a chainsaw made out of flowers and leaves", etc. The ground truth labels for the five images are chainsaw, lion, snail, car, and dam. For the classification task, five pre-trained neural networks were selected: ResNet-50, VGG16, DenseNet121, AlexNet, and GoogleNet. This selection captures the diversity of architectures, varying depths, and model complexity, while also representing different milestones in deep learning. For instance, AlexNet is an early forerunner in the field, GoogleNet is based on the inception module [7], ResNet pioneered the use of residual connections [8], VGG16 demonstrated the effectiveness of deeper networks (introducing nonlinearities to learn more complex patterns) [9], and DenseNet leveraged dense convolutional networks, connecting layers in a _feed-forward fashion_ [10]. The five images used for Experiment 1 are shown in Figure 1 below. Figure 1: Out-of-Distribution Images used for Experiment 1. ### Experiment 2: Build an uncertainty quantification ensemble The five neural networks selected earlier served as candidate models for ensemble construction. The members' independent predictions are combined through a committee consensus method. For the consensus method, two options were considered: probabilistic averaging and non-linear combining methods (i.e., majority and plurality voting). 
The models' predictions (Table 1) for one of the five images in Experiment 1 provided a strong rationale for deciding which consensus method to choose. Here, implementing majority voting for the ensemble would not work, as no class has a clear majority (i.e., >50% of predictions). Plurality voting also proves ineffective, as two classes - chainsaw and wheelbarrow - have equal votes (two each). A possible approach is random selection between the two, but this is tantamount to uninformed gambling and undermines the principles of responsible AI. A better approach is to obtain the confidence scores of the predictions (e.g., probabilities). Therefore, given that there are cases where majority or plurality voting may fail and require the use of probabilities, isn't it more efficient to directly implement probabilistic averaging? This reasoning forms the basis for selecting probabilistic averaging as the consensus method for the ensemble. ### Experiment 3: Test the robustness of the best-performing model Four new epistemic images were generated via DALL-E, while one (cat) was photographed in real life. Some perturbations were then added to the images, e.g., image rotation, filters, etc. The original and perturbed images were passed through ResNet-50, the best-performing model. ### Results and Discussion ### Experiment 1 The analysed models recorded different accuracies in classifying the test images. ResNet-50 correctly predicted all five images, DenseNet121 and GoogleNet each correctly classified four out of five, VGG16 classified three images accurately, and AlexNet only managed to correctly classify one image. The specific predictions made by each model are presented in Table 2. ResNet-50's superior accuracy may be linked to its residual connections, which improve gradient flow and feature learning [8]. Though VGG16, DenseNet121, and GoogleNet have varying depths and complexities, they record similar performance, highlighting that deeper connections and inception modules may be playing key roles. The poor performance of AlexNet underscores the advancement in deep learning, with more recent neural networks exhibiting better generalization capabilities. Interestingly, the models' performance is consistent with their ImageNet accuracies: ResNet-50 (79.41%), DenseNet121 (74.98%), VGG16 (74.4%), and AlexNet (63.3%) [11]. In a nutshell, ResNet-50, with the highest ImageNet accuracy, is the only model to classify all test images correctly, while AlexNet, with the lowest accuracy, produced the fewest correct predictions. \begin{table} \begin{tabular}{c c} \hline **Model** & **Prediction** \\ \hline ResNet-50 & chainsaw \\ VGG16 & wheelbarrow \\ DenseNet121 & wheelbarrow \\ AlexNet & greenhouse \\ GoogleNet & chainsaw \\ \hline \end{tabular} \end{table} Table 1: Models' Predictions for Chainsaw ### Experiment 2 The ensemble model classified all five images correctly, demonstrating its superiority over using the models individually. The average probabilities, variance, and entropy scores for each image were obtained to measure the ensemble uncertainty. Table 3 presents a ranking of the images, starting with the one for which the ensemble model exhibits the highest uncertainty. The ensemble model displayed its highest level of uncertainty when classifying the 'Snail' image, having an average probability of 0.223893, a variance score of 0.149749, and an entropy score of 4.408561. 
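A minimal sketch of the probabilistic-averaging ensemble and its uncertainty measures is given below (PyTorch/torchvision). The exact definition of the paper's variance score is not stated; here it is read, as an assumption, as the variance of the members' probabilities for the consensus class.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Pretrained ImageNet classifiers (weights API of torchvision >= 0.13).
nets = [models.resnet50(weights="IMAGENET1K_V1"),
        models.vgg16(weights="IMAGENET1K_V1"),
        models.densenet121(weights="IMAGENET1K_V1"),
        models.alexnet(weights="IMAGENET1K_V1"),
        models.googlenet(weights="IMAGENET1K_V1")]
for net in nets:
    net.eval()

def ensemble_predict(x):
    """Probabilistic averaging over the members' softmax outputs;
    x is a preprocessed batch of shape (B, 3, 224, 224)."""
    with torch.no_grad():
        probs = torch.stack([F.softmax(net(x), dim=1) for net in nets])
    avg = probs.mean(dim=0)                      # consensus distribution
    conf, pred = avg.max(dim=1)                  # consensus class, avg probability
    # Disagreement of the members on the consensus class (one possible
    # reading of the "variance score"):
    var = probs[:, torch.arange(x.shape[0]), pred].var(dim=0)
    # Entropy of the consensus distribution, in bits (max log2(1000) ~ 9.97):
    entropy = -(avg * avg.clamp_min(1e-12).log2()).sum(dim=1)
    return pred, conf, var, entropy
```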
In contrast, the ensemble model showed the least uncertainty when classifying the 'Dam' image, with an average probability of 0.995, a variance score of 0.000045, and an entropy score of 0.043793. For comparison, the maximum possible entropy for the ImageNet dataset with 1000 classes is log2(1000) ≈ 9.97. That is, the closer the entropy score is to this maximum, the higher the ensemble uncertainty. In the rare case of an ensemble model being 100% certain, the entropy score would be 0. ### Experiment 3 Table 4 presents the classification results obtained before and after perturbation was added to the images in Experiment 3. The teddy bear image, rotated by 180 degrees, was misclassified as a cowboy hat. It is likely that the model interpreted the skateboard (which the teddy bear was standing on in the original image) as a hat. Also, adding a filter to the cat image led the model to misclassify it as a bucket. Meanwhile, a human observer would be unlikely to make such an error. The snail image, rotated 180 degrees, was misidentified as chocolate syrup. For this, it is possible that the model misinterpreted the snail and its tentacles as flowing chocolate. In addition, the model likely focused on the chainsaw's engine only when it misclassified the chainsaw image as a padlock. These experiments show that the model relies on particular patterns or features in images for classification, and when such features are obscured or altered, it struggles to classify accurately. \begin{table} \begin{tabular}{l l l l l l} \hline \hline **Model** & **Chainsaw** & **Lion** & **Snail** & **Car** & **Dam** \\ \hline ResNet-50 & chainsaw & lion & snail & station wagon & dam \\ VGG16 & wheelbarrow & lion & slug & station wagon & dam \\ DenseNet121 & wheelbarrow & lion & snail & station wagon & dam \\ AlexNet & greenhouse & chow chow & mousetrap & tent & dam \\ GoogleNet & chainsaw & chow chow & snail & station wagon & dam \\ \hline \hline \end{tabular} \end{table} Table 2: Classification of the five selected images by each model \begin{table} \begin{tabular}{l l l l l} \hline \hline **Ground Truth** & **Ensemble** & **Avg Probability** & **Variance** & **Entropy** \\ \hline Snail & snail & 0.223893 & 0.149749 & **4.408561** \\ Car & station wagon & 0.340443 & 0.101638 & 3.306526 \\ Lion & lion & 0.416775 & 0.277496 & 2.781448 \\ Chainsaw & chainsaw & 0.423301 & 0.316176 & 2.560379 \\ Dam & dam & 0.995382 & 0.000045 & **0.043793** \\ \hline \hline \end{tabular} \end{table} Table 3: Ranking of images, starting with the one with the most ensemble uncertainty ### Analysis of saliency maps for the images Saliency methods offer valuable visual explanations of the inner workings of neural networks by identifying the most critical parts of an input image that contribute to a model's classification decision, thereby improving the model's interpretability. These methods can be gradient-based (e.g., SmoothGrad, Vanilla Grad, GradCAM, etc.) or occlusion- and perturbation-based. The SmoothGrad method was selected for this study. With the saliency maps, we obtained additional helpful information about the parts of the images considered by the model during classification. For example, the lion image's saliency map showed that the model gets the classification right by focusing on the lion itself, while ignoring the water, ship, and coffee in the image. In contrast, the chainsaw image was misclassified as a padlock, with the map revealing that the model concentrated mainly on the chainsaw's engine, neglecting the saw. 
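A compact sketch of the SmoothGrad procedure: input gradients are averaged over several noisy copies of the image. The noise level and sample count below are typical defaults, not values taken from the study.

```python
import torch

def smoothgrad(model, image, target, n=25, sigma=0.15):
    """SmoothGrad saliency: average the input gradient over n noisy copies.
    `image` is a preprocessed tensor of shape (1, 3, H, W)."""
    model.eval()
    grads = torch.zeros_like(image)
    scale = sigma * (image.max() - image.min())  # noise relative to value range
    for _ in range(n):
        noisy = (image + scale * torch.randn_like(image)).requires_grad_(True)
        model(noisy)[0, target].backward()       # gradient of the class score
        grads += noisy.grad
    # Collapse the color channels to one saliency value per pixel.
    return (grads / n).abs().amax(dim=1)
```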
When examining the cat images, the saliency maps indicated that the model correctly classified the original image by focusing on the entire cat. However, in the perturbed image, the model's focus shifted to only a portion of the cat's body, leading to a misclassification as a bucket. Lastly, with the snail wearing a graduation hat, the model incorrectly identified it as a rhinoceros beetle. The saliency map shows that the model might have mistaken the hat's edge for a beetle's horn or hind leg. The saliency maps for the ensemble model's classification of images (in Experiment 2) are presented in Figure 2. The saliency maps for ResNet-50's classification of images (in Experiment 3) are presented in Figure 3. The model's classifications of the original and perturbed images are stated under each image. \begin{table} \begin{tabular}{l l l l} \hline **Ground Truth** & **Original class** & **Perturbation added** & **Perturbed class** \\ \hline Cat & tabby cat & Filter & bucket \\ Chainsaw & chainsaw & Filter & padlock \\ Teddy bear & teddy bear & Rotation & cowboy hat \\ Lion & lion & Rotation & mask \\ Snail & rhinoceros beetle & Rotation & chocolate syrup \\ \hline \end{tabular} \end{table} Table 4: ResNet-50's classification results before and after perturbation Figure 2: Saliency maps for the ensemble's classification (Experiment 2) ### Conclusion and Future Research Image classifiers can be an enabler of the SDGs, e.g., detecting tumors in medical images, analyzing satellite imagery for flood assessment, facial recognition for security systems, etc. [12, 13]. However, they could also act as inhibitors to the SDGs, e.g., bias against certain groups (a Google algorithm classifying African-American men as gorillas [14]), generation of DeepFake images, and vulnerability to adversarial attacks. With minor perturbations, the ResNet-50 model in this study flipped to misclassifying the chainsaw as a padlock and the cat as a bucket. Therefore, it is critical to build image classifiers that are robust to adversarial attacks or distribution shifts. Figure 3: Saliency maps for ResNet-50's classification before and after perturbation Aside from adversarial training, some strategies are emerging in the literature, e.g., defensive distillation, gradient masking, Bayesian uncertainty estimation, and feature squeezing [15]. Future research should consider incorporating some of these strategies to improve model robustness in the face of out-of-distribution data.
2305.10450
Understanding of Normal and Abnormal Hearts by Phase Space Analysis and Convolutional Neural Networks
Cardiac diseases are one of the leading mortality factors in modern, industrialized societies, which cause high expenses in public health systems. Due to high costs, developing analytical methods to improve cardiac diagnostics is essential. The heart's electric activity was first modeled using a set of nonlinear differential equations. Following this, variations of cardiac spectra originating from deterministic dynamics are investigated. Analyzing a normal human heart's power spectra shows that the His-Purkinje network possesses a fractal-like structure. Phase space trajectories are extracted from the time series electrocardiogram (ECG) graph with a third-order Taylor series derivative. In this study, phase space analysis and the Convolutional Neural Network (CNN) method are applied to 44 records from the MIT-BIH database recorded with lead MLII. In order to increase accuracy, a straight line is drawn between the highest Q-R distance in the phase space images of the records. Binary CNN classification is used to determine healthy or unhealthy hearts. With a 90.90% accuracy rate, this model could classify records according to their heart status.
Bekir Yavuz Koc, Taner Arsan, Onder Pekcan
2023-05-16T19:52:40Z
http://arxiv.org/abs/2305.10450v1
# Understanding of Normal and Abnormal Hearts by Phase Space Analysis and Convolutional Neural Networks ###### Abstract Cardiac diseases are one of the leading mortality factors in modern, industrialized societies, which cause high expenses in public health systems. Due to high costs, developing analytical methods to improve cardiac diagnostics is essential. The heart's electric activity was first modeled using a set of nonlinear differential equations. Following this, variations of cardiac spectra originating from deterministic dynamics are investigated. Analyzing a normal human heart's power spectra shows that the His-Purkinje network possesses a fractal-like structure. Phase space trajectories are extracted from the time series electrocardiogram (ECG) graph with a third-order Taylor series derivative. In this study, phase space analysis and the Convolutional Neural Network (CNN) method are applied to 44 records from the MIT-BIH database recorded with lead MLII. In order to increase accuracy, a straight line is drawn between the highest Q-R distance in the phase space images of the records. Binary CNN classification is used to determine healthy or unhealthy hearts. With a 90.90% accuracy rate, this model could classify records according to their heart status. Electrocardiogram, Phase Space, Convolutional Neural Networks. ## 1 Introduction It is well known that the human heart has an electrical activity that can be detected by measuring the potential difference from various points on the body. The electrocardiogram (ECG), which measures electric potential versus time, has three distinct parts. The P-wave is associated with the excitation of the atria, the QRS complex represents the excitation of the ventricles (His-Purkinje network), and the T-wave shows the recovery of the initial electrical state of the ventricles (Figure 1a). Although the ECG presents periodic behavior, some irregularities can be observed in the details of the records. In fact, these irregularities belong to the intrinsic part of heart activity and/or to the random noise that can be found in such systems. These activities in ECG spectra help us to understand cardiac dynamics. Cardiac oscillations are sometimes perturbed by unpredictable contributions, which are part of the cardiac dynamics and, therefore, physiologically important. For these reasons, the heart is not a perfect oscillator and/or cardiac muscles do not continuously vibrate harmonically. For researchers, transforming a sequence of values in time into a geometrical object in space is a widely studied topic [1]. This procedure replaces an analysis over time with an analysis in space. The space is considered the phase space, and the transformation of the time series into such an object is called an embedding (Figure 1b). After this step, the topological properties of the object are determined based on its fractal dimension: not the original time series but the fractal dimension characterizes the properties of the phase space set. The measured dimension of the phase space set determines whether a data set is generated by random or deterministic processes. A large fractal dimension value indicates that the time series was generated randomly: the number of variables and equations is immense, and there is no way to predict future values from earlier parameters. In this case, the multiplicity of interacting factors precludes the possibility of understanding how the underlying mechanisms work. 
On the other hand, a low fractal dimension value indicates that the data is generated by deterministic mechanisms based on a small number of independent variables, which helps to understand how the values in the past can be used to predict the values in the future. The system would be considered chaotic if the mechanism generating the data is deterministic, but the time series behaves as if generated by random processes. A chaotic system is deterministic but not predictable in the long range. In a deterministic approach that is not chaotic, the value of a variable at a given time can always be used to generate the value of that variable in the future. At early times, the heart's electric activity was modeled using a set of nonlinear differential equations [2-4]. Afterward, variations of cardiac spectra originating from deterministic dynamics called "chaotic dynamics", highly sensitive to the heart's initial conditions, were analyzed [5-6]. Analyzing the power spectra of a normal human heart shows that the His-Purkinje network possesses a fractal-like structure [7], and the existence of chaotic behavior of the heart can be found from the phase space picture by evaluating a fractal dimension, \(D\), where phase space trajectories are extracted from the time series graph of the ECG [8]. Lower values of \(D\) indicate more coherent dynamics. If \(D=1\), the oscillation is periodic, and the phase space picture shows a limiting cycle. However, \(D\) becomes larger than one when a limiting cycle is perturbed by random noise [9]. \(D\) has non-integer values greater than two when the system becomes chaotic, i.e., a strange attractor [9]. In this case, although trajectories in time do not converge towards a limiting cycle, they stay in a bounded region in the phase space. It is also shown that a short-term heart rate variability analysis yields a prognostic value in risk stratification, independent of clinical and functional variables [10]. However, the detailed description and classification of dynamical changes using time and frequency measures 
are often insufficient, especially in dynamical diseases characterized by Mackey and Glass [11, 12]. Figure 1: PQRST in ECG and phase space. (a) ECG (b) Phase space. 
Additionally, a morphological descriptor is implemented in the study of [16]. It depends on the sample and amplitude distances, calculated as Euclidean distances between the R-peak and the other four waves of the beat. The study reported that the best parameters were adjusted, and the models were retrained with a training set and tested with a testing set. It is specified that, due to the unbalanced dataset, the mean or overall accuracy did not show the true performance of the classifier. On top of that, it is indicated that if the classifier always assigned the normal class on the testing set, it would be expected to reach at least eighty-nine percent overall accuracy [16]. Nowadays, it is well understood that cardiac diseases are one of the main causes of mortality in modern, industrialized societies, and they cause high expenses in public health systems. Due to these high costs, it is essential to develop analytical methods to improve cardiac diagnostics. In this work, we investigated 44 ECG recordings taken from abnormal and normal hearts, and a CNN was applied for binary classification. We aim to classify healthy and unhealthy hearts by converting the ECG recordings into phase spaces with a third-order-derivative Taylor series. The rest of the paper is organized as follows. Section 2 introduces the algorithms and describes the dataset used in this study. Results obtained from phase space diagrams of healthy and unhealthy records are given and discussed in Section 3. The conclusions, Section 4, complete our paper. ## 2 Methodology ### Phase Space Approach We construct the phase space based on heart voltage values over time (i.e., \(f(t)\)) and their first derivative (i.e., \(\frac{df(t)}{dt}=f^{\prime}(t)\)) (see Figure 1b). We know the function \(f(t)\), but we need to obtain \(f^{\prime}(t)\). When we sketch the diagram of \(f(t)\) versus \(f^{\prime}(t)\), we can construct the phase space graph. As described in [17-19], the simple approximation of \(f^{\prime}(x)\), the first derivative of a function \(f\) at a point \(x\), is specified as the limit of a difference quotient as follows: \[f^{\prime}(x)=\lim_{h\to 0}\frac{f(x+h)-f(x)}{h} \tag{1}\] If \(h>0\) is a finite positive number, then \[f^{\prime}(x)=\frac{f(x+h)-f(x)}{h} \tag{2}\] is called the first-order forward difference approximation of \(f^{\prime}(x)\). A higher-order approximation of \(f^{\prime}(x)\) can be obtained by combining Taylor series expansions. If the values \((x_{i+1},f_{i+1})\), \((x_{i+2},f_{i+2})\) and \((x_{i+3},f_{i+3})\) are known, the first-order derivative \(f_{i}^{\prime}\) can be calculated from the expansions \[f_{i+1}=f_{i}+\frac{f_{i}^{\prime}}{1!}h+\frac{f_{i}^{\prime\prime}}{2!}h^{2}+\frac{f_{i}^{\prime\prime\prime}}{3!}h^{3}+\mathcal{O}(h^{4}) \tag{3}\] \[f_{i+2}=f_{i}+\frac{f_{i}^{\prime}}{1!}(2h)+\frac{f_{i}^{\prime\prime}}{2!}(2h)^{2}+\frac{f_{i}^{\prime\prime\prime}}{3!}(2h)^{3}+\mathcal{O}(h^{4}) \tag{4}\] \[f_{i+3}=f_{i}+\frac{f_{i}^{\prime}}{1!}(3h)+\frac{f_{i}^{\prime\prime}}{2!}(3h)^{2}+\frac{f_{i}^{\prime\prime\prime}}{3!}(3h)^{3}+\mathcal{O}(h^{4}) \tag{5}\] Taking the combination \(18\times\)Eq. (3) \(-\,9\times\)Eq. (4) \(+\,2\times\)Eq. (5) cancels the second- and third-derivative terms and yields the third-order forward difference approximation \[f_{i}^{\prime}=\frac{-11f_{i}+18f_{i+1}-9f_{i+2}+2f_{i+3}}{6h}+\mathcal{O}(h^{3}) \tag{6}\]
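A minimal NumPy sketch of this construction; the synthetic test signal and sampling step below are illustrative placeholders (a real record would be loaded from the MIT-BIH database, e.g., with the wfdb package), not part of the original paper:

```python
import numpy as np

def third_order_forward_diff(f, h):
    """Eq. (6): f'_i = (-11 f_i + 18 f_{i+1} - 9 f_{i+2} + 2 f_{i+3}) / (6h),
    accurate to O(h^3); the stencil looks three samples ahead, so the
    last three samples of the series are dropped."""
    f = np.asarray(f, dtype=float)
    return (-11*f[:-3] + 18*f[1:-2] - 9*f[2:-1] + 2*f[3:]) / (6.0*h)

# Synthetic stand-in for one ECG record (placeholder signal).
t = np.linspace(0.0, 10.0, 2000)
ecg = np.sin(2*np.pi*t) + 0.25*np.sin(6*np.pi*t)

df = third_order_forward_diff(ecg, h=t[1] - t[0])
trajectory = np.stack([ecg[:-3], df], axis=1)  # (f(t), f'(t)) phase space points
```

Plotting `trajectory` column-wise gives the phase space image of the record, as in Figure 1b.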
### Convolutional Neural Networks In order to make classifications in the field of computer vision, Convolutional Neural Networks (CNNs), which are inspired by human visual perception, have been developed [20]. As shown in Figure 2, a CNN can have one or more layers and increases its complexity with high-level features. A CNN has weight coefficients that enable learning, equivalent to classical neural networks. CNNs have achieved great success, especially on problems with images. There are convolutional layers, a pooling layer, fully connected layers, and a training process in a CNN [21]. #### 2.2.1 Convolutional Layer In the convolutional layer, image or video information is extracted from the input data, as shown in Figure 3. Convolution works by learning image attributes while preserving the spatial relationships between pixels. If each image is considered a pixel matrix, convolution reduces data blocks to a smaller size than the given matrix. By moving the "feature detector" or "filter" matrix, the "feature map" or "convolved feature" matrix can be obtained (see Figure 3). This sequence is repeated until the input image is transformed into multiple feature map sets [23]. Figure 2: General Structure of Convolutional Neural Networks [22]. #### 2.2.2 Activation Functions and ReLU In CNNs, the Rectified Linear Unit (ReLU) is frequently used as the activation function. This is the stage after the convolution process. ReLU is used for including nonlinearity in training: it produces zero if it receives a negative value, and returns the value itself if it is positive, as in Figure 4. #### 2.2.3 Pooling Layer When there is a lot of input data, the number of trainable parameters and the computational load can be excessive. This load can be reduced by reducing the width and height while keeping the input data's depth. With the pooling layer, reducing the size causes some information loss but prevents the model from memorizing [25]. Figure 4: Rectified Linear Unit (ReLU). Figure 3: Feature map in the convolution layer [24]. Mostly, max pooling is used in practice. Max pooling takes the highest, i.e., the maximum, value in each specified region of the feature map [26]: \[s_{j}=\max_{i\in R_{j}}a_{i} \tag{10}\] #### 2.2.4 Fully Connected Layers Fully connected layers are the last layers of the CNN model. In order to transform the CNN model into a classical artificial neural network, a flattening process is used: the final (or any) output of the convolutional layers, i.e., a 3-dimensional matrix, is flattened into a vector and transmitted as input to the fully connected layer, as shown in Figure 5. Softmax or sigmoid activation functions are used in fully connected layers for the classification decision. These functions provide non-negative values up to one in the output layer [27]. 
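The building blocks of Sections 2.2.1-2.2.3 can be sketched in a few lines of NumPy; the 8 x 8 test image and the vertical-edge filter are illustrative placeholders, not taken from the paper:

```python
import numpy as np

def relu(x):
    """Zero for negative inputs, identity for positive ones (Fig. 4)."""
    return np.maximum(x, 0.0)

def conv2d(img, kernel):
    """Valid cross-correlation of an image with a small filter matrix,
    producing the feature map of Section 2.2.1."""
    kh, kw = kernel.shape
    out = np.empty((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

def max_pool2d(fmap, size=2):
    """Eq. (10): keep the maximum of each size x size region."""
    h, w = fmap.shape
    fmap = fmap[:h - h % size, :w - w % size]  # crop to a multiple of size
    return fmap.reshape(h // size, size, w // size, size).max(axis=(1, 3))

img = np.random.rand(8, 8)                     # illustrative 8 x 8 input
vertical_edge = np.array([[1., 0., -1.]] * 3)  # illustrative 3 x 3 filter
pooled = max_pool2d(relu(conv2d(img, vertical_edge)))
```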
### Model Training and Loss Function The CNN aims to minimize the number of wrongly classified samples; to achieve that, the weight coefficients are updated. Since the correct tags of the training examples are known during the training of the network, the difference between the tags estimated by the CNN (with the current weight coefficients) and the actual class tags defined in the training set is calculated with the loss function. Figure 5: Flattening in CNN. The parameters of neurons in convolutional and fully connected layers are updated with a gradient descent algorithm. With this algorithm, the class scores of the convolutional network can be calculated according to the tags in the training set for each image [21]. \[w_{ij}^{\,t+1}=w_{ij}^{\,t}-\alpha\,\frac{\partial L}{\partial w_{ij}} \tag{11}\] where \(w_{ij}\) is the weight coefficient of neuron \(j\) in layer \(i\), \(t\) is the iteration step, \(\alpha\) is the learning rate, and \(L\) is the loss function [28]. ### Dataset The MIT-BIH Database contains 48 two-channel, half-hour extracts of ambulatory ECG recordings taken from 47 subjects by the BIH Arrhythmia Laboratory between 1975 and 1979. Each of the 48 recordings is slightly over 30 minutes long. In 46 of the 48 recordings, the subjects were measured with the MLII signal obtained by placing the electrodes on the subject's chest. Records 102 and 104 could not be measured with the MLII signal; they were measured with the V5 signal. These two records are excluded from the database in this study for this reason. Figure 6 presents the records for a healthy (101) and an unhealthy (210) person taken from the MIT-BIH Database. In addition, the records with paced beats, records 107 and 217, were not considered. Figure 6: ECG signal of two different records. r101 Healthy, r210 Unhealthy. In the "Heartbeat classification fusing temporal and morphological information of ECGs via ensemble of classifiers" study, published in 2019 by Mondejar-Guerra et al., ECG-based automatic classification is performed with an ensemble of SVMs. Their database contains nearly 110,000 beats in total, grouped under SVEB (supraventricular ectopic beat), VEB (ventricular ectopic beat), F (fusion), normal, and unknown. In this study, on the other hand, the record data are taken individually, and the CNN methodology is applied to the 44 records. In order to increase the success rate of the classification, a single line was drawn in the phase space images from the Q wave to the highest R wave. ## 3 Results and Discussions Phase space diagrams of healthy and unhealthy records are presented in Figure 7 and Figure 8, respectively, for the CNN analysis, where the Keras and TensorFlow libraries are used in Python to implement the CNN model. Figure 7: Phase Space Diagrams of Healthy Records: 101, 103, 112, 113, 115, 117, 121, 122, 123, 230, and 234. Figure 8: Phase Space Diagrams of Unhealthy Records: 100, 105, 106, 108, 109, 111, 114, 116, 118, 119, 124, 200, 201, 202, 203, 205, 207, 208, 209, 210, 212, 213, 214, 215, 219, 220, 221, 222, 223, 228, 231, 232, and 233. Input tensors of shape 64 x 64 x 3 were fed to the convolutional layers, each followed by a pooling layer, where 'same' padding was used so that the size of the input to the pooling layer is preserved. Dense layers are used to gradually reduce the size of the output vector. Data augmentation is used to create more pictures per epoch (see Figure 9); the zoom range, shear range, and horizontal flip parameters are adjusted with the Keras library. Two convolutional layers with max pooling are used together with the ReLU activation function. One hundred twenty-eight neurons are used in the hidden layer, and one neuron is used for the output layer due to the binary CNN classification. One hundred seventy-five epochs are used to train the model. 
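A minimal Keras sketch consistent with this description; the filter counts, kernel sizes, optimizer, and directory name are assumptions for illustration, while the input shape, the two conv/pool stages, the 128-neuron hidden layer, the single sigmoid output, the 175 epochs, and the zoom/shear/flip augmentation are taken from the text:

```python
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation as described: zoom, shear, and horizontal flip
# (the parameter values here are illustrative, not from the paper).
datagen = ImageDataGenerator(rescale=1.0/255, zoom_range=0.2,
                             shear_range=0.2, horizontal_flip=True)

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Conv2D(32, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2), padding="same"),
    layers.Conv2D(64, (3, 3), activation="relu", padding="same"),
    layers.MaxPooling2D((2, 2), padding="same"),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),   # 128-neuron hidden layer
    layers.Dense(1, activation="sigmoid"),  # one output neuron (binary)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# `train_dir` is a hypothetical folder of phase space images sorted
# into one subfolder per class:
# model.fit(datagen.flow_from_directory(train_dir, target_size=(64, 64),
#                                       class_mode="binary"), epochs=175)
```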
The training set includes 33 records, and the test set consists of 11 records (Table 1). The training set ratio is 75%, and the test set ratio is 25%. The healthy test set includes three different records, and the healthy training set contains eight different records: 27.27% of the healthy records are in the testing set and 72.73% in the training set. The disease test set includes eight disease records, and the training set contains 25; thus, 24.24% of the disease records are in the test set and 75.76% in the training set. The model generally reaches high accuracy on the training and testing sets. Although the 105th epoch shows an outlier in performance, after the 75th epoch the training set accuracy and testing set accuracy increase in parallel (Figure 10). Figure 9: The architecture of the CNN model. It can be seen that the training and testing losses are parallel. The results show that the training ratio and data augmentation prevented the overfitting problem (see Figure 11). Therefore, the model can train itself to classify the training dataset and accurately classify the test dataset. As a result, model accuracy improves in the training and test sets, giving a well-trained model to classify healthy hearts and diseased hearts. \begin{table} \begin{tabular}{|c|c|c|} \hline **SET** & **Training Set** & **Testing Set** \\ \hline Healthy records & 101, 113, 115, 117, 121, 122, 123, 230 & 103, 112, 234 \\ \hline Unhealthy records & 106, 108, 109, 114, 116, 118, 119, 124, 201, 203, 205, 207, 208, 209, 214, 215, 219, 220, 221, 222, 223, 228, 231, 232, 233 & 100, 105, 111, 200, 202, 210, 212, 213 \\ \hline \end{tabular} \end{table} Table 1: Training Set and Test Set Figure 10: Phase space model loss comparison of the training set and testing set in the altered dataset. Among the 11 test records, the model can correctly classify 10, with only one faulty record. The healthy test set accuracy is 100%: among the three healthy records, the model classifies all records correctly. On the other hand, the disease test set accuracy is 87.5%: the model correctly classifies all records except r100 among the eight disease records. This record is classified as a healthy record, although it is an unhealthy heart. Figure 12 summarizes the output of the phase space results with the altered data records. Figure 11: Phase space accuracy comparison of the training set and testing set in the altered dataset. Figure 12: Phase space model outputs of the training set and testing set in the altered dataset. As shown in Table 2, the model classifies healthy records with the best accuracy, reaching 100% in the test set, and classifies disease records with 87.5% accuracy. No overfitting problems are observed in this model training: the training accuracy and testing accuracy are similar, with very high accuracy rates. ## 4 Conclusion After the transformation of the data into phase diagrams, and despite the limited amount of data, this model achieves remarkable accuracy rates. The model can classify healthy and unhealthy heart image records with 90.90% accuracy using only 44 images and the Keras ImageDataGenerator feature. Overfitting problems are otherwise inevitable when the amount of data is this limited. 
The CNN can extract essential features and make valuable distinctions between healthy and unhealthy hearts as a result of transforming the data into phase diagrams. The overfitting problem is overcome, and the model obtains a healthy learning process. The model shows excellent performance in the classification of healthy hearts: the three healthy test records (r103, r112, and r234) are classified correctly. Furthermore, the model learned the essential features of healthy hearts and clearly classified unhealthy hearts. The results show the model's success, with an 87.50% accuracy rate in the classification of unhealthy hearts. \begin{table} \begin{tabular}{|c|c|} \hline **Model** & **Accuracy** \\ \hline **Training Set** & 96.97\% \\ \hline **Test Set** & 90.90\% \\ \hline **Healthy Test Set** & 100.00\% \\ \hline **Disease Test Set** & 87.50\% \\ \hline \end{tabular} \end{table} Table 2: Results of the Model Finally, it may be concluded that more than two convolutional layers can be included to improve the model design and increase the accuracy. In addition, more hidden layers can be added, updating the weights to improve the classification of the outcome.
2301.08918
Improving Signed Propagation for Graph Neural Networks in Multi-Class Environments
Message-passing Graph Neural Networks (GNNs), which collect information from adjacent nodes, achieve dismal performance on heterophilic graphs. Various schemes have been proposed to solve this problem, and propagating signed information on heterophilic edges has gained great attention. Recently, some works provided theoretical analysis that signed propagation always leads to performance improvement under a binary class scenario. However, we notice that prior analyses do not align well with multi-class benchmark datasets. This paper provides a new understanding of signed propagation for multi-class scenarios and points out two drawbacks in terms of message-passing and parameter update: (1) Message-passing: if two nodes belong to different classes but have a high similarity, signed propagation can decrease the separability. (2) Parameter update: the prediction uncertainty (e.g., conflict evidence) of signed neighbors increases during training, which can impede the stability of the algorithm. Based on the observation, we introduce two novel strategies for improving signed propagation under multi-class graphs. The proposed scheme combines calibration to secure robustness while reducing uncertainty. We show the efficacy of our theorem through extensive experiments on six benchmark graph datasets.
Yoonhyuk Choi, Jiho Choi, Taewook Ko, Chong-Kwon Kim
2023-01-21T08:47:22Z
http://arxiv.org/abs/2301.08918v7
# Is Signed Message Essential for Graph Neural Networks? ###### Abstract Message-passing Graph Neural Networks (GNNs), which collect information from adjacent nodes, achieve satisfying results on homophilic graphs. However, their performances are dismal in heterophilous graphs, and many researchers have proposed a plethora of schemes to solve this problem. Especially, flipping the sign of edges is rooted in a strong theoretical foundation and attains significant performance enhancements. Nonetheless, previous analyses assume a binary class scenario and may suffer from confined applicability. This paper extends the prior understandings to multi-class scenarios and points out two drawbacks: (1) the sign of multi-hop neighbors depends on the message propagation paths and may incur inconsistency, (2) it also increases the prediction uncertainty (e.g., conflict evidence), which can impede the stability of the algorithm. Based on the theoretical understanding, we introduce a novel strategy that is applicable to multi-class graphs. The proposed scheme combines confidence calibration to secure robustness while reducing uncertainty. We show the efficacy of our theorem through extensive experiments on six benchmark graph datasets. ## 1 Introduction The increase in graph-structured datasets has led to rapid advancements in graph mining techniques, including random walk-based node embedding and graph neural networks (GNNs). Especially, GNNs provide satisfactory performances in various applications including node classification and link prediction. The main component of GNNs is message-passing [1], where the information is propagated between nodes and then aggregated. Also, the integration of a structural property with the node features enhances the representation and the discrimination powers of GNNs substantially [1, 13, 14]. Early GNN schemes assume network homophily, where nodes of similar attributes make connections with each other based on the selection [12] or social influence theory [14]. Plain GNN algorithms [15, 16] simply perform Laplacian smoothing (a.k.a. low-pass filtering) to receive low-frequency signals from neighbor nodes. Consequently, these methods fail to adequately deal with heterophilous graphs [13, 15, 14], such that even a simple MLP outperforms GNN in some cases. To relieve this problem, a plethora of clever algorithms have been proposed, including the adjustment of edge coefficients [13, 12, 1, 1], aggregation of remote nodes with high similarity [18, 15], and diversified message propagation [16]. However, the majority of prior schemes [12] stipulate certain conditions of advantageous heterophily, and these constraints undermine their generality and applicability. Recently, some bodies of work allow the edge coefficients to be negative [15, 1] to preserve high-frequency signal exchanges between neighbors. Further, from the perspective of gradient flow, [Di Giovanni _et al._, 2022] shows that negative eigenvalues allow the high-frequency signals to dominate during propagation. [1] introduces sheaves to enhance the linear separability of neural networks. Instead of changing the signs of edges, others [11, 13] assign zero-weights to disassortative connections, precluding message diffusion on such edges. Here, there arises a question: _does signed messaging always yield better results than assigning zero-weights on heterophilic edges?_ To answer the above question, we conduct an empirical study and illustrate its results in Figure 1. 
Along with this, we aim to establish theoretical properties to compare their discrimination power. For this, recent studies [12, 13] scrutinize the changes in node features before and after message reception. Here, they provide some useful insights into using signed messages based on the node's relative degree and its homophily ratio. Nonetheless, prior analyses were confined to binary class graphs, which may limit their applicability to generic graphs. In this paper, we extend the theorem to a multi-class scenario, positing that the blind application of signed messages to multi-class graphs may increase the uncertainty of predictions. Throughout this analysis, we suggest employing confidence calibration [1, 16], which is simple yet effective to enhance the quality of predictions. To summarize, our contributions can be described as follows: * Contrary to prior work confined to a binary class, we tackle the signed messaging mechanism in a multi-class scenario. Our work provides fundamental insight into using signed messages and establishes the theoretical background for the development of powerful GNNs. * We conjecture and prove that signed messages escalate the inconsistency between neighbors and increase the uncertainty in predictions. Based on this understanding, we propose a novel uncertainty reduction method using confidence calibration. * We conduct extensive experiments on six benchmark datasets to validate our theorems and show the effectiveness of confidence calibration. ## 2 Related Work **Graph Neural Networks (GNNs).** Under semi-supervised settings, GNNs have shown great potential by utilizing the information of adjacent nodes. Early GNN studies [1, 1] focused on spectral graph analysis (e.g., Laplacian decomposition) in a Fourier domain. However, they suffer from large computational costs as the scale of the graph increases. GCN [13] reduced the overhead by harnessing the localized spectral convolution through the first-order approximation of a Chebyshev polynomial. Another notable approach is spatial-based GNNs [2, 1], which aggregate information in a Euclidean domain. Early spatial techniques became a stepping stone to many useful schemes that encompass relevant remote nodes as neighbors. **GNNs on heterophilous graphs.** Traditional message-passing GNNs fail to perform well in heterophilic graphs [14]. To remedy this problem, recent studies have paid attention to the processing of disassortative edges [1, 15]. They either capture the difference between nodes or incorporate distant but similar nodes as neighbors. For example, H\({}_{2}\)GCN [13] separates ego and neighbors during aggregation. SimP-GCN [12] suggests a long-range adjacency matrix, and EvenNet [11] receives messages from even-hop away nodes only. Similarly, [11] selects neighbors from the nodes without direct connections. Configuring path-level patterns [20] for finding a compatibility matrix [13] has also been proposed. Another school of methodologies either changes the sign of disassortative edges from positive to negative [1, 1, 14, 15] or assigns zero-weights to disassortative edges [1]. Even though these schemes show their effectiveness [1] on binary classes, further investigation may be required before extending their applications to a multi-class scenario. ## 3 Preliminaries In this section, let us first define the notations and then explain the basics of the problem. 
**Notations.** Let \(\mathcal{G}=(\mathcal{V},\mathcal{E},X)\) be a graph with \(|\mathcal{V}|=n\) nodes and \(|\mathcal{E}|=m\) edges. The node attribute matrix is \(X\in\mathbb{R}^{n\times F}\), where \(F\) is the dimension of an input vector. Given \(X\), the hidden representation of node features \(H^{(l)}\) (_l-th_ layer) is derived through message passing. Here, node \(i\)'s feature is the row \(h_{i}^{(l)}\). The structural property of \(\mathcal{G}\) can be represented by its adjacency matrix \(A\in\{0,1\}^{n\times n}\). Also, \(D\) is a diagonal matrix with node degrees \(d_{ii}=\sum_{j=1}^{n}A_{ij}\). Each node has its label \(Y\in\mathbb{R}^{n\times C}\), where \(C\) represents the number of classes. The goal of semi-supervised node classification is to predict the class of unlabeled nodes \(\mathcal{V}_{U}=\{\mathcal{V}-\mathcal{V}_{L}\}\subset\mathcal{V}\) given the partially labeled training set \(\mathcal{V}_{L}\). The global edge homophily ratio (\(B\)) is defined as: \[B\equiv\frac{\sum_{(i,j)\in\mathcal{E}}\mathbb{1}\left(Y_{i}=Y_{j}\right)}{|\mathcal{E}|}. \tag{1}\] Likewise, the local homophily (\(b_{i}\)) of node \(i\) is given as: \[b_{i}\equiv\frac{\sum_{j=1}^{n}A_{ij}\cdot\mathbb{1}\left(Y_{i}=Y_{j}\right)}{d_{ii}}. \tag{2}\] **Empirical Analysis.** Vanilla GNNs provide dismal performances in heterophilic datasets, where most edges connect two nodes with different labels. Consequently, finding proper coefficients for the entire edge set became essential to enhance the overall quality of GNNs. In Fig. 1, we evaluate the node classification accuracy of GCN [13] using six benchmark graphs (the statistical details are shown in Table 1). From the original graph (vanilla GCN), we fabricate two graph variants: one that replaces disassortative edges with -1 (signed GCN), and another that assigns zero-weights to heterophilous connections (zero-weight GCN). As illustrated in Fig. 1, the zero-weight GCN achieves the best performance, followed by the signed GCN. The detailed explanations regarding this phenomenon will be given in Sections 4.2 and 4.3. ## 4 Theoretical Analysis We first discuss the mechanism of Message-Passing Neural Networks (MPNN) and the impact of using signed messages (SS 4.1). Then, we introduce the previous analysis of employing signed propagation on binary class graphs (SS 4.2). Through this, we extend them to a multi-class scenario and point out some drawbacks under this condition (SS 4.3). Finally, we suggest a simple yet effective solution to improve the quality of signed GNNs through the integration of calibration (SS 4.4). Figure 1: Node classification accuracy on six benchmark datasets. Firstly, vanilla GCN utilizes the original graph. The coefficient of heterophilous edges is changed to -1 in signed GCN and to 0 in zero-weight GCN, respectively. ### Message-Passing Neural Networks **Mechanism of Graph Neural Networks (GNNs).** Generally, most of the GNNs employ the strategy of propagation and then aggregation, where the node features are updated iteratively. This can be represented as follows: \[H^{(l+1)}=\phi(\bar{H}^{(l+1)}),\ \ \bar{H}^{(l+1)}=AH^{(l)}W^{(l)}. \tag{3}\] \(H^{(0)}=X\) is the initial vector and \(H^{(l)}\) contains the nodes' hidden representations at the _l-th_ layer. \(\bar{H}^{(l+1)}\) is retrieved through message-passing (\(A\)), and we obtain \(H^{(l+1)}\) after an activation function \(\phi\) (e.g., ReLU). \(W^{(l)}\) are the trainable weight matrices that are shared across all nodes. 
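A minimal NumPy sketch tying together the homophily measure of Eq. (1), the graph variants of the empirical study, and one propagation step of Eq. (3); the symmetric normalization follows the standard GCN form, and the array shapes (an `(m, 2)` integer edge list, a label vector `y`) are illustrative assumptions:

```python
import numpy as np

def edge_homophily(edges, y):
    """Global edge homophily B of Eq. (1): the fraction of edges that
    connect two nodes carrying the same label."""
    return np.mean(y[edges[:, 0]] == y[edges[:, 1]])

def edge_weights(edges, y, mode="signed"):
    """Fabricate the Fig. 1 variants: heterophilous edges get weight -1
    ("signed") or 0 ("zero-weight"); homophilous edges keep +1."""
    hetero_weight = -1.0 if mode == "signed" else 0.0
    same = y[edges[:, 0]] == y[edges[:, 1]]
    return np.where(same, 1.0, hetero_weight)

def gcn_layer(A, H, W):
    """One propagation step of Eq. (3), H' = phi(A_hat H W), with the
    symmetric GCN normalization A_hat = D^{-1/2} (A + I) D^{-1/2}."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    A_hat = A_tilde / np.sqrt(np.outer(d, d))  # entry ij: A_tilde_ij / sqrt(d_i d_j)
    return np.maximum(A_hat @ H @ W, 0.0)      # phi = ReLU
```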
The final prediction is produced by applying cross-entropy \(\sigma(\cdot)\) (e.g., log-softmax) to \(\bar{H}^{(L)}\) and the loss function is defined as: \[\mathcal{L}_{GNN}=\mathcal{L}_{nll}(Y,\widehat{Y}),\ \widehat{Y}=\sigma(\bar{H}^{(L )}). \tag{4}\] The parameters are updated by computing negative log-likelihood loss \(\mathcal{L}_{nll}\) between the predictions (\(\widehat{Y}\)) and true labels (\(Y\)). Most GNN schemes assume that graphs are assortative and they construct the message-passing matrix (\(A\)) with positive values to preserve the low-frequency information (local smoothing) [14]. Consequently, they fail to capture the difference between node features and achieve lower performance on the heterophilous networks [13, 15]. **Meaning of using signed messages.** Recent studies [13, 16, 15] emphasize the importance of high-frequency signals and suggest flipping the sign of disassortative edges from positive to negative to preserve such signals. We first show that they can also contribute to the separation of ego and neighbors. Let us assume an ego node \(i\) and its neighbor node \(j\) is connected with a signed edge. Let us ignore other neighbor nodes to concentrate on the mechanism of signed messaging. Applying GCN [10], we obtain the output of node \(i\) as: \[\widehat{Y}_{i}=\sigma(\bar{H}_{i}^{L})=\sigma(\frac{\bar{H}_{i}^{(L)}}{d_{i} +1}-\frac{\bar{H}_{j}^{(L)}}{\sqrt{(d_{i}+1)(d_{j}+1)}}). \tag{5}\] Assuming that the label of the ego (\(Y_{i}\)) is \(k\), we can calculate the loss (\(\mathcal{L}_{nll}\)) between a true label \(Y_{i}\in\mathcal{R}^{C}\) and a prediction \(\widehat{Y}_{i}\in\mathcal{R}^{C}\) as below: \[\mathcal{L}_{nll}(Y_{i},\widehat{Y}_{i})=-\log(\widehat{y}_{i,k}) \tag{6}\] Since the column-wise components of the last weight matrix \(W^{(L)}\) act as an independent classifier, we prove that the probability of node \(j\) being a class \(k\) (\(\widehat{y}_{j,k}\)), transitions in the opposite to the node \(i\)'s probability (\(\widehat{y}_{i,k}\)) as the training epoch (\(t\)) proceeds: \[\widehat{y}_{j,k}^{(t+1)}<\widehat{y}_{j,k}^{t},\ \widehat{y}_{i,k}^{(t+1)}> \widehat{y}_{i,k}^{t}, \tag{7}\] where \(\widehat{y}_{i,k}^{(t+1)}=\widehat{y}_{i,k}^{t}-\eta\bigtriangledown_{i} \mathcal{L}_{nll}(Y_{i},\widehat{Y}_{i})\) and \(\widehat{y}_{j,k}^{(t+1)}=\widehat{y}_{j,k}^{t}-\eta\bigtriangledown_{j} \mathcal{L}_{nll}(Y_{i},\widehat{Y}_{i})\). Notation \(\eta\) is the learning ratio and a symbol \(\bigtriangledown\) represents a partial derivative of the loss function. _Proof of Eq. 7 is in App. A.1._ **Motivation.** We prove that signed messages contribute to separate disassortative neighbor nodes. Nonetheless, in Fig. 1, signed GCN achieves inferior performance than zero-weighted GCN. In the following section, we analyze this phenomenon by developing our theorems from binary to multi-class scenarios. ### Using signed messages on binary classes In this section, we aim to analyze the movements of node features given three types of graphs (original, signed, and zero-weights). We again employ GCN [10] as a baseline. Here, we assume a binary classification task (\(y_{i}\in\{0,1\}\)) similar to previous work [1, 15] and inherit several useful notations for simplifications: (1) For all nodes \(i=\{1,...,n\}\), their degrees \(\{d_{i}\}\) and features \(\{h_{i}\}\) are i.i.d. random variables. (2) We assume that every class has the same population. 
(3) With a slight abuse of notation, assume \(h^{(0)}=XW^{(0)}\) is the first layer projection of initial node features. (4) Given the label \(y_{i}\), the node feature follows the distribution (\(\mu\) or \(-\mu\)) as: \[\mathbb{E}(h_{i}^{(0)}|y_{i})=\begin{cases}\mu,&\text{if }y_{i}=0\\ -\mu,&\text{if }y_{i}=1.\end{cases} \tag{8}\] Prior work [15] introduces Theorems 4.1, 4.2 using the local homophily (Eq. 2), message passing (Eq. 3), and expectation of node features (Eq. 8). Each theorem below utilizes the original and signed graph, respectively. **Theorem 4.1** (Binary class, vanilla GCN).: _Let us assume \(y_{i}=0\). Then, the expectation after a single-hop propagation \(\mathbb{E}(h_{i}^{(1)})\) is defined as:_ \[\mathbb{E}(h_{i}^{(1)}|y_{i},d_{i})=\frac{(2b_{i}-1)d_{i}^{\prime}+1}{d_{i}+1} \mathbb{E}(h_{i}^{(0)}|y_{i}), \tag{9}\] _where \(d_{i}^{\prime}=\sum_{j\in\mathcal{N}_{i}}\sqrt{\frac{d_{i}+1}{d_{j}+1}}\)._ _Proof is provided in App. A.2._ The generalized version of the above theorem is described in [15], which takes two distributions \(\mu_{0},\mu_{1}\) as: \[h_{i}\sim N(b_{i}\mu_{0}+(1-b_{i})\mu_{1},\frac{1}{\sqrt{d_{i}+1}}). \tag{10}\] Eq. 10 reduces to Eq. 9 when \(\mu_{1}=-\mu_{0}\). **Theorem 4.2** (Binary class, signed GCN).: _If the sign of heterophilous edges is flipped correctly under the error ratio (\(e\)), the expectation is given by:_ \[\mathbb{E}(h_{i}^{(1)}|y_{i},d_{i})=\frac{(1-2e)d_{i}^{\prime}+1}{(d_{i}+1)} \mathbb{E}(h_{i}^{(0)}|y_{i}). \tag{11}\] _Proof can be seen in App. A.3._ Referring to this analysis, we can induce the expectation of zero-weight GCN as below: **Theorem 4.3** (Binary class, zero-weight GCN).: _Similar to the Theorem 4.2, assigning zero weights to the heterophilous edges leads to the following feature distribution:_ \[\mathbb{E}(h_{i}^{(1)}|y_{i},d_{i})=\frac{(b_{i}-e)d_{i}^{\prime}+1}{(d_{i}+1)} \mathbb{E}(h_{i}^{(0)}|y_{i}). \tag{12}\] _Proof can be found in App. A.4._ For all theorems, if the coefficient of \(\mathbb{E}(h_{i}^{(0)}|y_{i})\) is smaller than 1, the node feature moves towards the decision boundary and message passing loses its discrimination power [20]. Based on this observation, we can compare the discrimination powers of signed and zero-weight GCNs. **Corollary 4.4** (Binary class, discrimination power).: _Omitting the overlapping part of Theorems 4.2 and 4.3, their difference, Z, can be induced by the error ratio (e) and homophily (\(b_{i}\)):_ \[Z=(1-2e)-(b_{i}-e)=1-e-b_{i}, \tag{13}\] _where \(0\leq e,b_{i}\leq 1\)._ We visualize \(Z\) in Fig. 2a. Note that the space is half-divided by the plane \(Z=0\) since \(\int_{0}^{1}\int_{0}^{1}(1-e-b)\,de\,db=0\). When \(b_{i}\) and \(e\) are small, \(Z\) becomes positive which indicates that signed GCN outperforms zero-weight GCN and vice versa. Now, let us assume that the error ratio is zero (\(e=0\)) identical to the settings of our previous analysis (Fig. 1). Under this condition, \(Z\)\((=1-b_{i})\) should be non-negative regardless of the homophily ratio (\(0\leq b_{i}\leq 1\)). However, Fig. 1 shows that zero-weight GCN generally outperforms signed GCN (\(Z\leq 0\)) contradicting the Corollary 4.4. Thus, we extend the above theorems to cover a multi-class scenario and point out the limitations in the previous analyses. ### Using signed messages on multiple classes Without loss of generality, one can extend the expectation of node features from a binary (Eq. 
8) to multiple classes through polar coordinates as below: \[\mathbb{E}(h_{i}^{(0)}|y_{i})=(\mu,\frac{2\pi j}{C}),\ j=0,...,C-1. \tag{14}\] Here, \(\mu\) represents the scale of a vector. The direction of the vector is determined by its label \(j\). The above equation also satisfies the origin symmetry under the binary class (\(C=2\)) since \((\mu,0)=-(\mu,\pi)\). Through this equation, we can redefine Thm. 4.1 and 4.2 for multiple class GCNs. **Theorem 4.5** (Multi-class, signed GCN).: _Let us assume the label \(y_{i}=0\). For simplicity, we denote the coordinates of the ego \((\mu,\theta)\) as \(k\), and its neighbors \((\mu,\theta^{\prime})\) as \(k^{\prime}\), where \(\theta=0\) and \(\theta^{\prime}=\frac{2\pi j}{c}\neq 0\). Then, the expectation of \(h_{i}\) is defined as:_ \[\mathbb{E}(h_{i}^{(1)}|y_{i},d_{i})=\frac{(1-2e)\{b_{i}k+(b_{i}-1)k^{\prime}\}d_{i}^{\prime}+k}{d_{i}+1}. \tag{15}\] _Proof is provided in App. B.1._ **Theorem 4.6** (Multi-class, zero-weight GCN).: _Likewise, the \(h_{i}^{(1)}\) driven by zero-weight GCN is:_ \[\mathbb{E}(h_{i}^{(1)}|y_{i},d_{i})=\frac{\{(1-e)b_{i}k+e(1-b_{i})k^{\prime}\}d_{i}^{\prime}+k}{d_{i}+1}. \tag{16}\] _Proof is provided in App. B.2._ Similar to Corollary 4.4, we can compare the separability of the two methods based on their coefficients. **Corollary 4.7** (Multi-class).: _The difference of discrimination power (\(Z\)) between signed and zero-weight GCN in the multi-class case is:_ \[Z=-eb_{i}k+(1-e)(b_{i}-1)k^{\prime} \tag{17}\] _Then, we can induce the conditional statement as below based on the distribution of aggregated neighbors (\(k^{\prime}\)):_ \[Z\in\begin{cases}1-e-b_{i},&\text{if }k^{\prime}=-k\\ -2eb-(1-e-b_{i}),&\text{if }k^{\prime}=k.\end{cases} \tag{18}\] _More details can be found in App. B.3._ Fig. 2b plots \(Z\) for the multi-class case. The above corollary implies that if the distribution of the aggregated neighbors is origin-symmetric (\(k^{\prime}=-k\)), \(Z\) (\(=1-e-b\)) becomes identical to Eq. 13. Under this condition, signed propagation might perform well. However, as \(k^{\prime}\) gets closer to \(k\), its discrimination power degrades (\(Z\) gets smaller), as shown in the blue areas in Fig. 2b, where \(\int_{0}^{1}\int_{0}^{1}(-2eb+e+b-1)\,de\,db=-1\). Intuitively, the probability of being \(k^{\prime}=-k\) may decrease as the number of classes increases, which means that the zero-weight GCN generally outperforms the signed GCN in multi-class graphs. Based on this observation, we point out some limitations below. **Limitation of signed messages in multi-class graphs.** Now, we introduce two types of limitations (P1 and P2) that may degrade the overall quality of GNNs. **(P1) The sign of a message from multi-hop away nodes depends on the paths through which information is transmitted.** GNNs stack multiple layers to utilize the information of multi-hop away neighbors. Here, we assume three nodes \(i,j,k\) that are connected serially, and the signs of their edges are \(s_{ij}\) and \(s_{jk}\), respectively. If \(y_{i}=y_{j}\), then \(s_{ij}=1\), otherwise \(s_{ij}=-1\). To begin, let us define a message from node \(k\) to \(i\) as below: \[h_{i}^{(2)}=h_{i}^{(1)}W^{(1)}+s_{ij}(h_{j}^{(0)}W^{(0)}+s_{jk}h_{k}^{(0)}W^{(0)})W^{(1)}, \tag{19}\] where we omit the degree scaling factor for simplicity. Figure 2: We plot the Z to compare the discrimination power of signed and zero-weight GCNs. The red and blue colored parts indicate the regions where signed GCN and zero-weight GCN have better performance, respectively. 
In the above equation, we can infer that the sign of \(h_{k}^{(0)}\) depends on the multiplication of \(s_{ij}\) and \(s_{jk}\). If the label is binary (e.g., 0 or 1), signed propagation does not degrade the overall quality since \(s_{ij}\cdot s_{jk}=1\) if \(i\) and \(k\) belong to the same class, and \(s_{ij}\cdot s_{jk}=-1\) otherwise. However, the problem occurs when employing multi-class datasets. For example, let us assume the labels of three nodes are \(y_{i}=0,y_{j}=1\), and \(y_{k}=2\). Even though \(i\) and \(k\) are in different classes, \(k\) is trained to be similar to \(i\) since \(s_{ij}\cdot s_{jk}=(-1)\times(-1)=1\). To solve this problem, one has to configure the classes of nodes on the entire paths between two nodes and manually assign their signs. But this incurs high computational costs of \(O(\sum_{l=2}^{L}\sum_{i=1}^{N}\sum_{j=i}^{N}A_{ij}^{l})\). Thus, we propose to solve the second type of limitation (P2) as below. **(P2) Signed propagation increases the uncertainty of the prediction.** Adequate management of uncertainty is vital in machine learning to generate highly confident predictions [11, 12, 13]. This is closely related to the entropy (e.g., information gain [10]), and recent work [10] formulates two types of uncertainties: the _aleatoric_ and _epistemic_, caused by the data and the model, respectively. But here, we rather focus on the conflict evidence (_dissonance_) [13, 14], which ramps up the entropy of outputs. One can easily measure the uncertainty of a prediction (\(\widehat{y}_{i}\)) using Shannon's entropy [13] as: \[E(\widehat{y}_{i})=-\sum_{j=1}^{C}\widehat{y}_{i,j}log_{c}\widehat{y}_{i,j}. \tag{20}\] Furthermore, measuring dissonance (_diss_) is also important [14], as it is powerful in distinguishing Out-of-Distribution (OOD) data from conflict predictions [15] and improving classification accuracy: \[diss(\widehat{y}_{i})=\sum_{j=1}^{C}\left(\frac{\widehat{y}_{i,j}\sum_{k\neq j}\widehat{y}_{i,k}\left(1-\frac{|\widehat{y}_{i,k}-\widehat{y}_{i,j}|}{\widehat{y}_{i,j}+\widehat{y}_{i,k}}\right)}{\sum_{k\neq j}\widehat{y}_{i,k}}\right), \tag{21}\] which is defined only for non-zero elements. We showed that signed messages are helpful for ego and neighbor separation in Eqs. 5 to 7. Now, we posit that neighbors connected with signed edges provoke higher entropy (e.g., \(E(\widehat{y}_{i})\) or \(diss(\widehat{y}_{i})\)) than those connected with a plain or zero-weighted one. **Theorem 4.8**.: _Under multiple classes, the entropy gap between the signed neighbor \(E(\widehat{y}_{s})\) and the plain (or zero-weight) one \(E(\widehat{y}_{p})\) increases in proportion to the training epoch (\(t\))._ \[E(\widehat{y}_{s}^{(t+1)})-E(\widehat{y}_{p}^{(t+1)})>E(\widehat{y}_{s}^{t})-E(\widehat{y}_{p}^{t}). \tag{22}\] _Proof with an example can be seen in App. C._ To summarize, signed messages contribute to the separation of two nodes (Fig. 3(a)), while they also increase the uncertainty of neighboring nodes \(j,k\) that propagate signed information to an ego \(i\) (Fig. 3(b)). To deal with this, we employ confidence calibration, which will be explained below. 
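The uncertainty measures of Eqs. (20)-(21), together with the margin-style penalty of Eq. (23) that the Methodology below introduces, can be sketched in NumPy as follows (rows of `p` are softmax outputs, assumed strictly positive as noted above):

```python
import numpy as np

def shannon_entropy(p):
    """Eq. (20): entropy of each prediction row of p, with a base-C
    logarithm so that the uniform distribution has entropy 1."""
    C = p.shape[1]
    return -np.sum(p * np.log(p), axis=1) / np.log(C)

def dissonance(p_i):
    """Eq. (21): conflict between the belief masses of one prediction
    row p_i; equals 1 for a uniform row and approaches 0 as the row
    approaches one-hot (strictly positive entries assumed)."""
    C = len(p_i)
    total = 0.0
    for j in range(C):
        others = np.delete(p_i, j)
        balance = 1.0 - np.abs(others - p_i[j]) / (others + p_i[j])
        total += p_i[j] * np.sum(others * balance) / np.sum(others)
    return total

def calibration_penalty(p):
    """Margin penalty of Eq. (23), introduced below: the mean of
    (submax - max) over all prediction rows; minimizing it widens the
    gap between the top two class probabilities."""
    s = np.sort(p, axis=1)
    return np.mean(s[:, -2] - s[:, -1])
```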
### Methodology Previously, we singled out two types of limitations. The optimal solution for (P1) is to configure the entire paths between two nodes and assign a proper sign. However, this might be intractable as the size of the graph increases. Instead, we propose a simple yet effective solution that can reduce the uncertainty (P2) through confidence calibration. The proposed method, free from entire path configuration, is cost-efficient and fairly powerful. Calibration is one type of self-training method [12, 13] that acts as a regularization term. Even though it has been shown to be effective for generic GNNs [13], we notice that the performance gain is much greater when integrated with signed methods. Many algorithms can be used for calibration (e.g., temperature and vector scaling [12]). In this paper, our loss function is defined as \[\mathcal{L}_{calib}=\frac{1}{n}\sum_{i=1}^{n}(-max(\widehat{y}_{i})+submax(\widehat{y}_{i})), \tag{23}\] where \(n=|\mathcal{V}_{valid}\cup\mathcal{V}_{test}|\) is the number of validation and test nodes. Our method is quite similar to prior work [13], but we do not utilize the labels of the validation sets, for a fair comparison. As defined above, the loss penalizes similarity between the maximal and sub-maximal values in order to suppress the generation of conflict evidence. Since the calibration only utilizes the outputs \(\widehat{y}\), it has high scalability and is applicable to any type of GNN. Integrating Eq. 23, the overall loss (\(\mathcal{L}_{total}\)) can be defined as below: \[\mathcal{L}_{total}=\mathcal{L}_{GNN}+\lambda\mathcal{L}_{calib}. \tag{24}\] To validate our analysis (SS 5.1), we employ several state-of-the-art methods to obtain \(\mathcal{L}_{GNN}\). The \(\lambda\) is a hyper-parameter that balances the influence of calibration. In Section 5.4, we conduct an experiment to select the \(\lambda\) that achieves the best validation score for each dataset. We describe the pseudo-code and time complexity of our algorithm in _App. D_. \begin{table} \begin{tabular}{c c c c c c c} \hline Datasets & Cora & Citeseer & Pubmed & Actor & Cham. & Squirrel \\ \hline \# Nodes & 2,708 & 3,327 & 19,717 & 7,600 & 2,277 & 5,201 \\ \# Edges & 10,558 & 9,104 & 88,648 & 25,944 & 33,824 & 211,872 \\ \# Features & 1,433 & 3,703 & 500 & 931 & 2,325 & 2,089 \\ \# Labels & 7 & 6 & 3 & 5 & 5 & 5 \\ \hline \end{tabular} \end{table} Table 1: Statistical details of six benchmark datasets Figure 3: (a) In binary class graphs, signed propagation contributes to the separation of nodes (\(i\), \(j\)) and reduces the entropy. (b) In multi-class graphs, the uncertainty of neighboring nodes that are connected with signed edges (\(j\), \(k\)) increases. ## 5 Experiments We conducted extensive experiments to validate our theorems and to compare the performances of our method and the baselines. We aim to answer the following research questions: * **Q1** Does the calibration alleviate the uncertainty issue when integrated with the signed GNNs? * **Q2** Do the signed messages increase the uncertainty of the final prediction? * **Q3** Is the number of classes correlated with the prediction uncertainty? * **Q4** How does the hyper-parameter \(\lambda\) in Eq. 24 affect the performance? **Datasets.** The statistical details of the datasets are in Table 1. (1) _Cora, Citeseer, Pubmed_ (Kipf and Welling, 2016) are citation graphs, where a node corresponds to a paper and edges are citations between them. The labels are the research topics of the papers. (2) _Actor_ (Tang _et al._, 2009) is a co-occurrence graph where actors and co-occurrences in the same movie are represented as nodes and edges, respectively. The labels are five types of actors. (3) _Chameleon, Squirrel_ (Rozemberczki _et al._, 2019) are Wikipedia hyperlink networks. Each node is a web page, and the edges are hyperlinks. Nodes are categorized into five classes based on monthly traffic. 
**Baselines.** We employ several state-of-the-art methods for validation: (1) Plain GNNs: GCN (Kipf and Welling, 2016) and APPNP (Klicpera _et al._, 2018). (2) GNNs for heterophilous graphs: GAT (Velickovic _et al._, 2017), GCNII (Chen _et al._, 2020), H\({}_{2}\)GCN (Zhu _et al._, 2020), and PTDNet (Luo _et al._, 2021). (3) GNNs with signed propagation: GPRGNN (Chien _et al._, 2020), FAGCN (Bo _et al._, 2021), and GGCN (Yan _et al._, 2021). _Details of implementation and baselines are in App. E._ ### Experimental Results (Q1) In Table 2, we describe the node classification accuracy of each method. A symbol (\(\ddagger\)) means that calibration is supplemented to the basic method. Now, let us analyze the results from two perspectives. **Homophily ratio plays an important role in GNNs.** The three citation networks have higher homophily compared to the others. We can see that all methods perform well on homophilic datasets. As homophily decreases, methods that adjust weights depending on assortativity outperform plain GNNs. Similarly, using signed messages (GPRGNN, FAGCN, and GGCN) has shown to be effective here. They achieve notable performance for both homophilic and heterophilic datasets, which means the separation of ego and neighbors (H\({}_{2}\)GCN) is quite important. **Calibration improves the overall quality and alleviates uncertainty.** We apply calibration (\(\ddagger\)) to the signed GNNs (GPRGNN, FAGCN, and GGCN). We also apply calibration to GCN and GAT. The average improvements of the three signed GNNs by calibration are 4.37%, 3.1%, and 3.13%, respectively. The improvements are greater than those of GCN (2.65%) and GAT (1.97%). Additionally, we describe the dissonance (Eq. 21) of each method in a bracket, where the calibrated methods show lower values than the corresponding \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline Datasets & Cora & Citeseer & Pubmed & Actor & Chameleon & Squirrel \\ Hom. (Eq. 
1) & 0.81 & 0.74 & 0.8 & 0.22 & 0.23 & 0.22 & \\ \hline GCN & 79.0 \(\pm\)0.6\% (0.17) & 67.5 \(\pm\)0.8\% (0.29) & 77.6 \(\pm\)0.2\% (0.53) & 20.2 \(\pm\)0.4\% (0.29) & 49.3 \(\pm\)0.5\% (0.19) & 31.7 \(\pm\)0.7\% (0.31) \\ **GCN\({}^{\ddagger}\)** & 81.0 \(\pm\)0.9\% (0.12) & 71.3 \(\pm\)1.2\% (0.14) & 77.8 \(\pm\)0.4\% (0.38) & 21.7 \(\pm\)0.6\% (0.62) & 49.4 \(\pm\)0.6\% (0.25) & 31.5 \(\pm\)0.6\% (0.58) \\ APPNP & 81.3 \(\pm\)0.5\% (0.15) & 68.9 \(\pm\)0.3\% (0.21) & 79.0 \(\pm\)0.3\% (0.42) & 23.8 \(\pm\)0.3\% (0.49) & 48.0 \(\pm\)0.7\% (0.34) & 30.4 \(\pm\)0.6\% (0.69) \\ \hline GAT & 80.1 \(\pm\)0.6\% (0.22) & 68.0 \(\pm\)0.7\% (0.25) & 78.0 \(\pm\)0.4\% (0.45) & 22.5 \(\pm\)0.3\% (0.28) & 47.9 \(\pm\)0.8\% (0.17) & 30.8 \(\pm\)0.9\% (0.27) \\ **GAT\({}^{\ddagger}\)** & 81.4 \(\pm\)0.4\% (0.12) & 72.2 \(\pm\)0.6\% (0.08) & 78.3 \(\pm\)0.3\% (0.39) & 23.2 \(\pm\)1.8\% (0.43) & 49.2 \(\pm\)0.4\% (0.16) & 30.3 \(\pm\)0.8\% (0.40) \\ GCNII & 81.1 \(\pm\)0.7\% (0.08) & 68.5 \(\pm\)1.4\% (0.13) & 78.5 \(\pm\)0.4\% (0.20) & 25.9 \(\pm\)1.2\% (0.43) & 48.1 \(\pm\)0.7\% (0.16) & 29.1 \(\pm\)0.9\% (0.24) \\ H\({}_{2}\)GCN & 80.6 \(\pm\)0.6\% (0.16) & 68.2 \(\pm\)0.7\% (0.22) & 78.5 \(\pm\)0.3\% (0.29) & 25.6 \(\pm\)1.0\% (0.34) & 47.3 \(\pm\)0.8\% (0.19) & 31.3 \(\pm\)0.7\% (0.62) \\ PTDNet & 81.2 \(\pm\)0.6\% (0.24) & 69.5 \(\pm\)0.8\% (0.42) & 78.8 \(\pm\)0.5\% (0.44) & 21.5 \(\pm\)0.6\% (0.33) & 50.8 \(\pm\)0.9\% (0.17) & 32.1 \(\pm\)0.7\% (0.34) \\ \hline GPRGNN\({}^{\ddagger}\) & 82.2 \(\pm\)0.4\% (0.25) & 70.4 \(\pm\)0.8\% (0.43) & 79.1 \(\pm\)0.1\% (0.26) & 25.4 \(\pm\)0.2\% (0.55) & 49.8 \(\pm\)0.7\% (0.25) & 30.5 \(\pm\)0.6\% (0.36) \\ **GPRGNN\({}^{\ddagger}\)** & 84.8 \(\pm\)0.2\% (0.05) & 73.3 \(\pm\)0.5\% (0.06) & 79.9 \(\pm\)0.2\% (0.14) & 27.7 \(\pm\)1.3\% (0.041) & 50.3 \(\pm\)0.3\% (0.20) & 31.0 \(\pm\)0.4\% (0.17) \\ FAGCN & 80.9 \(\pm\)0.5\% (0.15) & 68.8 \(\pm\)0.6\% (0.17) & 79.0 \(\pm\)0.5\% (0.31) & 25.2 \(\pm\)0.8\% (0.66) & 46.5 \(\pm\)1.1\% (0.25) & 30.4 \(\pm\)0.4\% (0.64) \\ **FAGCN\({}^{\ddagger}\)** & 83.5 \(\pm\)0.4\% (0.10) & 73.4 \(\pm\)0.5\% (0.08) & 79.7 \(\pm\)0.2\% (0.20) & 27.6 \(\pm\)0.5\% (0.51) & 48.6 \(\pm\)0.7\% (0.20) & 31.3 \(\pm\)0.5\% (0.52) \\ GGCN & 80.0 \(\pm\)1.2\% (0.38) & 68.7 \(\pm\)1.6\% (0.30) & 78.2 \(\pm\)0.4\% (0.47) & 23.0 \(\pm\)0.5\% (0.47) & 48.5 \(\pm\)0.7\% (0.15) & 30.2 \(\pm\)0.7\% (0.40) \\ **GGCN\({}^{\ddagger}\)** & 82.7 \(\pm\)0.8\% (0.07) & 72.2 \(\pm\)0.4\% (0.05) & 78.7 \(\pm\)0.3\% (0.35) & 24.1 \(\pm\)0.4\% (0.31) & 50.3 \(\pm\)0.4\% (0.08) & 30.8 \(\pm\)0.6\% (0.17) \\ \hline \hline \end{tabular} \end{table} Table 2: Mean node classification accuracy (%) with standard deviation on six datasets. A shadowed grid indicates the best performance. Values in bracket stand for the dissonance defined in Eq. 21 and symbol \(\ddagger\) means that calibration is applied to baseline method Figure 4: Comparison of the dissonance on three graph variants; vanilla GCN, signed GCN, and zero-weight GCN (Q2) vanilla model. To summarize, the results indicate that calibration not only contributes to reducing uncertainty but also improves the accuracy of signed GNNs significantly. ### Correlation of using signed messages and the uncertainty (Q2) To show that signed messages increase uncertainty, we assume three types of graphs for GCN [13] using four datasets. Specifically, we fabricate two graph variants, signed GCN and zero-weight GCN. Here, we remove the randomness for a fair comparison. The results are illustrated in Fig. 
4, where the x-axis is the number of layers and the y-axis represents dissonance. Referring to Thm. 4.8, the uncertainty is higher on signed GCN for all shallow layers. As we stack more layers, the entropy of vanilla GCN increases dramatically on heterophilous datasets, the Chameleon and Squirrel. In other words, plain GCN fails to discriminate the ego and neighbors (over-smoothing) and yields low classification accuracy. ### Case Study (Q3) Theoretical analyses confirm that signed messages increase the uncertainty in multi-class graphs (SS 4.3). They have shown to be effective when \(k^{\prime}\) gets closer to \(-k\) (Eq. 18), but this probability is inversely proportional to the number of classes \(c\). To further analyze this phenomenon, we compare the dissonance of two variants of GCN (signed GCN and zero-weight GCN) by decrement of the number of classes (\(c\)). Specifically, if the original data contains seven classes (e.g., Cora), we remove all the nodes that belong to the rightmost class to generate a graph with six labels. The results are illustrated in Fig. 5. As can be seen, zero-weight GCN (red) tends to have lower dissonance under multiple classes. However, under binary classes (\(c\)=2), signed GCN (blue) shows lower uncertainty with the aid of ego-neighbor separation. In the binary case, zero-weight GCN only utilizes homophilous neighbors and fails to generalize under this condition. ### Hyper-parameter Analysis (Q4) We conduct an experiment to investigate the effect of hyper-parameter \(\lambda\) (Eq. 24) that controls the impact of calibration. We tune the \(\lambda\) from 0 to 1 and choose the one with the best validation score. In Figure 6, we describe the node classification accuracy on four benchmark datasets. The blue line represents calibrated GCN, while others are signed GNNs with calibration. Firstly, we notice that GPRCNN\({}^{\ddagger}\) achieves the best performance on Cora, Citeseer, and Chameleon datasets. The performance gain in signed GNNs is restricted by the original node classification accuracy. For example, in the Squirrel dataset, GCN outperforms all baselines. Proper calibration helps to improve the performances but is limited by the inherent low capability of base models in heterophilous graphs. Further, assigning the same weights to \(\mathcal{L}_{GNN}\) and \(\mathcal{L}_{calib}\) generally downgrades the overall performance, which necessitates careful assignment of \(\lambda\) to achieve the optimal results for each dataset. ## 6 Conclusion In this work, we provide a new theoretical perspective on using signed messages for node embedding under multi-class benchmark datasets. Firstly, we show that signed messages contribute to the separation of heterophilous neighbors in a binary class, which is consistent with conventional studies. Then, we extend previous theorems to a multi-class scenario and point out two critical limitations of using signed propagation: (1) it may incur inconsistency between nodes, and (2) increases the probability of generating conflict evidence. Based on the observations, we employ calibration for signed GNNs to reduce uncertainty and enhance the quality. Through experimental analysis, we show that our method is beneficial for both homophilic and heterophilic graphs. We claim that our theorems can provide insights to develop a better aggregation scheme for future GNN studies. 
Figure 5: By differentiating the number of classes, we compare the dissonance of GCN using two graph variants (Q3)

Figure 6: The effect of hyper-parameter \(\lambda\) in Eq. 24 on the classification accuracy of four calibrated methods (Q4)
2306.11236
Constraining the Woods-Saxon potential in fusion reactions based on the neural network
The accurate determination of the nuclear interaction potential is essential for predicting fusion cross sections and understanding the reaction mechanism, which plays an important role in the synthesis of superheavy elements. In this work, a neural network, combined with calculations of the fusion cross sections via the Hill-Wheeler formula, is developed to optimize the parameters of the Woods-Saxon potential by comparison with experimental values. The correlations between the parameters of the Woods-Saxon potential and the reaction partners, which can be quantitatively fitted to a sigmoid-like function of the mass numbers, are displayed manifestly for the first time. This study could promote the accurate estimation of the nucleus-nucleus interaction potential in low-energy heavy-ion collisions.
Zepeng Gao, Siyu Liu, Peiwei Wen, Zehong Liao, Yu Yang, Jun Su, Yongjia Wang, Long Zhu
2023-06-20T02:02:43Z
http://arxiv.org/abs/2306.11236v2
Constraining the Woods-Saxon potential in fusion reactions based on a physics-informed neural network

###### Abstract

The accurate determination of the nuclear interaction potential is essential for predicting fusion cross sections and understanding the reaction mechanism, which plays an important role in the synthesis of superheavy elements. In this letter, a Physics-Informed Neural Network (PINN) method, which combines neural networks with calculations of the fusion cross sections via the Hill-Wheeler formula, is developed to optimize the parameters of the Woods-Saxon potential by comparison with experimental values. The correlations between the parameters of the Woods-Saxon potential and the reaction partners, which can be quantitatively fitted to a sigmoid-like function of the mass numbers, are displayed manifestly for the first time. This study could promote the accurate estimation of the nucleus-nucleus interaction potential in low-energy heavy-ion collisions.

## 1 Introduction

In recent decades, the study of heavy-ion fusion reactions has garnered growing interest in the field of nuclear physics due to their importance for extending the periodic table of elements, as well as for understanding the interplay between nuclear structure and reaction dynamics [1; 2; 3; 4; 5]. However, the complexity of these reactions presents significant challenges for both experimental and theoretical investigations. One of the key challenges is accurately determining the nuclear interaction potential, which is crucial for investigating the reaction mechanism, but remains an arduous task due to several factors, including the quantum many-body problem and the form of the nuclear force, among other complex influences [6; 7; 8; 9]. As one of the most successful phenomenological forms of the nuclear potential, the Woods-Saxon potential has been widely used to describe the nuclear interaction in heavy-ion fusion reactions, especially for the synthesis of superheavy nuclei. However, its accuracy is limited by the uncertainties in the parameters. Much effort has been made to constrain the parameters in recent years [10; 11; 12; 13]. Nevertheless, due to the complexity of a function with multiple, correlated parameters as well as existing computational limitations, the majority of studies primarily apply constraints on the parameters for specific systems, such as focusing solely on \({}^{12}\)C or \({}^{16}\)O induced reactions [14; 15]. Therefore, it is imperative to accurately and efficiently constrain and optimize the Woods-Saxon potential parameters from a global perspective. Machine learning, being adept at uncovering underlying patterns from vast amounts of data as well as accurately fitting and predicting data, has garnered significant interest across various fields. In the realm of nuclear physics, machine learning methods hold immense potential for addressing challenges in both nuclear theory and experiment [16; 17; 18; 19]. For example, machine learning methods have demonstrated successful applications in predicting nuclear masses [20; 21], charge radii [22; 23], half-lives [24; 25], nuclear energy density functionals [26], and fission product yields [27; 28], as well as facilitating studies on nuclear reactions [29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39]. In recent years, although great progress has been made in applying machine learning methods to address issues in nuclear physics, it is worth noting that most of those studies are purely data-driven approaches.
Data in a real process are governed by physical laws; thus, it is imperative to integrate the fundamental principles of physics into machine learning methods, enabling them to be guided by the laws of physics rather than relying solely on data-driven approaches. In fact, a physics-informed neural network (PINN) model was proposed by M. Raissi _et al._, which has been utilized for solving partial differential equations (PDEs) [40]. PINN is an innovative neural network model that combines physical principles with traditional data-driven approaches. By incorporating physical information as computational constraints, PINNs enhance the accuracy and robustness of predictions made by purely data-driven neural networks. PINNs have also effectively tackled several crucial problems in nuclear physics. For nuclear masses, M. R. Mumpower _et al._ integrated the constraint of the Garvey-Kelson relations existing in the nuclear binding energy into the machine learning model, resulting in remarkably accurate predictions without reference to any underlying theoretical model [41]. Furthermore, F. P. Li _et al._ used a PINN to successfully reproduce the equation of state of strongly-interacting quantum chromodynamics [42]. In Ref. [43], theoretical models of bottomonium production in high-energy nuclear collisions were also characterized by using a PINN. In general, PINNs are capable of achieving objectives that are beyond the reach of conventional neural networks. In this letter, we will explore the capability of the PINN to constrain the parameters of the Woods-Saxon potential model by comparing the calculations of fusion cross sections using the Hill-Wheeler formula with the corresponding experimental values. Subsequently, we will explicitly demonstrate the correlations between the parameters and the colliding partners. The outcomes of this investigation will be highly significant in enhancing our comprehension of heavy-ion fusion reactions and will offer a fresh perspective for exploring nuclear physics conundrums.

## 2 Theoretical method

### Fusion cross section and Woods-Saxon potential

At a given center-of-mass energy, the fusion cross section \(\sigma_{\text{fusion}}\left(E_{\text{c.m.}}\right)\) can be expressed as the sum of the cross sections at each partial wave \(J\):

\[\sigma_{\text{fusion}}\left(E_{\text{c.m.}}\right)=\frac{\pi\hbar^{2}}{2\mu E_{\text{c.m.}}}\sum\limits_{J=0}^{J_{\text{thresh}}}(2J+1)T\left(E_{\text{c.m.}},J\right)\,, \tag{1}\]

where \(\mu\) is the reduced mass, \(T\) is the penetration probability and \(J\) is the incident angular momentum. Note that we have specifically selected relatively light combinations of target and projectile, where the occurrence of quasi-fission is relatively minimal. The Hill-Wheeler formula [44] is a well-known analytical expression for the penetration probability:

\[\begin{split}& T_{\text{HW}}\left(E_{\text{c.m.}},J\right)\\ &=\left\{1+\exp\left[\frac{2\pi}{\hbar\omega(J)}\left(\frac{\hbar^{2}J(J+1)}{2\mu R_{\text{B}}^{2}(J)}+B-E_{\text{c.m.}}\right)\right]\right\}^{-1},\end{split} \tag{2}\]

where \(R_{\text{B}}\) and \(\hbar\omega(J)\) correspond to the position and curvature of the barrier under the \(J^{\text{th}}\) partial wave, respectively. The barrier curvature \(\hbar\omega(J)\) can be calculated using the following formula:

\[\hbar\omega(J)=\left.\sqrt{-\frac{\hbar^{2}}{\mu}\frac{\partial^{2}}{\partial R^{2}}V(R,J)}\right|_{R=R_{\text{B}}(J)}. \tag{3}\]
The interaction potential \(V(R,J)\) consists of a long-range Coulomb potential, a short-range nuclear potential, and a centrifugal potential, which can be expressed as follows [45]:

\[V(R,J)=V_{\text{C}}(R)+V_{\text{N}}(R)+V_{\text{R}}(R,J). \tag{4}\]

The fusion cross sections at sub-barrier, and especially deep sub-barrier, energies strongly depend on the coupling to the structure of the nuclei. We would like to emphasize that, in order to minimize the uncertainties and focus on the parameters of the Woods-Saxon potential, only the fusion reactions occurring above the barrier are investigated. The Coulomb potential and centrifugal potential can be expressed in the forms:

\[\begin{split}& V_{\text{C}}\left(R\right)=\frac{Z_{1}Z_{2}e^{2}}{R},\\ & V_{\text{R}}\left(R,J\right)=\frac{\hbar^{2}J(J+1)}{2\mu R^{2}}.\end{split} \tag{5}\]

The Woods-Saxon nuclear potential can be expressed as [46]:

\[V_{\text{N}}\left(R\right)=\frac{-V_{0}}{1+\exp\left[\left(R-R_{\text{P}}-R_{\text{T}}\right)/a\right]}, \tag{6}\]

with

\[R_{i}=r_{0}A_{i}^{1/3},\quad i=\text{P},\text{T}, \tag{7}\]

where the depth \(V_{0}\), the radius parameter \(r_{0}\), and the diffuseness parameter \(a\) are the major parameters of the Woods-Saxon potential. By changing the parameters of the Woods-Saxon potential, we can modify the position, height, as well as curvature of the potential barrier, which ultimately affects the prediction of the fusion cross sections. Therefore, we can evaluate the optimization results of the parameters by comparing the cross sections predicted via the PINN with experimental values.

### The physics-informed neural network

PINN primarily involves grafting a physical constraint onto the output of a neural network. In this letter, we perform calculations of fusion cross sections and train the neural network by comparing them with experimental data, as illustrated in Fig. 1. Initially, \(Z\) (proton number) and \(A\) (mass number) of the projectile and target nuclei are fed as input features into a multi-output neural network, which then generates the three parameters of the Woods-Saxon potential. Moreover, the nuclear interaction potential at various angular momenta and the associated input parameters required for the Hill-Wheeler formula are obtained. By integrating the capture probability over all angular momenta, the corresponding fusion cross sections can be derived. By comparing the fusion cross sections calculated using the Hill-Wheeler formula with the parameters \(V_{0}\), \(r_{0}\) and \(a\) obtained from the PINN to the experimental values, we can establish a loss function, as expressed below:

\[\mathcal{L}(i)=\left\{\sum_{e}\left[\lg\left(\sigma_{\text{NN}}^{(i)}(e)\right)-\lg\left(\sigma_{\text{exp}}^{(i)}(e)\right)\right]\right\}^{2}, \tag{8}\]

where \(e\) represents the \(E_{\text{c.m.}}\) associated with the experimental value in the \(i\)th projectile-target combination. It is important to note that for each set of parameters of the Woods-Saxon potential, a fusion excitation function is calculated and compared with all experimental values associated with the projectile-target combination. Consequently, the resulting loss is not solely determined by each individual energy, but rather by considering the combined outcomes of all the \(E_{\text{c.m.}}\) of the specific system. Obtaining the loss allows for the training of the multi-output neural network, which will be discussed in the subsequent context.
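To make the forward model of Eqs. (1)-(7) concrete, the following is a minimal numerical sketch of the Hill-Wheeler cross-section calculation. The physical constants, the radial grid, the partial-wave cutoff, the numerical barrier search, and the reading of \(B\) in Eq. (2) as the s-wave barrier height are illustrative assumptions, not specifications from the paper.

```python
import numpy as np

HBARC = 197.327   # hbar*c in MeV*fm (assumed constant)
E2 = 1.44         # e^2 in MeV*fm (assumed constant)
AMU = 931.494     # atomic mass unit in MeV/c^2 (assumed constant)

def potential(R, J, Z1, Z2, A1, A2, V0, r0, a):
    """Total interaction potential V(R, J) = V_C + V_N + V_R, Eqs. (4)-(7)."""
    mu = AMU * A1 * A2 / (A1 + A2)                       # reduced mass (MeV/c^2)
    RP, RT = r0 * A1 ** (1 / 3), r0 * A2 ** (1 / 3)      # Eq. (7)
    VC = Z1 * Z2 * E2 / R                                # Coulomb, Eq. (5)
    VN = -V0 / (1.0 + np.exp((R - RP - RT) / a))         # Woods-Saxon, Eq. (6)
    VR = HBARC ** 2 * J * (J + 1) / (2.0 * mu * R ** 2)  # centrifugal, Eq. (5)
    return VC + VN + VR

def barrier(J, Z1, Z2, A1, A2, V0, r0, a, R=np.linspace(4.0, 20.0, 4000)):
    """Barrier position R_B(J), height, and curvature hbar*omega(J) of Eq. (3)."""
    V = potential(R, J, Z1, Z2, A1, A2, V0, r0, a)
    i = int(np.argmax(V))
    mu = AMU * A1 * A2 / (A1 + A2)
    d2V = np.gradient(np.gradient(V, R), R)[i]           # d^2V/dR^2 at the top
    return R[i], V[i], float(np.sqrt(max(-HBARC ** 2 * d2V / mu, 1e-12)))

def sigma_fusion(Ecm, Z1, Z2, A1, A2, V0, r0, a, Jmax=60):
    """Fusion cross section (mb) from Eqs. (1)-(2) with Hill-Wheeler penetrability."""
    mu = AMU * A1 * A2 / (A1 + A2)
    _, B, _ = barrier(0, Z1, Z2, A1, A2, V0, r0, a)      # s-wave barrier height B
    total = 0.0
    for J in range(Jmax + 1):
        RB, _, hw = barrier(J, Z1, Z2, A1, A2, V0, r0, a)
        x = 2 * np.pi / hw * (HBARC ** 2 * J * (J + 1) / (2 * mu * RB ** 2) + B - Ecm)
        total += (2 * J + 1) / (1.0 + np.exp(np.clip(x, -500.0, 500.0)))  # Eq. (2)
    return 10.0 * np.pi * HBARC ** 2 / (2.0 * mu * Ecm) * total           # fm^2 -> mb

# e.g. sigma_fusion(Ecm=40.0, Z1=8, Z2=28, A1=16, A2=58, V0=80.0, r0=1.18, a=0.65)
```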
Foremost, it is crucial to clarify that \(\mathcal{L}(i)\) is not attributed to the parameters of the Woods-Saxon potential, but rather to the fusion cross sections. Moreover, there is no explicit functional association between the fusion cross sections and the parameters of the Woods-Saxon potential, so it is not feasible to directly obtain the loss of the three parameters using the backpropagation algorithm. However, the qualitative association can be analyzed, and the existence of a positive correlation between them has been established in advance. Therefore, we relate the loss of the Woods-Saxon parameters to the loss of the fusion cross sections and assign suitable weight coefficients in a straightforward manner, as expressed below:

\[\mathcal{L}(i)^{(j)}=\alpha^{(j)}\mathcal{L}(i),\quad j=V_{0},r_{0},a. \tag{9}\]

The weight coefficients, taking into account the empirical ranges of the parameters, were assigned as 1, 0.001, and 0.01 for \(V_{0}\), \(r_{0}\) and \(a\), respectively. It is worth emphasizing that the weight coefficients only weakly influence the behavior of the correlations found in this work. After obtaining the loss of the parameters, the backpropagation algorithm can be employed to train the multi-output neural network.

### Parameter setting of the multi-output neural network

The multi-output neural network consists of three neural networks, wherein each network comprises three hidden layers with 32, 64, and 128 neurons, respectively. "Tanh" and "Adam" were selected as the activation function and optimizer. The learning rate \(\alpha\) started at \(10^{-4}\) and gradually decreased to \(10^{-6}\) during the training process. Furthermore, a total of 343 reactions from the online dataset [47] were chosen for this study, where \(Z_{1}Z_{2}\leq 1600\) and there were at least three experimental data points above the Coulomb barrier. To select the data, the Coulomb barrier can be estimated by the latest fitting formula [48] as:

\[V_{\text{B}}=\frac{Z_{\text{P}}Z_{\text{T}}e^{2}}{0.9782\left(A_{\text{P}}^{1/3}+A_{\text{T}}^{1/3}\right)+4.2833}. \tag{10}\]

30 projectile-target combinations of the dataset were randomly selected as the testing set and the validation set, respectively. Thus, the remaining 283 combinations were utilized as the training set, and the batch size was set to 70 during training.

### Training the neural network

The loss curves of the training set, validation set and testing set, obtained by training on the 283 combinations through multiple optimizations and structural adjustments, are displayed in Fig. 2. The loss function can be defined as the average value of the individual loss function \(\mathcal{L}(i)\) computed for each projectile-target combination:

\[\text{Loss}=\frac{1}{N}\sum_{i=1}^{N}\mathcal{L}(i). \tag{11}\]

The loss for the validation and testing sets initially decreases and then plateaus or slightly increases, whereas the loss for the training set continuously decreases as the epochs increase. Hence, a truncation was employed at epoch = 500 to prevent overfitting and to retain the extrapolation ability of the neural network. Furthermore, one can find that although the loss for the training set is higher than that for the validation set, this does not necessarily imply underfitting, since the loss on the validation set has already plateaued. This discrepancy is mainly due to the impact of data sampling on the training and validation sets.

Figure 1: The architecture of the PINN used in this work. It combines neural networks with the calculation of fusion cross sections via the Hill-Wheeler formula.
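Putting the architecture together, below is a minimal PyTorch sketch of the multi-output network and the weighted parameter loss of Eq. (9). The layer widths, Tanh activation, Adam learning rate, and the weights \(\alpha^{(j)}\) follow the text; the output scaling that keeps \((V_{0},r_{0},a)\) inside assumed empirical ranges is our own addition.

```python
import torch
import torch.nn as nn

class WSHead(nn.Module):
    """One of the three sub-networks: hidden layers of 32, 64, 128 neurons with Tanh."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, 32), nn.Tanh(),    # inputs: Z_P, A_P, Z_T, A_T
            nn.Linear(32, 64), nn.Tanh(),
            nn.Linear(64, 128), nn.Tanh(),
            nn.Linear(128, 1),
        )

    def forward(self, x):
        return self.net(x)

class WoodsSaxonPINN(nn.Module):
    """Multi-output network producing (V0, r0, a) for a projectile-target pair."""
    def __init__(self):
        super().__init__()
        self.heads = nn.ModuleList(WSHead() for _ in range(3))
        # assumed scaling onto empirical ranges: V0 in 75-85 MeV,
        # r0 in 1.10-1.20 fm, a in 0.30-0.90 fm
        self.register_buffer("lo", torch.tensor([75.0, 1.10, 0.30]))
        self.register_buffer("span", torch.tensor([10.0, 0.10, 0.60]))

    def forward(self, x):
        raw = torch.cat([h(x) for h in self.heads], dim=-1)
        return self.lo + self.span * torch.sigmoid(raw)

ALPHA = (1.0, 0.001, 0.01)  # weight coefficients for (V0, r0, a), Eq. (9)

def parameter_losses(loss_i):
    """Distribute the cross-section loss L(i) onto the three parameters, Eq. (9)."""
    return [a * loss_i for a in ALPHA]

model = WoodsSaxonPINN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # decayed to 1e-6 per the text
```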
Figure 2: The loss curves for the training (solid line), validation (dashed line) and testing set (dotted line) as functions of the epoch (training time).

However, this effect is not significant for the subsequent correlation analysis. The extrapolation ability of the PINN can be demonstrated by its predictions of fusion cross sections in the testing set, which has not been through the training process. The predicted cross sections from the PINN have been compared with the experimental data, as shown in Fig. 3. One can see that the extrapolation capability of the model is reliable, as the predicted cross-section values are in good agreement with the experimental data for most of the reaction systems. However, in light projectile-target combinations, there are some discrepancies between the theoretical predictions and the experimental data. This is primarily due to the strong structural effects that exist in such systems, which can lead to the possibility of cluster formation.

## 3 Results and discussion

Our aim is to explore the underlying correlations of the parameters of the Woods-Saxon potential with the reaction systems. We show the parameters of the Woods-Saxon potential constrained by the PINN in Fig. 4. Scatter plots of the parameters against mass number, proton number, as well as the Coulomb parameter (\(\chi=Z_{1}Z_{2}/(A_{1}^{1/3}+A_{2}^{1/3})\)) are shown. The strong correlations indicate that the Woods-Saxon potential parameters can be constrained and characterized by these quantities. It can be seen that all of these parameters fall within a reasonable range. The radius parameter \(r_{0}\) displays a rapid increase from 1.14 fm to approximately 1.18 fm, followed by a slower convergence to around 1.19 fm as the mass number \(A\) and proton number \(Z\) increase. Additionally, the depth parameter \(V_{0}\) exhibits a discernible trend of increasing with increasing \(A\), \(Z\), and \(\chi\), with values falling within the range of roughly 77-83 MeV. The surface diffuseness parameter \(a\) shows an even more pronounced correlation, with a rapid increase followed by a slower rise as \(A\), \(Z\), and \(\chi\) increase, within a range of approximately 0.4-0.8 fm. Note that the distributions of \(V_{0}\) and \(a\) exhibit a certain degree of diffuseness, primarily because these two parameters are shared by the entire combination and are not solely determined by either the projectile or the target nucleus. The Coulomb parameter is a direct measure of the correlation with the projectile-target combination, and it exhibits a strong linear correlation with the Coulomb barrier [48; 49], which directly affects the fusion cross sections and makes it highly relevant to the constrained \(V_{0}\) and \(a\). The above trends of, and ranges for, the three parameters of the Woods-Saxon potential with \(A\), \(Z\), and \(\chi\) are considered physically reasonable [10; 15; 50]. The Woods-Saxon potential describes the nuclear potential in the nucleus-nucleus interaction, and the nuclear force is a short-range, strongly attractive interaction. Among the parameters, \(V_{0}\) controls the strength of the nuclear force and increases with \(A\) in the region of light partners. However, the correlation between \(V_{0}\) and the heavy collision partner is weak. This is because, for large \(A\), the value of the potential is approximately flat in the center and the variation is weak.
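The Coulomb parameter \(\chi\) and the fitted barrier of Eq. (10) are straightforward to evaluate; a small sketch follows, where the value \(e^{2}=1.44\) MeV\(\cdot\)fm is an assumed constant:

```python
def coulomb_parameter(Z1, Z2, A1, A2):
    """Coulomb parameter chi = Z1*Z2 / (A1^(1/3) + A2^(1/3))."""
    return Z1 * Z2 / (A1 ** (1 / 3) + A2 ** (1 / 3))

def coulomb_barrier(ZP, ZT, AP, AT, e2=1.44):
    """Fitted Coulomb barrier V_B in MeV, Eq. (10)."""
    return ZP * ZT * e2 / (0.9782 * (AP ** (1 / 3) + AT ** (1 / 3)) + 4.2833)

# e.g. coulomb_barrier(8, 28, 16, 58) gives roughly 31 MeV for 16O + 58Ni
```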
Currently, the available experimental data mostly involve reactions with stable projectiles and targets. Due to the curvature of the \(\beta\)-stability line, the neutron skin broadens with increasing isospin. Consequently, the trend that the parameters \(a\) and \(r_{0}\) increase with increasing mass and charge number for both light and heavy partners can be seen.

Figure 3: The experimental and predicted fusion excitation functions for the testing set. The solid lines denote the predicted fusion cross sections from the PINN, while the dots denote experimental data. The parameters constrained by the PINN are also displayed.

Based on the qualitative constraints mentioned above, we expected to obtain a more specific, intuitive, and analytical quantitative expression. As a result, the strongly correlated parameters \(a\) and \(r_{0}\) were fitted with the mass numbers of the projectile-target combinations using a sigmoid-like function, as shown in Fig. 5. The good performance of the sigmoid-like fit can also be easily observed in the upper panel of Fig. 5, where the predicted \(a\) is plotted versus the fitting results. The fitting results are better in regions where \(a\) is larger, which is consistent with the broadening of \(a\) observed in Fig. 4. The fitting accuracy for \(r_{0}\) is higher in the lower panel of Fig. 5, mainly because \(r_{0}\) depends only on the individual projectile or target nucleus instead of the combination. The figure displays the fitted parameters and the coefficient of determination (\(R^{2}\)), which provide an indication of the goodness-of-fit. The coefficient of determination for both parameters is greater than 0.96, which suggests that the fitting is highly effective. The final quantitative constraints on \(a\) and \(r_{0}\) can be expressed as follows:

\[a=\frac{\mathrm{c_{1}}}{\mathrm{c_{2}}+e^{\mathrm{c_{3}}A_{1}}+e^{\mathrm{c_{4}}A_{2}}}, \tag{12}\]

where \(\mathrm{c_{1}}=1.0259\) fm, \(\mathrm{c_{2}}=1.3603\), \(\mathrm{c_{3}}=-0.0877\), and \(\mathrm{c_{4}}=-0.0352\), and

\[r_{0}=\frac{\mathrm{d_{1}}}{\mathrm{d_{2}}+e^{\mathrm{d_{3}}A}}, \tag{13}\]

where \(\mathrm{d_{1}}=19.7344\) fm, \(\mathrm{d_{2}}=16.5655\), and \(\mathrm{d_{3}}=-0.0343\). \(A_{1}\), \(A_{2}\), and \(A\) denote the mass numbers of the light partner, the heavy partner, and the total system, respectively.

## 4 Summary

We have successfully incorporated the calculation of the fusion cross sections via the Hill-Wheeler formula into a neural network and constructed a physics-informed neural network framework to optimize the parameters of the Woods-Saxon potential in a more accurate and efficient way. Upon comparison with experimental fusion cross sections, the PINN demonstrated impressive predictive performance on the testing set and effectively constrained the three parameters of the Woods-Saxon potential. Furthermore, the constrained values of \(a\) and \(r_{0}\) exhibit a strong correlation with the projectile-target combinations and are accurately fitted by a sigmoid-like function for the first time. This function can be conveniently used to model heavy-ion fusion reactions. Moreover, PINN methods are expected to become a mainstream direction in nuclear physics. Compared with traditional data-driven methods, a PINN is more efficient, more accurate, and can introduce prior knowledge into physical models to improve their interpretability.
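The fitted expressions (12)-(13) can be evaluated directly; a small sketch with the constants quoted above (the function names are ours):

```python
import math

def diffuseness(A1, A2):
    """Eq. (12): surface diffuseness a (fm); A1, A2 are the light/heavy partner mass numbers."""
    c1, c2, c3, c4 = 1.0259, 1.3603, -0.0877, -0.0352
    return c1 / (c2 + math.exp(c3 * A1) + math.exp(c4 * A2))

def radius_parameter(A):
    """Eq. (13): radius parameter r0 (fm) as a function of mass number A."""
    d1, d2, d3 = 19.7344, 16.5655, -0.0343
    return d1 / (d2 + math.exp(d3 * A))

# e.g. diffuseness(16, 58) is about 0.59 fm, within the quoted 0.4-0.8 fm range
```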
**Declaration of competing interest** The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

**Acknowledgments** This work was supported by the National Natural Science Foundation of China under Grant No. 12075327; Fundamental Research Funds for the Central Universities, Sun Yat-sen University under Grant No. 23lgbj003; the Open Project of Guangxi Key Laboratory of Nuclear Physics and Nuclear Technology under Grant No. NLK2022-01; Guangdong Major Project of Basic and Applied Basic Research under Grant No. 2021B0301030006; Central Government Guidance Funds for Local Scientific and Technological Development, China (No. Guike ZY22096024).

Figure 4: Scatter plots display the correlation between the horizontal and vertical labeled quantities. Each scatter point denotes one projectile-target combination.

Figure 5: Upper panel: the predicted \(a\) from the PINN versus the fitted \(a\). Each scatter point denotes one projectile-target combination. Lower panel: the predicted \(r_{0}\) from the PINN (dots) and the fitted \(r_{0}\) (solid line) as functions of mass number. Each scatter point denotes a projectile or target nucleus. The fitting formula and corresponding coefficients, as well as the coefficient of determination, are also displayed.
2305.06568
Convolutional Neural Networks Rarely Learn Shape for Semantic Segmentation
Shape learning, or the ability to leverage shape information, could be a desirable property of convolutional neural networks (CNNs) when target objects have specific shapes. While some research on the topic is emerging, there is no systematic study to conclusively determine whether and under what circumstances CNNs learn shape. Here, we present such a study in the context of segmentation networks where shapes are particularly important. We define shape and propose a new behavioral metric to measure the extent to which a CNN utilizes shape information. We then execute a set of experiments with synthetic and real-world data to progressively uncover under which circumstances CNNs learn shape and what can be done to encourage such behavior. We conclude that (i) CNNs do not learn shape in typical settings but rather rely on other features available to identify the objects of interest, (ii) CNNs can learn shape, but only if the shape is the only feature available to identify the object, (iii) sufficiently large receptive field size relative to the size of target objects is necessary for shape learning; (iv) a limited set of augmentations can encourage shape learning; (v) learning shape is indeed useful in the presence of out-of-distribution data.
Yixin Zhang, Maciej A. Mazurowski
2023-05-11T05:02:11Z
http://arxiv.org/abs/2305.06568v3
# Convolutional Neural Networks Rarely Learn Shape for Semantic Segmentation

###### Abstract

Shape learning, or the ability to leverage shape information, could be a desirable property of convolutional neural networks (CNNs) when target objects have specific shapes. While some research on the topic is emerging, there is no systematic study to conclusively determine whether and under what circumstances CNNs learn shape. Here, we present such a study in the context of segmentation networks where shapes are particularly important. We define shape and propose a new behavioral metric to measure the extent to which a CNN utilizes shape information. We then execute a set of experiments with synthetic and real-world data to progressively uncover under which circumstances CNNs learn shape and what can be done to encourage such behavior. We conclude that (i) CNNs do not learn shape in typical settings but rather rely on other features available to identify the objects of interest, (ii) CNNs can learn shape, but only if the shape is the only feature available to identify the object, (iii) sufficiently large receptive field size relative to the size of target objects is necessary for shape learning; (iv) a limited set of augmentations can encourage shape learning; (v) learning shape is indeed useful in the presence of out-of-distribution data.

Segmentation · Feature Measurement · Machine Learning · Computer Vision

Since the breakthrough made by AlexNet on ImageNet [19], CNNs have achieved promising results on multiple image classification, segmentation, and object detection tasks [9, 11, 29, 30, 32]. Early research attributed CNNs' successes to the similarity between CNNs and the human visual system. It was presumed that CNNs learn simple low-level features (i.e., edges) in shallow layers and combine them to form complex global shapes until objects can be recognized [18, 23]. Recent data suggest that CNNs utilize only limited shape information in a typical setting [7, 13, 14]. However, a systematic study of the issue is missing. If CNNs indeed do not learn shape, this property is likely to limit their ability to generalize in settings where noise distribution, brightness, or other appearance characteristics change.

Learning shape by CNNs could be of benefit. If we encourage CNNs to use shape information, they may learn and utilize a feature set that is more robust against common appearance disparities between training and testing images [26]. An example of an application area where this might be of particular benefit is medical imaging. Magnetic resonance imaging (MRI) and computed tomography (CT) images maintain the appearance of shapes for a given patient because they are typically acquired at a fixed position relative to body parts with similar 3D geometry. However, each image's texture, noise distribution, and intensity/color can vary strongly depending on a device's hardware specifications and imaging parameters. CNNs that learn shape could rely less on these inconsistent characteristics and focus instead on the more invariant object shape. Shape-learning CNNs may also be more robust against adversarial attacks. Recent research perceives adversarial vulnerability as a property of the dataset rather than of the network. Specifically, adversarial vulnerability arises from the presence of highly predictive but non-robust features, uninterpretable to humans, in the training set [16].
If we consider adversarial vulnerability as arising from uninterpretable features, CNNs trained to utilize human-interpretable shape features may show higher robustness against adversarial attacks.

A systematic study on the data factors that affect the shape learning of CNNs, and on the circumstances under which shapes can reliably be learned in real-life scenarios, is lacking. However, there are some studies that, while offering limited general interpretation of their results, provide some information on specific aspects of the question. Kubilius et al. [20] created a validation set entirely of object silhouettes. They found that CNNs still showed predictive capability on the silhouettes, although their performance dropped by 30%. Additionally, they found that objects in the same super-ordinate category have similar representations, but offered little interpretation of this observation. Baker et al. [1, 2] show that texture plays a more significant role than global shape in an ImageNet-trained CNN's perception of objects, where CNNs barely learn the abstract relations of elements. The authors made no further comments on the cause of this phenomenon. Geirhos et al. [7], using a conflict-cue method, also showed that ImageNet-trained CNNs rely heavily on textures rather than shape for classification. The authors suggest the potential of applying neural style transfer (NST) as an augmentation to encourage shape learning for better testing accuracy and OOD robustness. Ritter et al. [31] demonstrated that Inception and Matching Nets [35] pre-trained on ImageNet tend to generate latent representations with higher similarity for objects with similar shapes than for those with similar colors. However, it would be premature to conclude that CNNs are shape-learning solely from their insensitivity to colors. Hermann et al. [13, 14] show that CNNs with the same model architecture may have vastly different shape-learning behaviors for different training datasets and objectives. These configurations have direct impacts on the feature extraction process of the network. Though innovative in nature, their research ended with the conclusion that the "most decodable feature from an untrained model" will be learned, without a further account of what "decodable" practically means. Hosseini et al. [15] state that proper augmentation, initialization, and the use of batch normalization are essential for networks to learn shape. Li et al. [24] combined training with stylized images, soft labeling, and weighted loss as a heuristic to gain more control over the inductive bias towards texture or shape during model training. Their approach achieved improved performance on the ImageNet-C benchmark [12]. Islam et al. [17] proposed a dimensionality estimation metric and used it to assess a CNN's capacity to represent shape in latent space. Their study answers the questions "whether a CNN is capable of learning shape" and "where those representations are stored in the CNN", but did not address which factors encourage shape learning in a general setting.

While the prior work has limitations, we learn the following. First, neural networks might not learn shape as often as one would intuitively expect. Second, whether a network learns shape may depend on various factors, but it is not entirely clear which factors those are. In this paper, we devise a progressively complex systematic experimental study to answer the general questions related to learning shape.
We do so in the context of segmentation networks (SNs), where shape plays a particularly important role. Specifically, using synthetic datasets, we explore a model's ability to learn shapes by controlling data parameters, model receptive fields, and target object sizes. We also observe the behavioral changes of shape-learning networks when they encounter target objects with variations in brightness and shape. This allows us to better understand shape-learning SNs' responses to different OOD data. Then, we validate our conclusions on three collections of real datasets to ensure that they apply outside of synthetic scenarios. Though we conduct our experiments in the setting of UNet-like networks on segmentation tasks, our conclusions may also apply to image classification and object detection tasks. Segmentation is a special case of image classification (namely, pixel-wise classification); it thus shares a similar workflow with classification. For object detection, a fully convolutional network (a superclass of UNet) is a major building block in the feature extraction head adopted by models specialized for object detection, such as Mask R-CNN and YOLO. The behavior of the fully convolutional network should resemble that of UNet.

Our contributions are briefly summarized as follows:

* We present the first systematic study on the role shape plays in the feature learning of CNNs, based on a clear definition of shape.
* Toward this goal, we propose a metric that quantifies the behavior of a network to assess whether it considers shape during inference.
* We comprehensively analyze the properties of shape-learning segmentation networks.
* We propose and evaluate data augmentation techniques to promote the learning of shape in neural networks.

In the rest of this paper, we list our definitions relevant to shape learning and a metric to measure shape learning in Section 1. We then briefly introduce our data in Section 2. We describe our experimental design and results in Section 3. Finally, we conclude our findings in Section 4.

## 1 Definitions

An _object_ in the context of images is a set of contiguous pixels. Three attributes of objects are of particular interest here: _shape, texture, and structure_. For an object \(\mathbf{O}\),

**Definition 1**.: _"Shape" is the spatial relationship between all boundaries separating pixels in \(\mathbf{O}\) from pixels that are not._

The definition of _texture_ has been well established in the literature [10, 34]. We adopt it as:

**Definition 2**.: _"Texture" is a distribution from which pixels forming similar visual impressions may be sampled._

Intuitively, each pixel can be associated with only one texture. We use the definition of texture to establish the definitions of a _simple object_ and a _complex object_, which will be of importance in this study.

**Definition 3**.: _A "Simple object" is a set containing contiguous pixels with the same texture \(\mathbf{A}\) bounded by pixels with texture \(\mathbf{B}\neq\mathbf{A}\) or by the edge of the image._

**Definition 4**.: _A "Complex object" is an object formed by grouping multiple simple objects following certain spatial relationships._

**Definition 5**.: _"Structure" is the spatial relationship between the simple object(s) in each complex object. Simple objects all have the same null structure._

Given these definitions of shape, texture, and structure, we can rigorously describe the meaning of "shape learning".
Conceptually, a shape-learning network should recognize objects with desired shapes even if the other features (e.g., textures) of the objects change. On the other hand, it should ignore objects with irrelevant shapes but desirable non-shape features. To quantify the extent to which a network's performance matches this behavior, we propose a metric named the Shape Bias Index (SBI). This metric is named after the concept of shape bias in psychology [5, 22], which in computer vision refers to a model's ability to recognize objects by their shape. SBI is assessed empirically by isolating and comparing the effects of different discriminative features. Given a raw dataset, we split it to obtain a training set and a validation set such that the samples in the validation set contain a similar set of possible discriminative features to the training set. We denote this validation set as \(D_{val}=[X,y]\). Based on \(D_{val}\), we then generate probing sets by removing certain discriminative features from \(D_{val}\):

* \(D_{rm}=[X_{rm},\ y_{rm}]\), by altering one or multiple present non-shape discriminative features (e.g., texture changes, noise, inserting uninterested objects...). The specific alteration of non-shape features may differ by experiment.
* \(D_{aff}=[X_{aff},y_{aff}]\), by rotating or flipping \([X,y]\) so that the output image contains objects with different shapes than before (representing affine transformations of shape).
* \(D_{shuf}=[X_{shuf},y_{shuf}]\), by dividing \([X,y]\) into 16 \((4\times 4)\) equally sized patches and shuffling their positions (representing non-affine transformations of shape).

As shown in Fig. 1, with these four datasets we can compute a model's SBI following the equations below. Let \(IOU(D_{foo})\) denote the segmentation performance of the analyzed model on some dataset \(D_{foo}\) (\(D_{foo}\) is a placeholder). The relative performance drop caused by the digression of non-shape discriminative features in \(D_{foo}\) from those in \(D_{val}\) can be quantified as:

\[PD_{foo}=1-\frac{IOU(D_{foo})}{IOU(D_{val})}\]

Figure 1: Pipeline for (left) generating probing images and (right) computing SBI

We may obtain \(PD_{rm}\), \(PD_{aff}\) and \(PD_{shuf}\) in this manner by plugging \(D_{rm}\), \(D_{aff}\) and \(D_{shuf}\) into the place of \(D_{foo}\). Then, we smooth their values through a Softmax function:

\[[\Delta_{rm},\Delta_{aff},\Delta_{shuf}]=SoftMax[PD_{rm},PD_{aff},PD_{shuf}]\]

Our SBI (Shape Bias Index) for the model to be assessed can now be computed as

\[SBI=\frac{\Delta_{aff}+\Delta_{shuf}}{\Delta_{rm}} \tag{1}\]

If the performance drops caused by changes to shape (\(\Delta_{aff}\) and \(\Delta_{shuf}\)) are higher than the drop caused by changes in non-shape features (\(\Delta_{rm}\)), the network is more sensitive to changes in shape than in non-shape features. The value of SBI is then larger than one, and it increases with higher sensitivity to shape changes and lower sensitivity to non-shape changes. The larger the SBI, the more weight a network gives to shape information in its inference process. Since the framework for computing SBI requires a task-specific validation set and leaves broad room for alternative implementations of the probing sets, it is an unnormalized metric. Direct comparisons between SBIs of different models require the same validation set and probing sets. An explicitly defined SBI threshold may be helpful for a qualitative answer to "whether a specific network is shape-learning."
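A minimal sketch of the SBI computation and of a \(D_{shuf}\)-style patch-shuffle probe; the function names are ours. Applied to the IOUs reported later in Table 1, it reproduces, e.g., the shape+texture SBI of 1.238 via shape_bias_index(0.990, 0.288, 0.900, 0.641).

```python
import numpy as np

def shape_bias_index(iou_val, iou_rm, iou_aff, iou_shuf):
    """Compute SBI (Eq. 1) from a model's IOUs on the validation and probing sets."""
    pd = 1.0 - np.array([iou_rm, iou_aff, iou_shuf]) / iou_val  # performance drops
    delta = np.exp(pd) / np.exp(pd).sum()                       # SoftMax smoothing
    d_rm, d_aff, d_shuf = delta
    return (d_aff + d_shuf) / d_rm

def shuffle_patches(img, grid=4, rng=np.random.default_rng(0)):
    """Build a D_shuf-style probe: split into grid x grid patches and permute them."""
    h, w = img.shape[0] // grid, img.shape[1] // grid
    patches = [img[i * h:(i + 1) * h, j * w:(j + 1) * w]
               for i in range(grid) for j in range(grid)]
    order = rng.permutation(len(patches))
    rows = [np.concatenate([patches[order[i * grid + j]] for j in range(grid)], axis=1)
            for i in range(grid)]
    return np.concatenate(rows, axis=0)
```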
## 2 Data

To explore and verify factors that affect the shape learning of segmentation networks, we conducted an extensive set of experiments on a combination of synthetic and real images. We first use synthetic data to explore how the presence of different discriminative features, the size of the model receptive field, and the size of target objects influence shape learning. We then demonstrate the response of shape-learning segmentation networks to an increasing intensity of domain shift (e.g., brightness changes) and shape variations in target objects. In our synthetic dataset, each image contains one or two polygons sized 100-150 pixels in width and height as potential (target or non-target) objects. The objects can be simple or complex. The object(s) in each image are placed on a background with one random texture. The specific textures in each image, both for objects and backgrounds, are drawn from:

1. the Colored Brodatz database (112 textures) for the training set, the validation set \(D_{val}\), and two of the three probing sets, \(D_{aff}\) and \(D_{shuf}\);
2. the CURET database (61 textures) for images in the probing set \(D_{rm}\), so that no seen textures exist in \(D_{rm}\).

After the experiments and analysis on our synthetic datasets, we further validate our findings on three collections of real datasets. The three datasets are:

1. BFGT: Breast and FGT MRI Dataset [4]
2. FISH: A Large-Scale Dataset for Fish Segmentation and Classification [33]
3. LUNG: OSIC Pulmonary Fibrosis Progression [21, 27] + COVID-19 CT scans [8, 25, 28]

BFGT uses a subset of the "Breast and FGT MRI Dataset" [4] containing slices parallel to the axial planes of breast tomosynthesis. The masks for breast regions across different images are of similar shapes. FISH uses a subset of "A Large-Scale Dataset for Fish Segmentation and Classification" [33] containing four species of fish from two genera, with their photos pasted on a wood or marble surface. Each fish is positioned with its longest dimension horizontal, its second longest dimension vertical, and its shortest dimension perpendicular to the surface. Images and masks for LUNG are extracted from a combination of two different repositories, specifically "OSIC Pulmonary Fibrosis Progression" [21, 27] and "COVID-19 CT scans" [8, 25, 28]. Both repositories contain images extracted from the axial view of lung CT scans. The shapes of lungs across different images resemble each other.

We divided each dataset into four partitions: one as the training set, one as the validation set, one noisy dataset created by independently adding six different types of noise to the validation set, and one with naturally occurring OOD data (e.g., images of fish from a different species but a similar family). The latter three versions of the datasets help approximate the IOU(\(D_{rm}\)) used for computing the SBIs of the respective models.

## 3 Experiments

We conduct all our experiments on a UNet-like segmentation network. Unless otherwise specified, the implementation of our network is adapted from [3], with all BatchNorm layers using per-batch statistics. We use cross entropy as the objective function to train on every synthetic dataset, which consists of RGB images of dimension \(320\times 320\). With a combination of synthetic and real data, we performed a comprehensive set of experiments to answer the following questions in the setting of semantic segmentation:

1. Under what circumstances does a network learn shape?
2. Do object size and model receptive field size affect shape learning?
3. How do input perturbations affect the performance of shape-learning networks?
4. Can augmentations encourage shape learning with existing data?

Question 1 is the principal question that we thoroughly examine in this paper. We answer this question using synthetic data representing simple objects (Section 3.1) and complex objects (Section 3.2). Questions 2 and 3 are answered using synthetic data in Sections 3.3 and 3.4, respectively. Finally, question 4 investigates how our systematic findings can be applied to encourage shape learning and improve generalization on real data (Section 3.5).

### Under what circumstances does a network learn shape? (simple object)

We first answer the question of the capability of networks to learn simple objects' shapes. In order to do so, we consider four different scenarios and four corresponding simulated datasets. In these scenarios, different features are available based on which segmentation of the target object can be conducted. The first scenario is a simple one, where each image contains two simple objects in the shape of polygons: one with a discriminative shape (i.e., a shape that can be used to determine that this is the object of interest) throughout the dataset (target object) and the other with a random shape (non-target object). An example training image is shown in Figure 2 (a generation sketch is given below). The presence of other discriminative features such as texture in this initial training set is deliberately avoided. To ensure the model cannot use textures to classify, each object's texture is randomly selected. In real-life scenarios, however, multiple discriminative features with similar predictiveness are usually present simultaneously (e.g., the shape of elephants and the color/texture of elephants' skin). To explore how shape-learning behaviors are affected by discriminative features co-occurring with shape, we design the following additional discriminative features through which target objects in an image can be identified:

1. Texture: The texture of each pixel associated with target objects is drawn from a set containing five selected textures. In other words, a pixel is associated with the target object if and only if it has one of the selected textures. Non-target objects and the background have sets of textures disjoint from those of the target objects.
2. Singular: Instead of two polygons in each image, there is only one polygon of size 100-150 pixels in both dimensions, which is always the target object. Textures are randomly selected for both target and background unless co-occurring with texture.

Based on this definition of features, the remaining three scenarios and corresponding simulated datasets are as follows. The second scenario is the same as the first one, with the exception that texture is also a discriminative feature as described above. The third scenario is the same as the first with the addition of the "singular" discriminative feature as described above. In the fourth scenario, both the texture and singular features are discriminative.

We train a segmentation network on each training set. The performance of those models on \(D_{val}\), \(D_{rm}\), \(D_{aff}\) and \(D_{shuf}\) (the probing datasets used to compute SBI), together with the SBIs, is reported in Table 1. From the last column of Table 1, one can clearly see that only the model from the first scenario, where shape is the only discriminative feature of the target object, was able to show considerable shape-learning behavior (high SBI). Whenever another feature was available to aid the segmentation, the models did not show the tendency to learn shape (low SBI).
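A minimal sketch of how one such training scene could be assembled; here plain colored noise stands in for the Brodatz texture crops, and the polygon sampler, size handling, and fixed target placement are simplifying assumptions of ours.

```python
import numpy as np
from skimage.draw import polygon as draw_polygon

RNG = np.random.default_rng(0)

def random_texture(shape):
    """Placeholder for a randomly drawn texture crop (colored noise here)."""
    base = RNG.uniform(0, 1, size=3)
    return np.clip(base + 0.2 * RNG.standard_normal((*shape, 3)), 0, 1)

def random_polygon(n_vertices=7, size=(100, 150), canvas=320):
    """Vertices of a random polygon roughly in the 100-150 px size range."""
    s = int(RNG.integers(*size))
    cx, cy = RNG.integers(s, canvas - s, size=2)
    ang = np.sort(RNG.uniform(0, 2 * np.pi, n_vertices))
    rad = RNG.uniform(0.5, 1.0, n_vertices) * s / 2
    return cy + rad * np.sin(ang), cx + rad * np.cos(ang)

def make_sample(target_vertices, canvas=320):
    """One 'shape only' scene: textured background, one random non-target
    polygon, and one target polygon with the fixed discriminative shape."""
    img = random_texture((canvas, canvas))
    mask = np.zeros((canvas, canvas), dtype=np.uint8)
    rr, cc = draw_polygon(*random_polygon(), shape=(canvas, canvas))  # non-target
    img[rr, cc] = random_texture((canvas, canvas))[rr, cc]
    rr, cc = draw_polygon(*target_vertices, shape=(canvas, canvas))   # target
    img[rr, cc] = random_texture((canvas, canvas))[rr, cc]
    mask[rr, cc] = 1
    return img, mask
```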
Figure 2: Image of simple objects with only shape being discriminative

Analysis of the performance on the specific datasets (columns 2-5) allows for more detailed insights. First, the model trained on the dataset where shape was the only discriminative feature achieved high segmentation performance on the validation set \(D_{val}\). This demonstrates that a segmentation network can recognize shapes when objects' shapes are the only feature allowed for identifying the object of interest. Recall that \(D_{rm}\) perturbs non-shape features while \(D_{aff}\) and \(D_{shuf}\) perturb target objects' shapes. This model experiences a lower performance drop when tested on \(D_{rm}\) than on \(D_{aff}\) and \(D_{shuf}\). The model's robustness against significant texture changes (as well as brightness changes and noise) and brittleness against shape changes align with our intuition about the properties of a shape-learning network. We hence conclude that segmentation networks are capable of learning discriminative shape information in the target objects for simple objects.

We then observe the behavior of models trained on datasets containing other discriminative features in addition to shape (i.e., shape+texture, shape+singular, and shape+texture+singular). The discriminative features in addition to shape unanimously lower the SBI of each model to a significant extent, despite their similar validation IOUs. This phenomenon indicates that shape learning is not a default behavior of segmentation networks; rather, these findings suggest that they default to learning other discriminative features if present. Practically, the suppression or removal of features unrelated to shape may be vital to encouraging the shape learning of networks in a typical object segmentation task. It is worth noting that the model in scenario 3 (shape+singular) generally segmented both objects in the image. We believe this is because shape+singular was trained on a single foreground object, which may have encouraged the model to identify "any foreground object of similar size to the targets seen in the training set", even if it has a completely different shape.

Based on the results from the experiments in this section, we draw the following conclusions:

1. A UNet-like network does not prioritize learning shape (by default) in the presence of other discriminative features of similar predictive capability (e.g., a unique texture (texture), or being the only object in the image (singular)).
2. A segmentation network learns shape only if the target objects' shapes are the sole discriminative feature.

In the next section, we will show that these conclusions extend to complex objects. We will also explore the interactions between shape and other discriminative features.

### Under what circumstances does a network learn shape? (complex object)

In the previous section, we investigated shape learning in simple objects. We now extend our experiments to the case where targets are complex objects, a setting more closely resembling real-life scenarios. Like in the previous section, we first explore whether there exists a scenario in which a network exhibits shape-learning behavior for complex objects.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Discriminative features & IOU(\(D_{val}\)) & IOU(\(D_{rm}\)) & IOU(\(D_{aff}\)) & IOU(\(D_{shuf}\)) & SBI \\ \hline
Shape only (simp. obj.) & 0.993 & 0.967 & 0.033 & 0.146 & 4.490 \\ \hline
shape+texture & 0.990 & 0.288 & 0.900 & 0.641 & 1.238 \\ \hline
shape+singular & 0.993 & 0.589 & 0.897 & 0.434 & 1.902 \\ \hline
shape+texture+singular & 0.999 & 0.398 & 0.976 & 0.812 & 1.222 \\ \hline
\end{tabular} \end{table}

Table 1: Performance of UNets trained on datasets containing simple objects with different discriminative features on the probing sets, and their SBIs. The raw dataset and \(D_{val}\) are assumed to contain shape+texture+singular as discriminative features.

Figure 3: Image of simple obj. with non-shape features (bounded in red)

Mimicking the previous experiment, we train a segmentation network on a dataset containing only shape as the discriminative feature. We then generate different combinations of discriminative features, add them to the dataset, and observe how they affect the SN's ability to learn and leverage shape information. Since we assume targets are complex objects, each image can now contain more sophisticated features. In addition to the "singular" and "texture" modifications of the simple objects described in Section 3.1, we also introduce "structure" and "semi-singular" as potential discriminative features:

1. Structure: All simple objects within the target have the same shape and are arranged in a fixed relative position to each other. Their respective textures, unless otherwise specified, are randomly sampled.
2. Semi-singular: There are two polygons sized 100-150 pixels in width and height per image. The target object is the only complex object in each image. The structure and texture of the target objects are random.

Note that semi-singular and singular are two features that cannot appear simultaneously, as semi-singular assumes the presence of at least two objects of similar size to the target object in the image. As the combinations of discriminative features become more complicated in this experiment, we adjusted the data generation process for \(D_{val}\), \(D_{rm}\), \(D_{aff}\) and \(D_{shuf}\) to maintain the efficacy of SBI. To avoid confusion, the datasets used to compute SBI in this experiment are denoted as \(D^{c}_{val}\), \(D^{c}_{rm}\), \(D^{c}_{aff}\) and \(D^{c}_{shuf}\), distinguished from those in the prior section by the superscript 'c' (which stands for "complex"). \(D^{c}_{val}\) is generated to simultaneously contain all listed discriminative features in each image, namely singular, texture, and structure. \(D^{c}_{rm}\) is generated to contain both target and non-target complex objects with random structures and with textures randomly drawn from the CURET database; shape is the only discriminative feature through which target objects may be identified. \(D^{c}_{aff}\) is obtained by rotating/flipping the target objects in \(D^{c}_{val}\) while keeping the structure, texture, and approximate size of the rotated object unaltered. A sample from each of the three datasets is shown in Fig. 6. \(D^{c}_{shuf}\) divides \(D^{c}_{val}\) into patches and shuffles them.

Table 2 reports the performance of each network trained on the baseline with only shape being discriminative, and on datasets with different combinations of additional discriminative features. According to Table 2, the networks continue to show a good capability to utilize discriminative shape information for segmentation tasks, as indicated by the high SBI and high IOU on \(D^{c}_{val}\). The model also has a significantly smaller performance drop on \(D^{c}_{rm}\) than on \(D^{c}_{aff}\) or \(D^{c}_{shuf}\).
We thus conclude that segmentation networks are capable of segmenting through shape, regardless of the targets being simple or complex objects.

Figure 4: Image of complex objects with only shape being discriminative

Figure 5: Image of complex objects with non-shape features (bounded in red)

As in the experiments with simple objects, we again see that the features "texture" and "singular" in the training set distract a network from recognizing complex objects by shape. "Semi-singular" appears to have similar effects to "singular" based on its performance on each sub-task. As we consider that shape+singular teaches a network to recognize any object surrounded by pixels with only one texture representing the background, shape+semi-singular may result in a network identifying any complex object surrounded by pixels with only one texture representing the background. The image on the left of Fig. 7 contains two polygons: one with the desired shape and another with an irrelevant shape. The two polygons form two complex objects on the background, with only the one in the lower-right corner being the target. However, the network trained on "+semi-singular" tends to also consider irrelevant objects as targets.

As we include more types of discriminative features in this group of experiments, we also found the following patterns. Let \(\{Shape,F_{0},\dots,F_{i}\}\) be all possible discriminative features present in the training set, where \(\{F_{0},\dots,F_{i}\}\) are non-shape features. In most cases, including \(\{Shape,\ F_{0},\dots,F_{j}\}\cup\{Shape,F_{j+1},\dots,F_{i}\}\) in the training of a network (e.g., shape+singular+texture) leads to a network with an SBI in between those of networks trained on \(\{Shape,F_{0},\dots,F_{j}\}\) and \(\{Shape,F_{j+1},\dots,F_{i}\}\), respectively (e.g., shape+singular and shape+texture). In less common cases, such as shape+singular+structure, multiple non-shape features may collaboratively cause a more severe distraction from shape learning than either can cause alone. However, in no circumstance does a network trained on \(\{Shape,F_{0},\dots,F_{i}\}\) have a higher SBI than both networks trained on \(\{Shape,\ F_{0},\dots,F_{j}\}\) and \(\{Shape,F_{j+1},\dots,F_{i}\}\). This observation reflects that, though shape is generally ignored in the presence of other discriminative features, the other non-shape features tend to be learned collaboratively rather than one being preferred over another. This finding amends the hypothesis made in the study of Hermann et al. [14] that neural networks primarily utilize only one discriminative feature at a time and only in rare cases multiple features simultaneously. Instead, our findings suggest that the simultaneous learning of multiple features is prevalent.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline Discriminative features & IOU(\(D^{c}_{val}\)) & IOU(\(D^{c}_{rm}\)) & IOU(\(D^{c}_{aff}\)) & IOU(\(D^{c}_{shuf}\)) & SBI \\ \hline
Shape only (complex obj.) & 0.996 & 0.933 & 0.266 & 0.278 & 3.940 \\ \hline
shape+singular & 0.987 & 0.522 & 0.857 & 0.614 & 1.628 \\ \hline
shape+semi-singular & 0.983 & 0.637 & 0.758 & 0.572 & 1.952 \\ \hline
shape+texture & 0.985 & 0.254 & 0.958 & 0.756 & 1.098 \\ \hline
shape+structure & 0.998 & 0.474 & 0.589 & 0.466 & 1.896 \\ \hline
shape+singular+texture & 0.979 & 0.456 & 0.948 & 0.740 & 1.358 \\ \hline
shape+singular+structure & 0.999 & 0.463 & 0.825 & 0.646 & 1.526 \\ \hline
shape+texture+structure & 0.999 & 0.270 & 0.905 & 0.690 & 1.180 \\ \hline
shape+texture+singular+structure & 1.000 & 0.458 & 0.928 & 0.761 & 1.356 \\ \hline
\end{tabular} \end{table}

Table 2: Performance of UNets trained on datasets containing complex objects with different discriminative features on the probing sets, and their SBIs

Figure 6: Image of complex objects with only shape being discriminative

Figure 7: The network trained on shape+singular shows a significant failure to exclude objects with undesired shapes.

Based on the results from the experiments on complex objects, we extend the arguments made in Section 3.1 to both simple and complex objects. We also draw additional conclusions:

1. A segmentation network does not prioritize learning shape in the presence of other discriminative features with similar prediction capabilities. This statement holds for both simple and complex objects.
2. Though shape learning is distracted in the presence of other discriminative features, the other discriminative features tend to be learned simultaneously if multiple of them are present.

### How do object size and model receptive field affect shape learning?

In the previous sections, we learned that networks are capable of learning shape, though doing so requires the absence of undesired discriminative features. We now explore how the size of target objects and model receptive fields may affect shape learning. Geirhos et al. [7] reported that restricting the receptive field size of ResNet would prevent it from converging on Stylized ImageNet. Model variants with larger receptive fields seem to suffer less from this phenomenon. This observation suggests a potential correlation between receptive field size and the tendency or capability of CNNs to leverage shape information. We further expand on this topic by systematically analyzing how the size of an object of interest and the model receptive field affect shape learning. By receptive field, we mean the maximum possible number of pixels that can be involved in computing a pixel at the bottleneck of the network. We resized the training set of the model shape only (simple obj.) mentioned in Fig. 2 to 11 different sizes ranging from \(160\times 160\) to \(480\times 480\) with an interval of 32 pixels. We pair these 11 differently sized training sets each with two UNets of slightly different architecture. Specifically, the CNN layers in the encoder of one UNet (denoted UNet_140) all have dilation=1, while the other UNet (denoted UNet_210) sets the CNN layers in encoder blocks 2, 3, and 4 to dilation=2. This allows an expansion of the receptive field size without changing the total number of parameters of the models. The formulas for computing the receptive field sizes are:

* UNet_140: ((((1+4)*2+4)*2+4)*2+4)*2+4 = 140
* UNet_210: ((((1+4)*2+9)*2+9)*2+9)*2+4 = 210

We train each [model variant, input size] pair with an SGD optimizer (lr = 1e-2, weight_decay = 1e-5, momentum = 0.9) until the models fully converge. As shown in Fig. 8, for all input sizes, UNet_210 has a larger SBI than UNet_140.
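The two expressions above follow a simple recurrence: start from one bottleneck pixel, then for each encoder block (deepest first) double for the 2× resampling and add the block's convolutional growth. A small sketch that reproduces both numbers (the function name and parameterization are ours):

```python
def encoder_receptive_field(block_growth, bottleneck_growth=4):
    """Receptive field of a UNet-like encoder, mirroring the formulas above.

    block_growth: per-block additive growth from the convolutions, listed from
    the deepest encoder block to the shallowest (4 for dilation=1 blocks,
    9 for the dilated blocks of UNet_210 per the expressions in the text).
    """
    rf = 1 + bottleneck_growth
    for g in block_growth:
        rf = rf * 2 + g  # 2x down/up-sampling, then the block's convolutions
    return rf

assert encoder_receptive_field([4, 4, 4, 4]) == 140  # UNet_140
assert encoder_receptive_field([9, 9, 9, 4]) == 210  # UNet_210
```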
Moreover, a clear and sudden drop in SBI is observed for UNet_140 once the input size exceeds 382, and for UNet_210 once it exceeds 420, the points at which the object size inside each image becomes larger than the model's receptive field.

Figure 8: SBIs for UNets with different receptive field sizes when trained on images containing target objects of different sizes. The p-value listed is the p-value of the F-test on the linear regression.

A statistically significant (p-value < 0.05 for the F-test) decreasing trend is observed for both UNet_140 and UNet_210 as the input image size increases. We hence conclude that models with small receptive fields may have difficulty learning shape. Increasing the model receptive field and/or reducing the ROI size may alleviate this phenomenon. This gain in SBI diminishes once the ratio of receptive field size to object size exceeds a certain threshold. Although increasing the model receptive field and/or reducing the ROI size may encourage shape learning, both operations have drawbacks. Increasing a model's receptive field usually introduces more parameters. More parameters may lead to longer training times and a higher risk of overfitting. Using dilated convolutional layers is a potential solution for increasing the model receptive field without changing the number of parameters. Reducing the ROI size reduces image resolution, resulting in the loss of potentially useful shape information. In practice, hyperparameter tuning is an essential step for obtaining a shape-learning segmentation network with high segmentation performance (e.g., IOU, DSC) on its designated task.

### How do input perturbations affect a shape-learning segmentation network?

In the previous sections, we explored how the presence of non-shape discriminative features, varying receptive field size, and varying object size can affect a network's shape learning. We now investigate how a shape-learning network responds to feature variations that may be common in a real-life scenario. We still use synthetic data to approach this question and control the degree of feature variation. Specifically, we observe how a shape-learning network responds to brightness changes and shape variations.

#### 3.4.1 Perturbations on brightness

Brightness change is one of the most prevalent domain shifts in real life and may significantly alter the textures of objects in images. Zhe et al. [36] demonstrated that models trained on data with limited variability in object brightness have poor generalizability to data with other brightness levels. Following common intuition, if a network learns shape, it should be insensitive to changes in textures and hence robust to changes in brightness.

Figure 9: IOU of the shape-learning UNet on datasets with different combinations of foreground and background brightness changes.

#### 3.4.2 Perturbations on shape and textures

In Section 3.1, we show a segmentation network capable of utilizing unvaried shape information in the absence of other discriminative features. However, in real-life scenarios, two different objects rarely have identical shapes even if they are in the same category. If a shape-learning network shows reasonable performance only when an identical shape appears, it has little advantage over symbolic methods such as template matching. We hence explore how a network trained on identical shapes responds to increasing degrees of shape variation. We also analyze the effects of including shape variation in the training set on an SN's shape-learning behavior.
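As a concrete instance of the brightness probes of Section 3.4.1, foreground and background intensities can be shifted independently given the ground-truth mask; a minimal sketch, in which the function name and parameterization are our own:

```python
import numpy as np

def shift_brightness(image, mask, fg_delta=0.0, bg_delta=0.0):
    """Shift foreground (mask == True) and background brightness separately;
    image is a float array with values in [0, 1]."""
    out = image.astype(np.float32).copy()
    out[mask] += fg_delta
    out[~mask] += bg_delta
    return np.clip(out, 0.0, 1.0)

# Example: brighten only the background, leaving the object texture intact.
# probe = shift_brightness(img, gt_mask, fg_delta=0.0, bg_delta=0.2)
```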
To create such shape variations, we take the \(D_{val}\) used for simple objects and apply increasing degrees of elastic deformation to the original polygon masks, as shown in Fig. 11. We then denote the datasets derived from \(D_{val}\) as \(\{D_{val}^{1},\dots,D_{val}^{10}\}\), where the superscript denotes the degree of shape variation. We test how the model trained on shape only (simple obj.), presented in Section 3.1, responds to such variations. Then, we train three additional models on three modified versions of shape only (simple obj.), each with a different degree of shape variation injected. After training, all four networks, trained with different extents of shape variation (from no variation to some variation), show strictly decreasing IOUs as the degree of elastic deformation increases. According to Fig. 10, models trained with little shape variation have much steeper decreases in performance when tested on perturbed input. These results show that shape-learning networks develop a tolerance to shape variation, and that the extent of this tolerance is governed by the amount of shape variation contained in the training set.

Figure 10: Performance of shape-learning UNets trained with training sets containing different degrees of shape variation on shape variations in validation sets

### Validation of findings on real data and exploration of augmentations

In the previous subsections, we explored factors affecting the shape learning of networks with synthetic data. In this section, we demonstrate how those findings can be applied to real-life tasks. Specifically, we assess how well our conclusions on synthetic data align with real-world datasets combined with data augmentation, a common technique used in machine learning to improve the generalizability of models on OOD data. Additionally, we evaluate augmentation methods to encourage shape learning when shape is a good discriminative feature in the dataset. Our selection of potential augmentation methods is inspired by the conclusions drawn from the systematic study with the synthetic data regarding the conditions under which segmentation networks learn shape. We conduct our experiments on three real datasets that contain targets of similar shape, as listed in Section 2. Each dataset is split into four components:

1. Training set
2. Validation set (serves as \(D_{val}\)), which has a similar appearance to the training set. The model's performance on this dataset is denoted IOU\({}_{val}\)
3. Noisy data (serves as part of \(D_{rm}\)) obtained by separately adding six different types of noise to \(D_{val}\). The six types of noise are Gaussian noise, shot noise, impulse noise, defocus blur, pixelate, and motion blur. Note that though adding noise may also be a common technique in data augmentation, we did not inject noise into the training data to prevent data leakage. The model's performance on this dataset is denoted IOU\({}_{noisy}\)
4. OOD data (serves as part of \(D_{rm}\)) with naturally occurring domain shifts, which contains targets of similar shape to those in the training set, but with a different visual appearance. The model's performance on this dataset is denoted IOU\({}_{OOD}\)

Recall that three probing sets in addition to the validation set are needed to compute the SBI.
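A minimal sketch of the elastic-deformation protocol above, in the spirit of the classic random-displacement-field formulation; alpha (intensity) and sigma (smoothness) are illustrative parameters, not necessarily the exact settings used here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def elastic_deform_mask(mask, alpha, sigma, seed=None):
    """Warp a binary mask with a smoothed random displacement field;
    a larger alpha gives a higher degree of shape variation."""
    rng = np.random.default_rng(seed)
    dx = gaussian_filter(rng.uniform(-1, 1, mask.shape), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, mask.shape), sigma) * alpha
    ys, xs = np.meshgrid(np.arange(mask.shape[0]),
                         np.arange(mask.shape[1]), indexing="ij")
    coords = np.stack([ys + dy, xs + dx])
    warped = map_coordinates(mask.astype(np.float32), coords, order=1)
    return warped > 0.5
```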
Figure 11: Ground-truth masks of target objects after applying different intensities of elastic deformation

The other probing datasets \(D_{aff}\) and \(D_{shuf}\) are obtained following the same workflow as shown in Section 1. For each training set, we apply the same group of augmentations, which may be perceived as perturbations to certain discriminative features:

1. Color Jitter
2. Separate Color Jitter
3. Neural Style Transfer (NST)
4. Negative Insertion
5. Random Resized Crop
6. Random Crop Reflect

Most of the augmentations are used in an attempt to suppress the learning of texture features. Specifically, Color Jitter changes the brightness, contrast, and saturation of images. Separate Color Jitter is an extension of Color Jitter which uses different parameters for the targets and the rest of the image. Neural Style Transfer (NST) [6] changes the Gram matrix of feature vectors. Visually, these augmentations significantly alter the texture of images but barely change objects' shape. Negative Insertion is designed to suppress singular; it inserts objects with similar textures but different shapes from the targets. In our case, this operation is implemented by cropping the targets into multiple patches and then shuffling and pasting them back onto the original image. The inserted objects are not labeled as targets. Random Resized Crop may have complicated effects, as it suppresses both shape and texture: the cropping changes the targets' shapes while the resizing changes texture density. Random Crop Reflect changes only object shape while preserving the texture of each object by padding the cropped patch reflectively. If target objects are mirrored, their masks are also mirrored. Following our conclusions in Section 3.1 and Section 3.2, we expect all augmentations to produce a higher model SBI than the baseline (no augmentation), except for Random Resized Crop and Random Crop Reflect, as these two augmentations alter the shape of target objects.

#### 3.5.1 Validation with BFGT

The complete BFGT dataset contains 922 volumes of breast MRI images produced by GE and SIEMENS machines. We obtain 219 slices (up to 3 slices per volume) produced by the GE machine as our training and validation set. We then select 81 slices (up to 3 slices per volume) produced by SIEMENS as our out-of-domain dataset. Each image is resized to the same dimensions of \(256\times 256\). We report below the mean IOUs and SBIs of models trained on data with different augmentations. In the experiments with BFGT, augmentations that do not modify the targets' shapes achieved higher SBIs than the baseline model, while those that altered the targets' shapes resulted in models with smaller SBIs. This matches our prediction. Color Jitter (brightness shifts for grayscale images), Sep. Color Jitter, and NST all lead to models with higher performance on OOD and Noisy data. In contrast, Negative Insertion showed no improvement in performance on OOD or Noisy data compared to the baseline, despite its higher SBI. This is because the SBI also takes an SN's performance on objects with dissimilar shapes into consideration. Random Resized Crop and Random Crop Reflect alter shape information in the training set. Both models have smaller SBIs than the baseline, plus similar or lower performance on OOD and Noisy data. The results confirm that augmentations that do not alter shape improve models' SBIs; Color Jitter and NST appear to be the most practical augmentation methods to apply to encourage shape learning and improve models' generalizability.
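Of the augmentations above, Separate Color Jitter is the least standard; a minimal sketch, in which the brightness/contrast ranges and the exact normalization are illustrative assumptions:

```python
import numpy as np

def separate_color_jitter(image, mask, brightness=0.3, contrast=0.3, seed=None):
    """Draw independent brightness/contrast factors for the target region
    and for the rest of the image; image is a float array in [0, 1]."""
    rng = np.random.default_rng(seed)
    out = image.astype(np.float32).copy()
    for region in (mask, ~mask):
        b = rng.uniform(1.0 - brightness, 1.0 + brightness)  # brightness factor
        c = rng.uniform(1.0 - contrast, 1.0 + contrast)      # contrast factor
        vals = out[region]
        out[region] = (vals - vals.mean()) * c + vals.mean() * b
    return np.clip(out, 0.0, 1.0)
```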
#### 3.5.2 Validation with FISH

The fish dataset contains fish of various species photographed from similar perspectives. We extract fish of 4 species in two families, where species in the same family tend to have similar shapes. Two of the species are gilt-head bream and red sea bream from the _Sparidae_ family, while the other two are striped-red mullet and red mullet from the _Mullidae_ family. Images of gilt-head bream and striped-red mullet on two types of backgrounds are used as the training and validation set. Afterward, we pick images of red sea bream and red mullet as OOD data. The Noisy FISH dataset is obtained by adding Gaussian noise to the validation set. The results for each augmentation on this dataset are listed in Table 4. In the experiments with FISH, Random Crop Reflect and Random Resized Crop result in a further decrease of the already low SBI. The rest of the augmentations unanimously caused a large increase in their respective models' SBIs, as predicted. Color Jitter, NST, and Sep. Color Jitter continue to demonstrate improved performance on Noisy and OOD data compared with the baseline. Unlike the other augmentations which increase models' SBIs, Negative Insertion shows inferior prediction accuracy even on the validation set. Though different augmentation methods may theoretically encourage shape learning, not all of them will encourage the training of a segmentation network with higher performance and robustness.

\begin{table} \begin{tabular}{|l|l|l|l|l|} \hline Training set & IOU\({}_{val}\) & IOU\({}_{noisy}\) & IOU\({}_{OOD}\) & SBI \\ \hline No Augmentation & 0.853 & 0.718 & 0.761 & 3.396 \\ \hline Color Jitter & 0.830 & 0.744 & 0.820 & 3.726 \\ \hline NST & 0.850 & 0.802 & 0.823 & 4.102 \\ \hline Sep. Color Jitter & 0.872 & 0.730 & 0.780 & 3.696 \\ \hline Negative Insertion & 0.867 & 0.717 & 0.748 & 3.996 \\ \hline Random Resized Crop & 0.858 & 0.623 & 0.725 & 2.538 \\ \hline Random Crop Reflect & 0.863 & 0.619 & 0.774 & 2.518 \\ \hline \end{tabular} \end{table} Table 3: Performance of models trained on datasets with different augmentations when applied to the validation set, the dataset with noise injected, and the dataset with naturally occurring domain shifts for BFGT

Figure 12: Samples from the three partitions of the data domain in BFGT

The effects of the different augmentations on the training of segmentation networks on BFGT and FISH confirm the statement that when non-shape features are suppressed, shape learning is encouraged. They also reflect the nature of the SBI as a metric measuring how much a network weighs shape in its decision process. Models with a higher SBI, or tendency to learn shape, do not necessarily have higher performance on the Noisy and OOD datasets. However, models with higher performance on the Noisy and OOD datasets tend to have high SBIs.

#### 3.5.3 Validation with LUNG

The LUNG dataset is a collection of axial-view lung CTs from three sources. The training and validation set consists of approximately 4000 slices extracted from 111 patient CT volumes from Pulmonary Fibrosis Progression (PFP), with their ground-truth lung masks available in "CT Lung & Heart & Trachea segmentation" on Kaggle. We adopt another 90 axial-view slices of lung CT volumes from "COVID-19 CT scans" showing lungs with pneumonia as naturally occurring OOD data (access details in Section 2). The difference in the appearance of the COVID slices from the training set results not only from different imaging parameters but also from different infections.
Each image is processed to have dimensions \(256\times 256\). We report below the mean IOUs and SBIs of models trained with different augmentations. Unlike in BFGT and FISH, as shown in Table 5, augmentations have much smaller effects on the performance of the LUNG models. All augmentations that preserve the targets' shape yield models with very close performance on the Validation, Noisy, and OOD data. Despite this, the effects of the augmentations on the SBI still align with our assumptions. Again, Random Resized Crop and Random Crop Reflect yield models with smaller SBIs than the baseline, while the remaining augmentations increase each model's SBI.

\begin{table} \begin{tabular}{|l|l|l|l|l|} \hline Training set & IOU\({}_{val}\) & IOU\({}_{noisy}\) & IOU\({}_{OOD}\) & SBI \\ \hline No Augmentation & 0.998 & 0.847 & 0.978 & 2.294 \\ \hline Color Jitter & 0.996 & 0.884 & 0.990 & 4.014 \\ \hline NST & 0.988 & 0.932 & 0.986 & 4.378 \\ \hline Sep. Color Jitter & 0.997 & 0.889 & 0.994 & 3.882 \\ \hline Negative Insertion & 0.865 & 0.644 & 0.895 & 3.924 \\ \hline Random Resized Crop & 0.991 & 0.768 & 0.710 & 2.188 \\ \hline Random Crop Reflect & 0.949 & 0.677 & 0.667 & 2.068 \\ \hline \end{tabular} \end{table} Table 4: Performance of models trained on datasets with different augmentations when applied to the validation set, the dataset with noise injected, and the dataset with naturally occurring domain shifts for FISH

Figure 13: Samples from the three partitions of the data domain in FISH

Augmentations, however, have negligible effects on this dataset. We explain this phenomenon by arguing that non-shape features may persist across multiple domains for LUNG. In LUNG, pixels associated with the lung have a texture style easily distinguishable from those associated with the rest of the image. The co-occurrence of high-intensity contours formed by cartilage also helps locate the lung. These features persist even across different image domains. In this case, shape may not be the best discriminative feature in terms of prediction capability. Non-shape features may be the more desirable feature set to learn when optimizing the cross-entropy objective function. To avoid over-interpreting the results, we stop here with the conclusion that though Color Jitter and NST may suppress non-shape features, they barely hurt a model's generalizability on Noisy and OOD data. On all three datasets, we show that suppressing non-shape features encourages shape learning. Our explorations made on synthetic data are hence generalizable to many real datasets. Color Jitter and NST provide superior augmentation performance on both noisy and OOD data but are computationally intensive. Compared to NST, Color Jitter is a faster option but yields a smaller performance boost.

## 4 Conclusion & Discussion

In this paper, we defined shape and designed a behavioral metric that measures a segmentation network's tendency to consider shape by measuring its response to variations in input features. With this metric, we explored whether, and under what circumstances, a network can utilize shape as a discriminative feature.

Figure 14: Samples from the three partitions of the data domain in LUNG

\begin{table} \begin{tabular}{|l|l|l|l|l|} \hline Training set & IOU\({}_{val}\) & IOU\({}_{noisy}\) & IOU\({}_{OOD}\) & SBI \\ \hline No Augmentation & 0.982 & 0.938 & 0.962 & 2.650 \\ \hline Color Jitter & 0.978 & 0.935 & 0.962 & 2.954 \\ \hline NST & 0.978 & 0.941 & 0.962 & 3.014 \\ \hline
Sep. Color Jitter & 0.977 & 0.917 & 0.956 & 2.650 \\ \hline Negative Insertion & 0.981 & 0.931 & 0.953 & 3.022 \\ \hline Random Resized Crop & 0.942 & 0.793 & 0.853 & 2.534 \\ \hline Random Crop Reflect & 0.979 & 0.672 & 0.943 & 2.128 \\ \hline \end{tabular} \end{table} Table 5: Performance of models trained on datasets with different augmentations when applied to the validation set, the dataset with noise injected, and the dataset with naturally occurring domain shifts for LUNG

Our experiments demonstrate that shape is the least prioritized feature to be learned. Learning shape information requires the absence of alternative correlated features. Furthermore, a sufficiently large receptive field relative to the object size is vital for a segmentation network to learn shape. We also concluded that shape learning is indeed a useful property of a network, as it helps with out-of-domain generalization. We confirmed our results from synthetic data in reproducible real-life scenarios. We showed that Color Jitter and NST are augmentations that encourage shape learning. In addition, our experiments on real data also show that one needs to be careful about when such augmentations should be applied, since shape learning does not guarantee better performance on OOD data. We believe that experimental evaluation of neural networks using clear behavioral metrics, which measure the response of the network to systematically designed stimuli, can provide useful practical information on what should be expected from them in real-life scenarios, including their notable drawbacks. This said, the specific metric devised in this paper has its limitations. The functionality of this metric relies on the perturbations of chosen discriminative features. We admit that it may potentially ignore discriminative features uninterpretable to humans. However, to the best of our ability, we include multiple representative discriminative features and systematically show their effects on an SN's shape-learning tendency. One should remember that encouraging shape learning is not always beneficial. In some settings, it is the color or the texture that is the most reliable feature. An example of such a scenario is the segmentation of lakes in satellite images. While lakes can have dramatically different shapes, there are similarities in the color and texture of their surfaces. In such situations, encouraging shape learning through data augmentation may lower a model's performance on OOD data.

## Acknowledgments

We would like to express our gratitude to Gregory Szumel, Nicholas Konz, and Hanxue Gu from Mazurowski Lab for their suggestions on the readability of this paper.
2304.09214
SO(2) and O(2) Equivariance in Image Recognition with Bessel-Convolutional Neural Networks
For many years, it has been shown how much exploiting equivariances can be beneficial when solving image analysis tasks. For example, the superiority of convolutional neural networks (CNNs) compared to dense networks mainly comes from an elegant exploitation of the translation equivariance. Patterns can appear at arbitrary positions and convolutions take this into account to achieve translation invariant operations through weight sharing. Nevertheless, images often involve other symmetries that can also be exploited. It is the case of rotations and reflections that have drawn particular attention and led to the development of multiple equivariant CNN architectures. Among all these methods, Bessel-convolutional neural networks (B-CNNs) exploit a particular decomposition based on Bessel functions to modify the key operation between images and filters and make it by design equivariant to all the continuous set of planar rotations. In this work, the mathematical developments of B-CNNs are presented along with several improvements, including the incorporation of reflection and multi-scale equivariances. Extensive study is carried out to assess the performances of B-CNNs compared to other methods. Finally, we emphasize the theoretical advantages of B-CNNs by giving more insights and in-depth mathematical details.
Valentin Delchevalerie, Alexandre Mayer, Adrien Bibal, Benoît Frénay
2023-04-18T18:06:35Z
http://arxiv.org/abs/2304.09214v1
# SO(2) and O(2) Equivariance in Image Recognition with Bessel-Convolutional Neural Networks

###### Abstract

For many years, it has been shown how much exploiting equivariances can be beneficial when solving image analysis tasks. For example, the superiority of convolutional neural networks (CNNs) compared to dense networks mainly comes from an elegant exploitation of the translation equivariance. Patterns can appear at arbitrary positions and convolutions take this into account to achieve translation invariant operations through weight sharing. Nevertheless, images often involve other symmetries that can also be exploited. It is the case of rotations and reflections that have drawn particular attention and led to the development of multiple equivariant CNN architectures. Among all these methods, Bessel-convolutional neural networks (B-CNNs) exploit a particular decomposition based on Bessel functions to modify the key operation between images and filters and make it by design equivariant to all the continuous set of planar rotations. In this work, the mathematical developments of B-CNNs are presented along with several improvements, including the incorporation of reflection and multi-scale equivariances. Extensive study is carried out to assess the performances of B-CNNs compared to other methods. Finally, we emphasize the theoretical advantages of B-CNNs by giving more insights and in-depth mathematical details.

Keywords: Convolutional neural networks; steerable filters; Bessel functions; SO(2) invariance; O(2) invariance

## 1 Introduction

For years now, convolutional neural networks (CNNs) have been known to be the most powerful tool that we have for image analysis. Their efficiency compared to classic multi-layer perceptrons (MLPs) mainly comes from an elegant exploitation of the translation equivariance involved in image analysis tasks. Indeed, CNNs exploit the fact that patterns can arise at different positions in images by sharing the weights over translations thanks to convolutions. The translation equivariance can be seen as a particular form of prior knowledge, and weights can be saved compared to an MLP architecture with similar performance. By building on the success of exploiting translation equivariance in image analysis, we advocate here that generalizing this to other types of appropriate symmetries can also be useful. For example, in biomedical or satellite imaging, objects of interest can appear at arbitrary positions with arbitrary orientations. To illustrate this, Figure 1 shows four versions of the exact same galaxy that are equally plausible images that could occur in the data set. If the task is to determine the morphology of the galaxy, it is relevant to want these images to be processed in the exact same way. Therefore, introducing rotation equivariance will lead to a more efficient use of the weights and to a better overall efficiency of the models. Being able to guarantee rotation equivariance is also useful to put more trust into models. For instance, experts would be more confident in models that extract the exact same latent features for an object, no matter its particular orientation (of course, depending on the application). Still, introducing other types of equivariance in an efficient way in CNNs is not straightforward.
On the one hand, many works propose brute-force solutions like (i) considerably increasing the training set (data augmentation), or (ii) artificially multiplying the number of filters by directly applying the desired symmetries onto them. In practice, these solutions lead both to an increase of the training time and of the size of the models. Furthermore, most of these methods do not provide any mathematical guarantee regarding the equivariance. On the other hand, a few works propose solutions to efficiently bring more general equivariances into CNNs while providing mathematical guarantees. Bessel-convolutional neural networks (B-CNNs) are one of those, and rely on a particular representation of images that is more convenient for dealing with rotations and reflections.

Figure 1: Four rotated versions of the same galaxy image, retrieved from the Galaxy Zoo data set by Willett et al. (2013). This data set contains images of galaxies as well as their morphologies, according to experts. For this application, the orientation is arbitrary and does not contain any information.

In this work, improvements of B-CNNs compared to the initial work of Delchevalerie et al. (2021) are presented, which are mainly an extension to \(O(2)\)-equivariance and an optimal choice for the initial \(\nu_{\max}\) and \(j_{\max}\) meta-parameters based on the Nyquist sampling theorem. Also, we present how multi-scale equivariance can easily be achieved in B-CNNs. Finally, a more extensive study is performed to assess the performance of B-CNNs compared to other state-of-the-art methods on different data sets. To do so, we present the full mathematical developments of B-CNNs and give more detailed explanations. The advantage of using B-CNNs regarding both the size of the model and the training set is highlighted, along with theoretical and experimental evidence of the equivariance. Our implementation is available online at [https://github.com/ValDelch/B_CNNs](https://github.com/ValDelch/B_CNNs).

## 2 Background and Definition of Invariance and Equivariance

Invariance and equivariance are two different notions that need to be clearly defined for the next sections. Let \(\Psi\left(x,y\right)\) be an image where \(x\) and \(y\) represent the pixel coordinates, \(K\left(x,y\right)\) be an arbitrary filter and \(\mathcal{G}\) be a set of transformations that can be applied on the image. The operation defined by \(\ast\) is \(\mathcal{G}\)-invariant if \(\left(g\Psi\right)\left(x,y\right)\ast K\left(x,y\right)=\Psi\left(x,y\right) \ast K\left(x,y\right),\forall g\in\mathcal{G}\), and \(\mathcal{G}\)-equivariant if \(\left(g\Psi\right)\left(x,y\right)\ast K\left(x,y\right)=g\left(\Psi\ast K \right)\left(x,y\right),\forall g\in\mathcal{G}\). In other words, invariance means that the results will be exactly the same for all transformations \(g\) of the input image, while equivariance means that the results will also be transformed by the action of \(g\). Convolutional Neural Networks (CNNs) work by applying a succession of convolutions between an input image \(\Psi\left(x,y\right)\) and some filters. If \(K\left(x,y\right)\) is one of those particular filters, convolutions are expressed by \[\Psi\left(x,y\right)\ast K\left(x,y\right)=\int_{-R}^{R}\int_{-R}^{R}\Psi \left(x-x^{\prime},y-y^{\prime}\right)K\left(x^{\prime},y^{\prime}\right)dx^ {\prime}dy^{\prime},\] where \(R\) defines the size of the filter.
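These two behaviors are easy to probe numerically before the formal derivation that follows; a small sketch using discrete convolution with periodic boundaries (our choice, so that translations are exact on the grid):

```python
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(1)
img, ker = rng.random((32, 32)), rng.random((5, 5))
conv = lambda x: convolve(x, ker, mode="wrap")   # periodic boundaries

# Translation equivariance: convolving a shifted image equals shifting
# the convolved image (exactly, thanks to the periodic boundaries).
t = lambda x: np.roll(x, (3, 2), axis=(0, 1))
print(np.allclose(conv(t(img)), t(conv(img))))   # True

# No rotation equivariance for a generic kernel, even for exact
# 90-degree rotations on the grid.
r = np.rot90
print(np.allclose(conv(r(img)), r(conv(img))))   # False
```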
One can now show that CNNs exhibit a translation equivariance (that is, patterns are detected the same way regardless of their particular positions). Indeed, if \(\mathcal{T}_{u,v}\) is a translation operator such that it translates the image by an amount of pixels \(\left(u,v\right)\) \[\mathcal{T}_{u,v}\Psi\left(x,y\right)=\Psi\left(x+u,y+v\right),\] one can show that \[\mathcal{T}_{u,v}\left(\Psi\ast K\right)\left(x,y\right) =\int_{-R}^{R}\int_{-R}^{R}\Psi\left(\left(x-x^{\prime}\right)+u,\left(y-y^{\prime}\right)+v\right)K\left(x^{\prime},y^{\prime}\right)dx^{ \prime}dy^{\prime}\] \[=\int_{-R}^{R}\int_{-R}^{R}\Psi\left(\left(x+u\right)-x^{\prime},\left(y+v\right)-y^{\prime}\right)K\left(x^{\prime},y^{\prime}\right)dx^{ \prime}dy^{\prime}\] \[=\int_{-R}^{R}\int_{-R}^{R}\mathcal{T}_{u,v}\Psi\left(x-x^{ \prime},y-y^{\prime}\right)K\left(x^{\prime},y^{\prime}\right)dx^{\prime}dy^{\prime}\] \[=\left(\mathcal{T}_{u,v}\Psi\right)\left(x,y\right)\ast K\left( x,y\right),\] which matches the definition of \(\mathcal{G}\)-equivariance given earlier, and therefore proves the translation equivariance of CNNs. Figure 2 illustrates this equivariance. However, for other types of transformations, equivariance is generally not achieved in CNNs. One can for example consider rotations, by defining \(R_{\alpha}\) as an operator that applies a rotation of an angle \(\alpha\), such that \[R_{\alpha}\Psi\left(x,y\right)=\Psi\left(x\cos\alpha-y\sin\alpha,x\sin\alpha+y \cos\alpha\right).\] By applying a similar development, it clearly appears that \[R_{\alpha}\left(\Psi*K\right)\left(x,y\right)\neq\left(R_{\alpha}\Psi\right) \left(x,y\right)*K\left(x,y\right).\] This is expected, as convolution can be seen as element-wise multiplications with a sliding window, and the result of element-wise multiplications depends on the particular orientation of the matrices. This lack of rotation equivariance will be illustrated in Figure 6(a).

## 3 Related Works

Many techniques propose to bring more general equivariance in convolutional neural networks (CNNs). A particular interest was taken in satisfying \(SO(2)\) and \(O(2)\)-equivariance, as it is an interesting prior for many applications in image recognition; see for example Chidester et al. (2019) for medical imaging, Dieleman et al. (2015) for astronomical imaging, Li et al. (2020) for satellite imaging and Marcos et al. (2016) for texture recognition. \(SO(2)\) is called the special orthogonal group and contains the continuous set of planar rotations, while \(O(2)\) is called the orthogonal group and also adds all the planar reflections. The different proposed methods can be categorized into different groups: (i) methods that only increase robustness to planar transformations without mathematical guarantees of equivariance, (ii) methods that bring some mathematical guarantees but only for a discrete set of planar transformations (as for example, cyclic \(C_{n}\) and dihedral \(D_{n}\) groups), and (iii) methods that bring mathematical guarantees for the continuous set of transformations. The most famous technique from the first category is data augmentation (Quiroga et al., 2018). While robustness can be considerably increased with data augmentation, it still requires the model to learn the equivariance, as it is not used as an explicit constraint. No theoretical guarantees can then be provided, and extracted features will generally not be the same for rotated versions of a particular object.
Next to data augmentation, one can also cite spatial transformer networks by Jaderberg et al. (2015), rotation invariant and Fisher discriminative CNNs by Cheng et al. (2016), deformable CNNs by Dai et al. (2017), and SIFT-CNNs by Kumar et al. (2018).

Figure 2: Illustration of the translation equivariance in CNNs. Both (a) and (b) are made of the same objects, but at different positions. Nevertheless, CNNs process objects the same way independently of their particular absolute positions.

The main drawback of such methods lies in the fact that, as models still learn the equivariance by themselves, many parameters are used to encode redundant information. This leads to methods of category (ii), which aim to make models equivariant to discrete groups like \(C_{n}\) or \(D_{n}\). One can for example cite Group-CNNs by Cohen and Welling (2016), deep symmetry networks by Gens and Domingos (2014), steerable CNNs by Cohen and Welling (2017), steerable filter CNNs by Weiler et al. (2018), dense steerable filter CNNs by Graham et al. (2020), spherical CNNs by Cohen et al. (2018) and Deformation Robust Roto-Scale-Translation Equivariant CNNs by Gao et al. (2021). Compared to category (i), equivariance to a finite number of planar transformations is generally obtained by tying the weights for several transformed versions of the filter. Nevertheless, even if guarantees are now obtained, it is only for a finite set of transformations, and it still involves computations with many parameters to encode the equivariance (for example, \(5\times 5\) filters in a \(D_{8}\)-invariant convolutional layer will be made of \(5\times 5\times 8\times 2=400\) parameters1). Finally, for the third category (iii), one can cite general \(E(2)\)-equivariant steerable CNNs (\(E(2)\)-CNNs) by Weiler and Cesa (2019), where equivariance to continuous groups can be obtained by using a finite number of irreducible representations, harmonic networks (HNets) by Worrall et al. (2017) that use spherical harmonics to achieve a rotational equivariance by maintaining a disentanglement of rotation orders in the network, and Finzi et al. (2020) who generalize equivariance to arbitrary transformations from Lie groups. However, the authors of \(E(2)\)-CNNs highlight that approximating \(SO(2)\) (resp. \(O(2)\)) by using \(C_{n}\) (resp. \(D_{n}\)) groups instead of using a finite number of irreducible representations leads to better results. It follows that \(E(2)\)-equivariant CNNs are most of the time equivalent to methods of category (ii). Regarding HNets, they are only \(SO(2)\)-equivariant and involve complex values in the network that are poorly compatible with many already existing tools (for example, activation functions and batch normalization layers should be adapted, saliency maps cannot be easily computed, etc.).

Footnote 1: However, note that only \(5\times 5=25\) parameters are learnable as the other ones are just transformed versions of the initial filter.

Recently, another type of equivariant CNNs has also emerged. While symmetries can be seen as a user constraint for all the previously mentioned techniques, these new equivariant CNN architectures discover by themselves, during the training phase, the symmetries that should be considered. One can for example cite the work of Dehmamy et al. (2021) in this direction. This is particularly useful when users do not know and have no insight about the symmetries that can be involved in data, or when symmetries are unexpected.
However, the aim of such methods differs from the previous ones because symmetries are no longer applied as constraints. Therefore, those methods rely more on the training data, and are useful in a different context of applications. A discussion about the strengths and weaknesses of these methods compared to others is provided at the end of the paper, in Section 8. Our work is a direct follow-up of the previous work of Delchevalerie et al. (2021), which built on the use of Bessel functions in order to propose a new method that belongs to the third category. Compared to the state of the art, Bessel-convolutional neural networks (B-CNNs) initially proposed a novel technique to bring \(SO(2)\) equivariance, while being easy to use with already existing frameworks. In this work, we emphasize the theoretical advantages of B-CNNs by giving more mathematical details. Also, further improvements compared to the prior work on B-CNNs are presented, for example by making them \(O(2)\)- and multi-scale-equivariant, and by automatically inferring optimal choices for some meta-parameters. Finally, a more extensive comparative study is also carried out to highlight the strengths and weaknesses of the different methods.

## 4 Using Bessel Functions in Image Analysis

In Bessel-convolutional neural networks (B-CNNs), Bessel coefficients are used instead of the raw pixel values conventionally used in vanilla convolutional neural networks (CNNs). This section describes the Bessel functions and how they can be used to compute these Bessel coefficients. Also, some particular properties of Bessel functions and Bessel coefficients are presented. The aim of this section is to give more insight into the reasons that motivate the use of Bessel functions to achieve different kinds of equivariance in CNNs. Compared to the initial work of Delchevalerie et al. (2021), additional mathematical details are provided, as well as a discussion on how to make an optimal choice for the initial meta-parameters \(\nu_{\max}\) and \(j_{\max}\), and on how Bessel coefficients can also be used to express reflections.

### Bessel Functions and Bessel Coefficients

Bessel functions are particular solutions of the differential equation \[x^{2}\frac{d^{2}y}{dx^{2}}+x\frac{dy}{dx}+\left(x^{2}-\nu^{2}\right)y=0,\] which is known as Bessel's equation. The solution of this equation can be written as \[y\left(x\right)=AJ_{\nu}\left(x\right)+BY_{\nu}\left(x\right),\] where \(A\) and \(B\) are two constants, and \(J_{\nu}\left(x\right)\) and \(Y_{\nu}\left(x\right)\) are called the Bessel functions of the first and second kind, respectively. It has to be noted that these functions are well-defined for orders \(\nu\in\mathbb{R}\) in general. In B-CNNs, only the Bessel functions of the first kind are used, since \(Y_{\nu}\left(x\right)\) diverges at \(x=0\). Indeed, Bessel functions will be used to express images that can take arbitrary values, including at the origin. Examples of Bessel functions of the first kind for different integer orders \(\nu\) can be seen in Figure 3a. From a mathematical point of view, Bessel's equation arises when solving Laplace's or Helmholtz's equation in cylindrical or spherical coordinates. Bessel functions are thus particularly well-known in physics as they appear naturally when solving many important problems, mainly when dealing with wave propagation in cylindrical or spherical coordinates (Riley et al., 2006).
Since Bessel functions naturally arise when modeling problems with circular symmetries in physics, these functions are also particularly useful to conveniently express problems with circular symmetries in other domains. This observation motivated the prior work of Delchevalerie et al. (2021) to express images in a particular basis made of Bessel functions of the first kind. Bessel functions of the first kind can be used to build a particular basis \[\left\{N_{\nu,j}J_{\nu}\left(k_{\nu,j}\rho\right)e^{i\nu\theta},\forall\nu,j \in\mathbb{N}\right\}\text{, where }N_{\nu,j}=1/\sqrt{2\pi\int_{0}^{R}\rho J_{\nu}^{2} \left(k_{\nu,j}\rho\right)d\rho}, \tag{4.1}\] for the representation of images defined in a circular domain of radius \(R\), where \(\rho\) and \(\theta\) are the polar coordinates (the Euclidean distance from the origin and the angle with the horizontal axis, respectively). By carefully choosing \(k_{\nu,j}\), this basis can be made orthonormal for all square-integrable functions \(f\) such that \(f:D^{2}\subset\mathbb{R}^{2}\longrightarrow\mathbb{R}\) (where the domain \(D^{2}\) is a disk in \(\mathbb{R}^{2}\)). To do so, one can choose \(k_{\nu,j}\) such that \(J_{\nu}^{\prime}\left(k_{\nu,j}R\right)=0\). The proof for the orthonormality of the basis in this case is presented in Appendix A. Another common choice that also leads to orthonormality is to use \(J_{\nu}\left(k_{\nu,j}R\right)=0\). Indeed, these two constraints are suitable since a property of the Bessel functions is that \(J_{\nu}^{\prime}\left(x\right)=\frac{1}{2}\left(J_{\nu-1}\left(x\right)-J_{ \nu+1}\left(x\right)\right)\). Therefore, applying the constraint on \(J_{\nu}\left(x\right)\) or on \(J_{\nu}^{\prime}\left(x\right)\) are both valid solutions that bring orthonormality. However, in our particular case, we choose to apply the constraint on \(J_{\nu}^{\prime}\left(x\right)\) because it makes it more convenient to represent arbitrary functions, as shown by Mayer and Vigneron (1999). The reason is that there exists a solution \(k_{\nu,j}=0\) for \(\nu=0\) such that \(J_{\nu}^{\prime}\left(k_{\nu,j}R\right)=0\), which would not be the case with the constraint based on \(J_{\nu}\left(x\right)\) (see Figure 3). Therefore, the first element in the basis, \(N_{0,0}J_{0}\left(k_{0,0}\rho\right)e^{i0\theta}\), will be equal to \(N_{0,0}\). As the result is constant and does not depend on \(\rho\) and \(\theta\), this element can be used to describe an arbitrary constant intensity in \(f\). Figure 4 presents some elements of the basis, including the first one. Also, one can point out that when the order \(\nu\) increases, the angular frequency (the number of zeros along the \(\theta\) polar coordinate) of the basis element increases. On the other hand, when the order \(j\) increases, the radial frequency (the number of zeros along the \(\rho\) polar coordinate) increases. An arbitrary function in polar coordinates \(\Psi\left(\rho,\theta\right):D^{2}\subset\mathbb{R}^{2}\longrightarrow\mathbb{R}\) can be represented in the basis presented in Equation (4.1) as \[\Psi\left(\rho,\theta\right)=\sum_{\nu=-\infty}^{\infty}\sum_{j=0}^{\infty} \varphi_{\nu,j}\ N_{\nu,j}J_{\nu}\left(k_{\nu,j}\rho\right)e^{i\nu\theta}, \tag{4.2}\]

Figure 3: Bessel functions of the first kind are presented along with their derivatives for several integer orders \(\nu\in\left\{0,1,2,3\right\}\).

where \(\varphi_{\nu,j}\in\mathbb{C}\) are the Bessel coefficients of \(\Psi\left(\rho,\theta\right)\).
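Before turning to the coefficients themselves, the ingredients of Equation (4.1) can be computed with standard special-function routines; a minimal sketch, in which the special-casing of the \(k_{0,0}=0\) root follows the convention discussed above (scipy's jnp_zeros is assumed to return only the nonzero roots):

```python
import numpy as np
from scipy.special import jnp_zeros, jv
from scipy.integrate import quad

def bessel_ks(nu, n_roots, R=1.0):
    """k_{nu,j} such that J'_nu(k R) = 0. For nu = 0, k = 0 is itself a
    valid root of J'_0 and is prepended, since jnp_zeros omits it."""
    ks = jnp_zeros(nu, n_roots) / R
    if nu == 0:
        ks = np.concatenate([[0.0], ks[:-1]])
    return ks

def bessel_norm(nu, k, R=1.0):
    """N_{nu,j} = 1 / sqrt(2*pi * integral_0^R rho * J_nu(k*rho)^2 drho)."""
    integral, _ = quad(lambda rho: rho * jv(nu, k * rho) ** 2, 0.0, R)
    return 1.0 / np.sqrt(2.0 * np.pi * integral)
```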
These \(\varphi_{\nu,j}\) are the mathematical projection of \(\Psi\left(\rho,\theta\right)\) on the Bessel basis. Therefore, they are obtained by \[\varphi_{\nu,j}=\int_{0}^{2\pi}\int_{0}^{R}\rho\left[N_{\nu,j}J_{\nu}\left(k_{ \nu,j}\rho\right)e^{i\nu\theta}\right]^{*}\Psi\left(\rho,\theta\right)d\rho d\theta, \tag{4.3}\] where the element inside the brackets corresponds to the element \(\left(\nu,j\right)\) in the Bessel basis. By integrating on \(D^{2}\), it computes the representation of \(\Psi\left(\rho,\theta\right)\) in this basis. In B-CNNs, images are represented by a set of those Bessel coefficients instead of directly using the raw pixel values. Further motivation for this will be given later. However, one can already point out that Equation (4.2) needs in principle an infinite number of Bessel coefficients in order to faithfully represent the initial function \(\Psi\left(\rho,\theta\right)\). From a numerical point of view, these two infinite summations need to be truncated. First of all, one can show that it is not necessary to compute \(\varphi_{\nu,j}\) when \(\nu\) is a negative integer, since \(\varphi_{\nu,j}\) and \(\varphi_{-\nu,j}\) are not independent. Indeed, if \(\nu\in\mathbb{N}\), \(J_{\nu}\left(x\right)\) and \(J_{-\nu}\left(x\right)\) are linked by the relation \[J_{-\nu}\left(x\right)=\left(-1\right)^{\nu}J_{\nu}\left(x\right). \tag{4.4}\] Furthermore, Bessel functions also satisfy \[J_{\nu}\left(-x\right)=\left(-1\right)^{\nu}J_{\nu}\left(x\right), \tag{4.5}\] which means that \(J_{\nu}\) is an even function if \(\nu\) is even, and an odd function otherwise. By injecting Equations (4.4) and (4.5) in Equation (4.3), one can show that (the proof can be found in Appendix B) \[\begin{cases}\Re\left(\varphi_{-\nu,j}\right)=\left(-1\right)^{\nu}\Re\left( \varphi_{\nu,j}\right)\\ \Im\left(\varphi_{-\nu,j}\right)=\left(-1\right)^{\nu+1}\Im\left(\varphi_{\nu, j}\right).\end{cases} \tag{4.6}\]

Figure 4: Real (a) and imaginary (b) parts of the basis described by Equation (4.1) for different \(\nu\) and \(j\). Red and blue correspond to positive and negative values, respectively. One can see that \(\nu\) is linked to an angular frequency, and \(j\) to a radial frequency. Note that for \(\nu=0\), there is no imaginary part.

The infinite summation for \(\nu\) in Equation (4.2) can be decomposed into two summations, one for \(\nu\in\{-\infty,\ldots,-1\}\) and another one for \(\nu\in\{0,\ldots,\infty\}\). By exploiting the link between \(\varphi_{\nu,j}\) and \(\varphi_{-\nu,j}\), the infinite summation for \(\nu\in\{-\infty,\ldots,\infty\}\) can then be reduced to a summation for \(\nu\in\{0,\ldots,\infty\}\), and it is not necessary to compute Bessel coefficients for negative \(\nu\) orders. Finally, in order to truncate the infinite summations, two meta-parameters \(\nu_{\max}\) and \(j_{\max}\) are defined, and the Bessel coefficients are only computed for \(\nu\in\{0,\dots,\nu_{\max}\}\) and \(j\in\{0,\dots,j_{\max}\}\). Nonetheless, it is difficult to make a good choice for these meta-parameters, and this choice may rather be automated by constraining \(k_{\nu,j}\) with an upper limit. This is clearly supported by Figure 4, as it shows that high \(\nu\) (resp. \(j\)) orders correspond to basis elements with a high angular (resp. radial) frequency. Therefore, as images are sampled on a discrete Cartesian grid, information about frequencies higher than an upper limit cannot be conserved.
This upper limit can be determined by the Nyquist frequency, as done by Zhao and Singer (2013), in order to both minimize the aliasing effect and maximize the amount of information preserved by the Bessel coefficients. From now on, let us suppose that the radius \(R\) of an image is arbitrarily set to 1. If the image is made up of \(2n\times 2n\) pixels sampled on a Cartesian grid, it leads to a resolution of \(1/n\). Hence, the sampling rate is \(n\) and the associated Nyquist frequency (the band-limit) is \(n/2\). Therefore, it is optimal to use only the \(\varphi_{\nu,j}\) that satisfy the constraint \[\frac{k_{\nu,j}}{2\pi}\leq\frac{n}{2},\] because those are the only ones that carry information really contained on the finite Cartesian grid. We then define2 \[k_{\max}=\max_{\nu,j}k_{\nu,j}\text{ s.t. }\frac{k_{\nu,j}}{2\pi}\leq\frac{n}{2}. \tag{4.7}\]

Footnote 2: It is interesting to mention that this constraint is also a common choice in numerical physics, where \(\frac{k}{2\pi}=\frac{1}{\lambda}\), \(\lambda\) being the wavelength. It is meaningless to use larger values for \(k\), as it corresponds to wavelengths smaller than the resolution of space.

One of the consequences of this constraint is that, for larger \(\nu\) orders, a smaller number of Bessel coefficients will be computed, as \(k_{\nu,j}\) will reach \(k_{\max}\) more rapidly. Indeed, the zeros of \(J_{\nu}^{\prime}\left(x\right)\) (that are the \(k_{\nu,j}\)'s if \(R=1\)) are shifted toward higher \(x\) values (see the shift toward the right for \(J_{\nu}^{\prime}\left(x\right)\) when \(\nu\) increases in Figure 3b). To conclude this section, the function \(\Psi\left(\rho,\theta\right)\) will be represented by a matrix with the general form \[\begin{pmatrix}\varphi_{0,0}&\cdots&\varphi_{0,j}&\cdots&\varphi_{0,j_{\max} }\\ \vdots&\ddots&\vdots&\ddots&\vdots\\ \varphi_{\nu,0}&\cdots&\varphi_{\nu,j}&\cdots&0\\ \vdots&\ddots&\vdots&\ddots&\vdots\\ \varphi_{\nu_{\max},0}&\cdots&0&\cdots&0\end{pmatrix},\] where each non-zero element corresponds to values of \(\nu\) and \(j\) that satisfy \(k_{\nu,j}\leq k_{\max}\). Figure 5 presents an example where \(\Psi\left(\rho,\theta\right)\) is an arbitrary image. Bessel coefficients are computed in this particular case with Equation (4.3), and the inverse transformation described by Equation (4.2) is also performed in the middle part of the figure to check how much information is preserved by the Bessel coefficients. It also shows how easy it is to apply rotations and reflections with Bessel coefficients, as explained below in Section 4.2 and Section 4.3.

### Effect of Rotations

To understand why using Bessel coefficients is more convenient than using raw pixel values, one can determine the consequence of a rotation of \(\Psi\left(\rho,\theta\right)\) on \(\varphi_{\nu,j}\). Let \(\Psi^{\mathrm{rot}}\left(\rho,\theta\right)\) be the rotated version of \(\Psi\left(\rho,\theta\right)\) for an angle \(\alpha\in[0,2\pi[\), that is, \(\Psi^{\mathrm{rot}}\left(\rho,\theta\right)=\Psi\left(\rho,\theta-\alpha\right)\).
Its Bessel coefficients are given by \[\varphi_{\nu,j}^{\mathrm{rot}}=\int_{0}^{2\pi}\int_{0}^{R}\rho\left[N_{\nu,j} J_{\nu}\left(k_{\nu,j}\rho\right)e^{i\nu\theta}\right]^{*}\Psi^{\mathrm{rot}} \left(\rho,\theta\right)d\rho d\theta.\] By defining \(\theta^{\prime}=\theta-\alpha\), it leads to \[\varphi_{\nu,j}^{\mathrm{rot}}=\int_{0}^{2\pi}\int_{0}^{R}\rho\left[N_{\nu,j} J_{\nu}\left(k_{\nu,j}\rho\right)e^{i\nu\theta^{\prime}}\right]^{*}\Psi\left( \rho,\theta^{\prime}\right)e^{-i\nu\alpha}d\rho d\theta^{\prime}=\varphi_{ \nu,j}e^{-i\nu\alpha}. \tag{4.8}\] Therefore, a rotation of an arbitrary function by an angle \(\alpha\) only modifies its Bessel coefficients by a multiplication factor \(e^{-i\nu\alpha}\). This motivated the development of B-CNNs, as it makes rotations conveniently expressed in the Fourier-Bessel transform domain (analogously to how the Fourier transform maps translations to multiplications by complex exponentials). The upper part of Figure 5 illustrates this property (the image is rotated by \(\frac{\pi}{2}\) after multiplying its Bessel coefficients by \(e^{-i\nu\frac{\pi}{2}}\)).

### Effect of Reflections

In addition to rotations, Bessel coefficients are also particularly useful when it comes to expressing reflections. To check this, let \(\Psi^{\mathrm{ref}}\left(\rho,\theta\right)\) be the reflected version of \(\Psi\left(\rho,\theta\right)\) along the vertical axis3.

Footnote 3: The reflection along the horizontal axis is not needed, since it can be decomposed as a vertical reflection and a rotation of \(\pi\) radians. By composition, reflection equivariance along the horizontal axis is automatically achieved if the layer is equivariant to rotations and reflections along the vertical axis.

The Bessel coefficients of \(\Psi^{\mathrm{ref}}\left(\rho,\theta\right)=\Psi\left(\rho,\pi-\theta\right)\) are given by \[\varphi_{\nu,j}^{\mathrm{ref}}=\int_{0}^{2\pi}\int_{0}^{R}\rho\left[N_{\nu,j} J_{\nu}\left(k_{\nu,j}\rho\right)e^{i\nu\theta}\right]^{*}\Psi^{\mathrm{ref}} \left(\rho,\theta\right)d\rho d\theta.\] Similarly to what is done for arbitrary rotations, one can define \(\theta^{\prime}=\pi-\theta\). This leads to \[\varphi_{\nu,j}^{\mathrm{ref}}=-\int_{\pi}^{-\pi}\int_{0}^{R}\rho\left[N_{\nu,j}J_{\nu}\left(k_{\nu,j}\rho\right)e^{i\nu(\pi-\theta^{\prime})}\right]^{*} \Psi\left(\rho,\theta^{\prime}\right)d\rho d\theta^{\prime}.\] It is shown in Appendix B that \(N_{\nu,j}=N_{-\nu,j}\) and that \(k_{-\nu,j}=k_{\nu,j}\). By exploiting this along with Equation (4.4), one can show that \[\varphi_{\nu,j}^{\mathrm{ref}} =\int_{0}^{2\pi}\int_{0}^{R}\rho\left[N_{-\nu,j}\left(-1\right)^ {\nu}J_{-\nu}\left(k_{-\nu,j}\rho\right)e^{-i\nu\theta^{\prime}}\right]^{*}e^ {-i\nu\pi}\Psi\left(\rho,\theta^{\prime}\right)d\rho d\theta^{\prime}\] \[=\int_{0}^{2\pi}\int_{0}^{R}\rho\left[N_{-\nu,j}J_{-\nu}\left(k_{- \nu,j}\rho\right)e^{-i\nu\theta^{\prime}}\right]^{*}\Psi\left(\rho,\theta^{ \prime}\right)d\rho d\theta^{\prime}\] \[=\varphi_{-\nu,j}. \tag{4.9}\]

Figure 5: Decomposition of an arbitrary image in some of its Bessel coefficients. Those Bessel coefficients constitute a particular representation of the image and can be used to recover it thanks to Equation (4.2) (middle part). It also illustrates how Bessel coefficients can be conveniently used to apply rotations (upper part) and reflections (lower part).

Therefore, performing a reflection of the image only switches the Bessel coefficients \(\varphi_{\nu,j}\) to \(\varphi_{-\nu,j}\).
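Numerically, Equations (4.3), (4.8), and (4.9) amount to a grid discretization and elementwise manipulations of the coefficient matrix; a minimal sketch, in which the grid conventions and function names are our own:

```python
import numpy as np
from scipy.special import jv

def bessel_coeff(image, nu, k, N, R=1.0):
    """Discretized Eq. (4.3): project a 2n x 2n image onto basis element
    (nu, j), with k = k_{nu,j} and N = N_{nu,j}. Pixels outside the disk
    of radius R are ignored; the Cartesian pixel area dA replaces the
    polar measure rho*drho*dtheta."""
    n2 = image.shape[0]
    xs = np.linspace(-R, R, n2)
    X, Y = np.meshgrid(xs, xs, indexing="xy")
    rho, theta = np.hypot(X, Y), np.arctan2(Y, X)
    basis = N * jv(nu, k * rho) * np.exp(1j * nu * theta)
    inside = rho <= R
    dA = (2.0 * R / n2) ** 2
    return np.sum(np.conj(basis[inside]) * image[inside]) * dA

def rotate_coeffs(phi, alpha):
    """Eq. (4.8): a rotation by alpha multiplies phi[nu, j] by exp(-1j*nu*alpha)."""
    nus = np.arange(phi.shape[0])[:, None]
    return phi * np.exp(-1j * nus * alpha)

def reflect_coeffs(phi):
    """Eq. (4.9) via Eq. (4.6): reflection maps phi[nu, j] to phi[-nu, j],
    i.e. it flips the sign of the real part for odd nu and of the
    imaginary part for even nu."""
    sign = (-1.0) ** np.arange(phi.shape[0])[:, None]
    return sign * phi.real - 1j * sign * phi.imag
```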
Thanks to Equations (4.6), this switch is equivalent to changing the sign of the real (resp. imaginary) part of the Bessel coefficient if \(\nu\) is odd (resp. even). Therefore, in addition to rotations, Bessel coefficients are also really convenient to express reflections. This is illustrated in the lower part of Figure 5, where the image is reflected vertically after switching each \(\varphi_{\nu,j}\) with \(\varphi_{-\nu,j}\).

## 5 Designing Operations with Bessel Coefficients

In CNNs, the main mathematical operation is a convolutional product between the different filters and the image (or feature maps if deeper in the network). Each filter sweeps the image locally and the weights are multiplied with the raw pixel values. However, in B-CNNs, the aim is to use Bessel coefficients instead of raw pixel values to benefit from the properties described in the previous sections. The key operation between the parameters of the network (filters) and the images therefore needs to be adapted. This section first presents the mathematical operation used to achieve equivariance under rotation. After that, the initial work of Delchevalerie et al. (2021) is extended to also achieve equivariance under reflection.

### A Rotation Equivariant Operation

The convolution performed in CNNs between an arbitrary image \(\Psi\left(x,y\right)\) and a particular kernel \(K\left(x,y\right)\) defined for \(\left(x,y\right)\in\left[-R,R\right]\times\left[-R,R\right]\) can be written \[a\left(x,y\right)=\Psi\left(x,y\right)*K\left(x,y\right)=\int_{-R}^{R}\int_{- R}^{R}\Psi\left(x-x^{\prime},y-y^{\prime}\right)K\left(x^{\prime},y^{ \prime}\right)dx^{\prime}dy^{\prime}.\] By defining \(\Psi^{\left(x,y\right)}\left(x^{\prime},y^{\prime}\right)=\Psi\left(x-x^{ \prime},y-y^{\prime}\right)\) and converting the integration from Cartesian to polar coordinates, it leads to \[a\left(x,y\right)=\int_{0}^{2\pi}\int_{0}^{R}\Psi^{\left(x,y\right)}\left( \rho,\theta\right)K\left(\rho,\theta\right)\rho d\rho d\theta. \tag{5.1}\] Now, in order to obtain a result that is invariant to the particular orientation of \(\Psi^{\left(x,y\right)}\left(\rho,\theta\right)\), one can decompose it into its Bessel coefficients \(\varphi_{\nu,j}^{\left(x,y\right)}\) and use Equation (4.8) to implement arbitrary rotations. Next, the idea is to combine this with an integration over \(\alpha\) in order to equally consider all the possible orientations of the original image while multiplying it with the kernel, resulting in rotation invariance. We thus introduce a new rotation-equivariant convolutional operation described by \[a\left(x,y\right) =\frac{1}{2\pi}\int_{0}^{2\pi}\big{|}\int_{0}^{2\pi}\int_{0}^{R} \sum_{\nu,j}\varphi_{\nu,j}^{\left(x,y\right)}e^{-i\nu\alpha}N_{\nu,j}J_{\nu} \left(k_{\nu,j}\rho\right)e^{i\nu\theta}K\left(\rho,\theta\right)\rho d\rho d \theta\big{|}^{2}d\alpha\] \[=\frac{1}{2\pi}\int_{0}^{2\pi}\big{|}\sum_{\nu,j}\varphi_{\nu,j} ^{\left(x,y\right)}e^{-i\nu\alpha}\int_{0}^{2\pi}\int_{0}^{R}\left[N_{\nu,j}J _{\nu}\left(k_{\nu,j}\rho\right)e^{-i\nu\theta}\right]^{*}K\left(\rho,\theta \right)\rho d\rho d\theta\big{|}^{2}d\alpha\] \[=\frac{1}{2\pi}\int_{0}^{2\pi}\big{|}\sum_{\nu,j}\varphi_{\nu,j} ^{\left(x,y\right)}e^{-i\nu\alpha}\kappa_{\nu,j}^{*}\big{|}^{2}d\alpha, \tag{5.2}\] where \(\kappa_{\nu,j}\) refers to the Bessel coefficients of the kernel \(K\left(\rho,\theta\right)\).
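Before deriving the closed form, Equation (5.2) can be evaluated directly by sampling the orientation integral; a minimal sketch (the phi[nu, j] array layout is our assumption):

```python
import numpy as np

def bessel_op_numeric(kappa, phi, n_alpha=256):
    """Quadrature of Eq. (5.2): average over n_alpha uniformly sampled
    orientations alpha of
    |sum_{nu,j} phi[nu,j] * exp(-1j*nu*alpha) * conj(kappa[nu,j])|^2.
    Exact up to rounding once n_alpha exceeds twice the largest nu,
    since the integrand is a trigonometric polynomial in alpha."""
    nus = np.arange(phi.shape[0])[:, None]          # orders nu, one per row
    alphas = np.linspace(0.0, 2.0 * np.pi, n_alpha, endpoint=False)
    vals = [np.abs(np.sum(phi * np.exp(-1j * nus * a) * np.conj(kappa))) ** 2
            for a in alphas]
    return np.mean(vals)
```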
Thanks to the integration over \(\alpha\) from \(0\) to \(2\pi\) and the multiplication of \(\varphi_{\nu,j}^{\left(x,y\right)}\) by \(e^{-i\nu\alpha}\) to describe the effect of rotations, the operation with the kernel is performed for all continuous rotations \(\Psi^{\left(x,y\right)}\left(\rho,\theta-\alpha\right)\) of the original image, where \(\alpha\in\left[0,2\pi\right[\). Therefore, \(a\left(x,y\right)\) should not depend on the particular initial orientation anymore. A squared modulus \(\left|\cdot\right|^{2}\) is introduced in our operation since, without it, one obtains \[a\left(x,y\right)=\frac{1}{2\pi}\sum_{\nu,j}\varphi_{\nu,j}^{\left(x,y\right)} \kappa_{\nu,j}^{\ast}\int_{0}^{2\pi}e^{-i\nu\alpha}d\alpha=\sum_{j}\varphi_{0, j}^{\left(x,y\right)}\kappa_{0,j}^{\ast},\] and only the subset of coefficients \(\left\{\varphi_{0,j}^{\left(x,y\right)},\forall j\right\}\) will contribute to \(a\left(x,y\right)\). This subset alone, however, does not constitute a faithful representation of the image. The operation without the squared modulus would therefore inevitably lead to an important loss of information. Finally, the factor \(\frac{1}{2\pi}\) was introduced for normalization purposes. Computing Equation (5.2) may seem impractical, as it appears to require a numerical integration. However, one can develop it further in order to obtain an analytical solution, which is much more convenient to implement in practice. To do so, one can first develop the squared modulus given that \[\left|\sum_{i=1}^{k}\alpha_{i}\big{|}z_{i}\big{|}e^{i\theta_{i} }\right|^{2}= \sum_{m,j}\Re\left(\alpha_{m}\right)\Re\left(\alpha_{j}\right) \big{|}z_{m}z_{j}\big{|}\cos\left(\theta_{m}-\theta_{j}\right)\] \[+ \sum_{m,j}\Im\left(\alpha_{m}\right)\Im\left(\alpha_{j}\right) \big{|}z_{m}z_{j}\big{|}\cos\left(\theta_{m}-\theta_{j}\right)\] \[- 2\sum_{m,j}\Im\left(\alpha_{m}\right)\Re\left(\alpha_{j}\right) \big{|}z_{m}z_{j}\big{|}\sin\left(\theta_{m}-\theta_{j}\right), \tag{5.3}\] where \(\alpha_{i}\in\mathbb{C}\) and \(z_{i}=\big{|}z_{i}\big{|}e^{i\theta_{i}}\in\mathbb{C}\).
By re-writing the complex valued Bessel coefficients \(\varphi_{\nu,j}^{\left(x,y\right)}\) as \(\big{|}\varphi_{\nu,j}^{\left(x,y\right)}\big{|}e^{i\theta_{\nu,j}}\), it leads to \[a\left(x,y\right)= \frac{1}{2\pi}\int_{0}^{2\pi}\sum_{\begin{subarray}{c}\nu,j\\ \nu^{\prime},j^{\prime}\end{subarray}}\Re\left(\kappa_{\nu,j}^{\ast}\right) \Re\left(\kappa_{\nu^{\prime},j^{\prime}}^{\ast}\right)\big{|}\varphi_{\nu,j} ^{\left(x,y\right)}\varphi_{\nu^{\prime},j^{\prime}}^{\left(x,y\right)}\big{|} \cos\left(\theta_{\nu,j}-\theta_{\nu^{\prime},j^{\prime}}-\alpha\left(\nu-\nu ^{\prime}\right)\right)d\alpha\] \[+ \frac{1}{2\pi}\int_{0}^{2\pi}\sum_{\begin{subarray}{c}\nu,j\\ \nu^{\prime},j^{\prime}\end{subarray}}\Im\left(\kappa_{\nu,j}^{\ast}\right) \Im\left(\kappa_{\nu^{\prime},j^{\prime}}^{\ast}\right)\big{|}\varphi_{\nu,j} ^{\left(x,y\right)}\varphi_{\nu^{\prime},j^{\prime}}^{\left(x,y\right)}\big{|} \cos\left(\theta_{\nu,j}-\theta_{\nu^{\prime},j^{\prime}}-\alpha\left(\nu-\nu ^{\prime}\right)\right)d\alpha\] \[- \frac{1}{\pi}\int_{0}^{2\pi}\sum_{\begin{subarray}{c}\nu,j\\ \nu^{\prime},j^{\prime}\end{subarray}}\Im\left(\kappa_{\nu,j}^{\ast}\right) \Re\left(\kappa_{\nu^{\prime},j^{\prime}}^{\ast}\right)\big{|}\varphi_{\nu,j} ^{\left(x,y\right)}\varphi_{\nu^{\prime},j^{\prime}}^{\left(x,y\right)}\big{|} \sin\left(\theta_{\nu,j}-\theta_{\nu^{\prime},j^{\prime}}-\alpha\left(\nu-\nu ^{\prime}\right)\right)d\alpha.\] In this equation, only the trigonometric functions are \(\alpha\)-dependent. Calculating the remaining integrals leads to \[\int_{0}^{2\pi}\mathrm{sc}\left(\theta_{\nu,j}-\theta_{\nu^{\prime},j^{\prime}} -\alpha\left(\nu-\nu^{\prime}\right)\right)d\alpha=\begin{cases}2\pi\ \mathrm{sc}\left(\theta_{\nu,j}-\theta_{\nu^{\prime},j^{\prime}}\right)\ \mathbf{if}\ \nu=\nu^{\prime}\\ 0\ \mathrm{otherwise},\end{cases} \tag{5.4}\] where sc can represent the cosine or the sine function. Therefore, \[a\left(x,y\right)=\sum_{\nu} \bigg{[}\sum_{j,j^{\prime}}\Re\left(\kappa_{\nu,j}^{*}\right)\Re \left(\kappa_{\nu,j^{\prime}}^{*}\right)\big{|}\varphi_{\nu,j}^{\left(x,y \right)}\varphi_{\nu,j^{\prime}}^{\left(x,y\right)}\big{|}\cos\left(\theta_{ \nu,j}-\theta_{\nu,j^{\prime}}\right)\] \[+\sum_{j,j^{\prime}}\Im\left(\kappa_{\nu,j}^{*}\right)\Im\left( \kappa_{\nu,j^{\prime}}^{*}\right)\big{|}\varphi_{\nu,j}^{\left(x,y\right)} \varphi_{\nu,j^{\prime}}^{\left(x,y\right)}\big{|}\cos\left(\theta_{\nu,j}- \theta_{\nu,j^{\prime}}\right)\] \[-2\sum_{j,j^{\prime}}\Im\left(\kappa_{\nu,j}^{*}\right)\Re\left( \kappa_{\nu,j^{\prime}}^{*}\right)\big{|}\varphi_{\nu,j}^{\left(x,y\right)} \varphi_{\nu,j^{\prime}}^{\left(x,y\right)}\big{|}\sin\left(\theta_{\nu,j}- \theta_{\nu,j^{\prime}}\right)\bigg{]}, \tag{5.5}\] and by using once again Equation (5.3), Equation (5.2) finally leads to \[a\left(x,y\right)=\sum_{\nu}\big{|}\sum_{j}\kappa_{\nu,j}^{*}\varphi_{\nu,j}^ {\left(x,y\right)}\big{|}^{2}. \tag{5.6}\] Thanks to the use of Bessel coefficients instead of raw pixel values, the classic convolution has been modified into Equation (5.6) in order to achieve rotation equivariance. The operation is still a convolution-like operation as the filters still sweep the input image to progressively construct the feature maps. Therefore, feature maps will be obtained in both a translation and rotation equivariant way. 
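The rotation invariance of Equation (5.6) can also be checked numerically. In the following sketch (our illustration, with random coefficients standing in for an actual image patch and kernel), a rotation is applied as \(\varphi_{\nu,j}\mapsto\varphi_{\nu,j}e^{-i\nu\alpha}\) following Equation (4.8), and the result of the operation is unchanged:

```python
import numpy as np

rng = np.random.default_rng(1)
nu_max, j_max = 4, 3
nus = np.arange(-nu_max, nu_max + 1)

# Random stand-ins for the Bessel coefficients of a local patch (phi) and of
# a learnable kernel (kappa); rows are indexed by nu, columns by j.
phi = rng.normal(size=(nus.size, j_max)) + 1j * rng.normal(size=(nus.size, j_max))
kappa = rng.normal(size=(nus.size, j_max)) + 1j * rng.normal(size=(nus.size, j_max))

def activation(phi):
    # Equation (5.6): a = sum_nu | sum_j conj(kappa_{nu,j}) * phi_{nu,j} |^2
    return np.sum(np.abs(np.sum(np.conj(kappa) * phi, axis=1)) ** 2)

# Rotating the patch by alpha multiplies phi_{nu,j} by exp(-i nu alpha).
alpha = 0.37
phi_rot = phi * np.exp(-1j * nus[:, None] * alpha)

assert np.isclose(activation(phi), activation(phi_rot))  # rotation invariant
```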
Feature maps in B-CNNs are equivariant because they are obtained by a succession of local invariances (in other words, the key operation between the image and the filter in Equation (5.6) is rotation invariant, but as it is successively performed for local parts of the input, it leads to a global rotation equivariance). Nevertheless, by introducing reduction mechanisms in the models to make the feature maps in the final layer of size \(1\times 1\) (by using pooling layers or avoiding padding), the global equivariance can lead to global invariance. Figure 6 presents an example where the equivariance of B-CNNs is compared to vanilla CNNs, and it also presents how a succession of equivariant feature maps leads in this case to a global invariance of the model.

Figure 6: Feature maps obtained with 4 vanilla (a) and Bessel (b) convolutional layers on a sample drawn from the MNIST data set (LeCun et al., 1998). Input 2 is the exact same image as Input 1, except for a \(\frac{\pi}{2}\) rotation. Inputs are of size \(29\times 29\), and no padding is used such that the size of feature maps is progressively reduced until it reaches the final size of \(1\times 1\). One can observe that no equivariance is obtained in the feature maps for the vanilla CNN, leading to different final results. However, B-CNN provides equivariance for the feature maps, leading to a final result that is invariant to the orientation of the image. Indeed, for B-CNN, feature maps for Input 2 are rigorously identical up to a \(\frac{\pi}{2}\) rotation, which is not the case for vanilla CNN. It has to be noted that in this example, equivariance/invariance are rigorously achieved because \(\frac{\pi}{2}\) rotations are well-defined on Cartesian grids.

Finally, one can point out that \(a\left(x,y\right)\in\mathbb{R}\) (even if \(\kappa_{\nu,j}\in\mathbb{C}\) and \(\varphi_{\nu,j}^{\left(x,y\right)}\in\mathbb{C}\)). This is an important property as it allows this operation to be compatible with existing deep learning frameworks (for example, classic activation functions and batch normalization can be used), as opposed to the work of Worrall et al. (2017) that uses values in the complex domain. It is also worth mentioning that this operation is _pseudo_-injective, meaning that different images will lead to different values of \(a\) (_pseudo_ refers to the exception when an image is compared to a rotated version of itself). The proof of pseudo-injectivity is presented in Appendix C.

### Adding the Reflection Equivariance

In order to make B-CNNs also equivariant to reflections, and thus \(O(2)\) equivariant, one can check how Equation (5.6) behaves for an image and its reflection. To do so, let us compute the quantity

\[\delta=\sum_{\nu}\big{|}\sum_{j}\kappa_{\nu,j}^{*}\varphi_{\nu,j}\big{|}^{2}-\sum_{\nu}\big{|}\sum_{j}\kappa_{\nu,j}^{*}\varphi_{\nu,j}^{\text{ref}}\big{|}^{2},\]

where \(\varphi_{\nu,j}^{\text{ref}}\) are the Bessel coefficients of a reflected version of \(\Psi\left(\rho,\theta\right)\). For the operation to be invariant under reflection, \(\delta\) should therefore be equal to \(0\). Thanks to Equation (4.9)
and Equation (4.6), we can write

\[\delta =\sum_{\nu}\big{|}\sum_{j}\kappa_{\nu,j}^{*}\varphi_{\nu,j}\big{|}^{2}-\sum_{\nu}\big{|}\sum_{j}\kappa_{\nu,j}^{*}\left(\left(-1\right)^{\nu}\Re(\varphi_{\nu,j})+i\left(-1\right)^{\nu+1}\Im(\varphi_{\nu,j})\right)\big{|}^{2}\]
\[=\sum_{\nu}\big{|}\sum_{j}\kappa_{\nu,j}^{*}\varphi_{\nu,j}\big{|}^{2}-\sum_{\nu}\big{|}\left(-1\right)^{\nu}\sum_{j}\kappa_{\nu,j}^{*}\varphi_{\nu,j}^{*}\big{|}^{2}\]
\[=\sum_{\nu}\left[\big{|}\sum_{j}\kappa_{\nu,j}^{*}\varphi_{\nu,j}\big{|}^{2}-\big{|}\sum_{j}\kappa_{\nu,j}^{*}\varphi_{\nu,j}^{*}\big{|}^{2}\right].\]

By using again the development that led to Equation (5.5), one can show that

\[\delta=-4\sum_{\nu,j,j^{\prime}}\Im\left(\kappa_{\nu,j}^{*}\right)\Re\left(\kappa_{\nu,j^{\prime}}^{*}\right)\big{|}\varphi_{\nu,j}\varphi_{\nu,j^{\prime}}\big{|}\sin\left(\theta_{\nu,j}-\theta_{\nu,j^{\prime}}\right).\]

This means that \(\delta\) may differ from \(0\), so an \(O(2)\) equivariance will in general not be achieved. The objective is now to slightly modify Equation (5.6) in order to obtain \(\delta=0\). To do so, one can see that the terms that do not vanish are those that involve \(\Im\left(\kappa_{\nu,j}^{*}\right)\Re\left(\kappa_{\nu,j^{\prime}}^{*}\right)\). By avoiding such crossed terms between the real and imaginary parts of \(\kappa_{\nu,j}\), one can obtain \(\delta=0\) and therefore a reflection equivariance, while still keeping the rotation equivariance. This can be achieved by using

\[a\left(x,y\right)=\sum_{\nu}\big{|}\sum_{j}\Re\left(\kappa_{\nu,j}^{*}\right)\varphi_{\nu,j}^{\left(x,y\right)}\big{|}^{2}+\big{|}\sum_{j}\Im\left(\kappa_{\nu,j}^{*}\right)\varphi_{\nu,j}^{\left(x,y\right)}\big{|}^{2}. \tag{5.7}\]

To conclude, B-CNNs can be made \(SO(2)\) equivariant (that is, equivariant to all the continuous planar rotations) by using Equation (5.6) as the operation between the filters and the images, or \(O(2)\) equivariant (that is, equivariant to all the continuous planar rotations and reflections) by using Equation (5.7) instead. Users can decide, based on the application, which equivariance is required.

## 6 Bessel-Convolutional Neural Networks

This section summarizes and gives more intuition about the overall working of B-CNNs. It also presents an efficient way of implementing the previous developments in convolutional neural network architectures. Finally, it is shown how multi-scale equivariance can be added in a straightforward way with this implementation. Multi-scale equivariance means that patterns can be detected even if they appear at slightly different scales in the images. Developments in this section are presented in the particular case of \(SO(2)\) equivariance; we will hence consider Equation (5.6) instead of Equation (5.7). However, the developments can easily be adapted to the second case.

### B-CNNs From a Practical Point of View

The key modification in B-CNNs compared to vanilla CNNs is to replace the element-wise multiplication between raw pixel values and the filters by the mathematical operation described by Equation (5.6). Filters, which are described by their Bessel coefficients \(\{\kappa_{\nu,j}\}\), sweep the image locally, and the Bessel coefficients \(\left\{\varphi_{\nu,j}^{\left(x,y\right)}\right\}\) of the current sub-region of the image are computed. Equation (5.6) is then used to progressively build the feature maps. This process is summarized in Figure 7 and sketched in code below.
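The following toy numpy sketch makes the straightforward strategy concrete (this is our own simplification: the grid sampling, the discrete normalization standing in for \(N_{\nu,j}\), and all function names are illustrative assumptions, not the authors' code). Each patch is decomposed into its Bessel coefficients, which are then combined with the kernel coefficients via Equation (5.6):

```python
import numpy as np
from scipy.special import jv, jnp_zeros

def sampled_basis(n, nu_max, j_max):
    """Sample the basis functions N_{nu,j} J_nu(k_{nu,j} rho) e^{i nu theta}
    on a (2n x 2n) Cartesian grid covering a disc of radius R = n."""
    R = float(n)
    y, x = np.mgrid[-n + 0.5:n, -n + 0.5:n]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    basis, orders = [], []
    for nu in range(-nu_max, nu_max + 1):
        k = jnp_zeros(abs(nu), j_max) / R          # roots of J'_nu(k R) = 0
        for kj in k:
            b = jv(nu, kj * rho) * np.exp(1j * nu * theta)
            b[rho > R] = 0.0
            basis.append(b / np.linalg.norm(b))    # discrete stand-in for N_{nu,j}
            orders.append(nu)
    return np.array(basis), np.array(orders)

def naive_bconv(image, kappa, basis, orders):
    """Straightforward strategy of Figure 7: decompose every patch into its
    Bessel coefficients phi, then combine them with kappa via Equation (5.6)."""
    m = basis.shape[-1]
    out = np.zeros((image.shape[0] - m + 1, image.shape[1] - m + 1))
    for yy in range(out.shape[0]):
        for xx in range(out.shape[1]):
            patch = image[yy:yy + m, xx:xx + m]
            phi = np.tensordot(np.conj(basis), patch, axes=([1, 2], [0, 1]))
            for nu in np.unique(orders):
                sel = orders == nu
                out[yy, xx] += np.abs(np.sum(np.conj(kappa[sel]) * phi[sel])) ** 2
    return out

rng = np.random.default_rng(0)
basis, orders = sampled_basis(n=3, nu_max=2, j_max=2)
kappa = rng.normal(size=orders.size) + 1j * rng.normal(size=orders.size)
feature_map = naive_bconv(rng.random((12, 12)), kappa, basis, orders)
```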
Figure 7: Illustration of how feature maps are obtained in B-CNNs. One can see that B-CNNs still work in a convolutional fashion, but the key operation between the filters and the image is modified.

However, implementing B-CNNs by using this straightforward strategy requires performing many Bessel coefficient decompositions, which is computationally expensive. A more efficient implementation can be obtained by developing Equation (5.6) with Equation (4.3). Indeed, it gives

\[a\left(x,y\right) =\sum_{\nu}\big{|}\sum_{j}\kappa_{\nu,j}^{*}\int_{0}^{2\pi}\int_{0}^{R}\rho\left[N_{\nu,j}J_{\nu}\left(k_{\nu,j}\rho\right)e^{i\nu\theta}\right]^{*}\Psi^{\left(x,y\right)}\left(\rho,\theta\right)d\rho d\theta\big{|}^{2}\]
\[=\sum_{\nu}\big{|}\int_{0}^{2\pi}\int_{0}^{R}\Psi^{\left(x,y\right)}\left(\rho,\theta\right)\sum_{j}\rho\left[N_{\nu,j}J_{\nu}\left(k_{\nu,j}\rho\right)e^{i\nu\theta}\right]^{*}\kappa_{\nu,j}^{*}d\rho d\theta\big{|}^{2},\]

and, by converting to Cartesian coordinates thanks to \(\overset{\sim}{\theta}\equiv\overset{\sim}{\theta}\left(x,y\right)=\arctan\frac{y}{x}\) and \(\overset{\sim}{\rho}\equiv\overset{\sim}{\rho}\left(x,y\right)=\sqrt{x^{2}+y^{2}}\), it leads to

\[a\left(x,y\right)=\sum_{\nu}\big{|}\int_{-R}^{R}\int_{-R}^{R}\Psi^{\left(x,y\right)}\left(x^{\prime},y^{\prime}\right)\sum_{j}\left[N_{\nu,j}\overset{\sim}{J}_{\nu}\left(k_{\nu,j}\overset{\sim}{\rho}\right)e^{i\nu\overset{\sim}{\theta}}\right]^{*}\kappa_{\nu,j}^{*}dx^{\prime}dy^{\prime}\big{|}^{2}, \tag{6.1}\]

where \(\overset{\sim}{J}_{\nu}\left(k_{\nu,j}\rho\right)\) is defined as

\[\overset{\sim}{J}_{\nu}\left(k_{\nu,j}\rho\right)=\begin{cases}J_{\nu}\left(k_{\nu,j}\rho\right)\text{ if }\rho\leq R\\ 0\text{ otherwise.}\end{cases}\]

This definition of \(\overset{\sim}{J}_{\nu}\) is required to compensate for the fact that we are now integrating over the square domain \(\left[-R,R\right]\times\left[-R,R\right]\) instead of the circular domain \(D^{2}\) of radius \(R\). By defining

\[T_{\nu,j}\left(x,y\right)=N_{\nu,j}\overset{\sim}{J}_{\nu}\left(k_{\nu,j}\overset{\sim}{\rho}\right)e^{-i\nu\overset{\sim}{\theta}}, \tag{6.2}\]

one can finally obtain

\[a\left(x,y\right) =\sum_{\nu}\left|\Psi\left(x,y\right)*\sum_{j}T_{\nu,j}\left(x,y\right)\kappa_{\nu,j}^{*}\right|^{2}\]
\[=\sum_{\nu}\left|\Psi\left(x,y\right)*F_{\nu}\left(x,y\right)\right|^{2}, \tag{6.3}\]

where

\[F_{\nu}\left(x,y\right)=\sum_{j}T_{\nu,j}\left(x,y\right)\kappa_{\nu,j}^{*}. \tag{6.4}\]

From a numerical point of view, there are two main advantages to using Equation (6.3) instead of directly implementing Equation (5.6) as presented in Figure 7. Firstly, this equation directly involves the input \(\Psi\left(x,y\right)\) instead of its Bessel coefficients. Secondly, \(T_{\nu,j}\left(x,y\right)\) does not depend on the input or the weights of the model. Therefore, it can be computed only once at the initialization of the model. After discretizing space, \(T_{\nu,j}\left(x,y\right)\) can be seen as a transformation matrix that maps the weights of the model from the Fourier-Bessel transform domain (the Bessel coefficients of the filters \(\left\{\kappa_{\nu,j}\right\}\)) to a set of filters in the direct space \(\left\{F_{\nu}\left(x,y\right)\right\}\). The feature maps can then be obtained by applying classic convolutions between the input and these filters. Also note that the \(\nu_{\max}\) convolutions that need to be performed can be wrapped into the output channel dimension so that only one call to the convolution function is needed.
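A minimal numpy sketch of this efficient formulation could look as follows (again our own illustration; the discrete normalization standing in for \(N_{\nu,j}\) and the use of scipy's `fftconvolve` are simplifying assumptions, and Algorithm 1 below remains the authoritative description):

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.special import jv, jnp_zeros

def make_transform(n, nu_max, j_max):
    """Pre-compute the transformation matrices T_{nu,j}(x, y) of Equation (6.2)
    on a (2n x 2n) grid; this is done only once, at model initialization."""
    R = float(n)
    y, x = np.mgrid[-n + 0.5:n, -n + 0.5:n]
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    T = np.zeros((2 * nu_max + 1, j_max, 2 * n, 2 * n), dtype=complex)
    for i, nu in enumerate(range(-nu_max, nu_max + 1)):
        k = jnp_zeros(abs(nu), j_max) / R
        for j in range(j_max):
            Tij = jv(nu, k[j] * rho) * np.exp(-1j * nu * theta)
            Tij[rho > R] = 0.0                      # the tilde-J truncation
            T[i, j] = Tij / np.linalg.norm(Tij)     # discrete stand-in for N_{nu,j}
    return T

def bconv_forward(image, kappa, T):
    """Forward pass of Equation (6.3): map the Bessel-domain weights kappa to
    direct-space filters F_nu (Equation (6.4)), convolve once per nu, then
    take the squared modulus and sum over nu."""
    out = 0.0
    for i in range(T.shape[0]):
        F = np.tensordot(np.conj(kappa[i]), T[i], axes=1)   # F_nu(x, y)
        out = out + np.abs(fftconvolve(image, F, mode="valid")) ** 2
    return out

rng = np.random.default_rng(0)
T = make_transform(n=4, nu_max=3, j_max=2)
kappa = rng.normal(size=T.shape[:2]) + 1j * rng.normal(size=T.shape[:2])
feature_map = bconv_forward(rng.random((28, 28)), kappa, T)
```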
Algorithm 1 presents how to efficiently implement a B-CNN layer in practice, including the initialization step and the forward propagation.

### Numerical Complexity of B-CNNs

Regarding the computational complexity, if the input of a vanilla CNN layer is of size \(\left[W\times H\times C_{in}\right]\) and if it implements \(C_{out}\) filters of size \(\left[2n\times 2n\right]\), the number of mathematical operations to perform for a forward pass (assuming that padding is used along with unitary strides) is

\[N_{op} =WH\left[4n^{2}C_{in}+\left(4n^{2}C_{in}-1\right)\right]C_{out}=8WHn^{2}C_{in}C_{out}-WHC_{out}.\]

Indeed, \(C_{out}\) filters made of \(2n\times 2n\times C_{in}\) parameters will sweep \(WH\) local parts of the input image. For each local part, \(4n^{2}C_{in}\) multiplications are performed as well as \(4n^{2}C_{in}-1\) additions. Therefore, it leads to a computational complexity of \(\mathcal{O}\left(WHn^{2}C_{in}C_{out}\right)\). Compared to vanilla CNNs, B-CNNs need to perform more operations as it is required (i) to compute \(F_{\nu}\left(x,y\right)\), (ii) to perform \(2\nu_{\max}\) times more convolutions and (iii) to compute the squared modulus and a sum over \(\nu\). Step (i) consists of \(\nu_{\max}\) matrix multiplications, which involve for each element in the final matrix \(j_{\max}\) scalar multiplications and \(j_{\max}-1\) additions. Step (ii) is the same as for vanilla CNNs, except that this should be performed for each \(\nu\) and both for the real and imaginary parts of \(F_{\nu}\left(x,y\right)\). Finally, the squared modulus in step (iii) involves \(2WHC_{out}\nu_{\max}\) multiplications and \(WHC_{out}\nu_{\max}\) additions, and the final summation over \(\nu\) involves \(\nu_{\max}-1\) additions per output element. At the end, the final numbers of operations to perform for each step are

\[N_{op}^{(i)} =\nu_{\max}\left[j_{\max}+\left(j_{\max}-1\right)\right]4n^{2}C_{in}C_{out},\]
\[N_{op}^{(ii)} =WH\left[4n^{2}C_{in}+\left(4n^{2}C_{in}-1\right)\right]2C_{out}\nu_{\max},\]
\[N_{op}^{(iii)} =3WHC_{out}\nu_{\max}+\left(\nu_{\max}-1\right)WHC_{out}.\]

However, by looking at Figure 8, one can see that both \(\nu_{\max}\) and \(j_{\max}\) scale linearly with \(n\), thanks to the constraint expressed by Equation (4.7). It follows that

\[N_{op}\propto n^{4}C_{in}C_{out}+WHn^{3}C_{in}C_{out}+WHnC_{out},\]

resulting in a computational complexity of \(\mathcal{O}\left(WHn^{3}C_{in}C_{out}\right)\) for a forward pass in a B-CNN layer. Since generally \(n\ll\min\left(W,H,C_{in},C_{out}\right)\), the increase in computational time compared to vanilla CNNs is reasonable with respect to the gain in expressiveness. Furthermore, the computational complexity of \(E(2)\)-equivariant models (Weiler and Cesa, 2019) for a symmetry group \(\mathcal{G}\) is \(\mathcal{O}\left(WHn^{2}C_{in}C_{out}\big{|}\mathcal{G}\big{|}\right)\) as \(C_{out}\) is artificially increased by the number of discrete operations in \(\mathcal{G}\). Hence, if \(n<\big{|}\mathcal{G}\big{|}\) (which is generally the case as \(n=5,7\text{ or }9\), and \(\mathcal{G}=C_{8},C_{16},D_{8}\text{ or }D_{16}\) leading to \(\big{|}\mathcal{G}\big{|}=8,16,16\text{ or }32\), respectively), B-CNNs are more efficient from a computational point of view.

### Rotation Equivariance From a Numerical Point of View

As opposed to most of the state-of-the-art methods, B-CNNs do not rely on a particular discretization of the continuous (\(S\)-)\(O(2)\) group.
The equivariance is automatically guaranteed by processing the input image thanks to an (\(S\)-)\(O(2)\) equivariant mathematical operation, which replaces the simple convolution in the direct space. It follows that B-CNNs directly provide theoretical guarantees regarding the equivariance to the continuous set of rotation angles \([0,2\pi[\). Indeed, as the Bessel coefficients of the filter \(\{\kappa_{\nu,j}\}\) are not computed but defined as the learnable parameters of the model, they do not involve any numerical error. Furthermore, Equation (5.6) is rotation invariant, and this holds independently of the number of Bessel coefficients used (that is, independently of \(k_{\max}\)). However, one should mention that, from a numerical point of view, exact (\(S\)-)\(O(2)\) equivariance is rarely possible due to the discrete nature of numerical images. Indeed, \(\Psi\left(x,y\right)\) is only known on a finite Cartesian grid, and rotations of angles in \(\left[0,2\pi\right[\ \backslash\ \left\{0,\frac{\pi}{2},\pi,\frac{3\pi}{2}\right\}\) are not well defined and will result in numerical errors. The only source of errors in B-CNNs regarding the \((S\)-\()O(2)\) equivariance lies in the discretization of \(\Psi^{\left(x,y\right)}\left(x^{\prime},y^{\prime}\right)\) on a \(\left[2n\times 2n\right]\) Cartesian grid, which is involved by Equation (6.1). Therefore, numerical errors may be reduced by increasing \(n\), that is, the size of the filters.

Figure 8: Relation between \(n\) and \(\nu_{\max}/j_{\max}\) thanks to Equation (4.7).

### Adding a Multi-Scale Equivariance

Previous sections focus on achieving \(SO(2)\) and \(O(2)\) equivariance. However, for particular applications, patterns of interest may also vary in scale. Consider, for example, biomedical applications where tumors may be of different sizes. Prior works (Xu et al., 2014; Li et al., 2019; Ghosh and Gupta, 2019) already showed that a multi-scale equivariance can be incorporated into CNNs, leading to better performances for such applications. The aim of this section is to present how these prior works can be easily transposed to the particular case of B-CNNs. As the size of the filter in the direct space is determined by the discretization of \(T_{\nu,j}\left(x,y\right)\), it is easy in B-CNNs to implement already-existing scaling invariance techniques. To do so, we only need to pre-compute multiple versions of \(T_{\nu,j}\left(x,y\right)\) for different kernel sizes, and only keep the one with the highest response. More formally, the idea is to define multiple transformation matrices \(T_{\nu,j}^{n}\left(x,y\right)\) that act on circular domains of different sizes \(n\). Those matrices can be pre-computed at initialization. They can then be used to project the filters in the Fourier-Bessel transform domain to filters of different sizes in the direct space. One can then consider keeping only the most active feature maps. The process is summarized in Figure 9.

## 7 Experiments

This section presents the details of all the experiments performed to assess and compare the equivariance obtained with B-CNNs with other state-of-the-art methods. The data sets used are presented, as well as the experimental setup. After that, quantitative results are presented for each data set.

### Data sets

Three data sets are used to assess the performances in different practical situations:

* The MNIST (LeCun et al., 1998) data set is a classical baseline for image classification.
This data set is made of \(28\times 28\) grayscale images of handwritten digits, each belonging to one out of 10 different classes. More precisely, four variants of this data set are considered: (i) MNIST, (ii) MNIST-rot, (iii) MNIST-back and (iv) MNIST-rot-back. In the _rot_ variants, images are randomly rotated by an angle \(\alpha\in\left[0,2\pi\right[\). In the _back_ variants, a patch from a black and white image was used as the background for the digit image. This adds irrelevant information that can be disturbing for some architectures. All these MNIST data sets are perfectly balanced.

* The Galaxy10 DECals data set is a subset of the original Galaxy Zoo data set (Willett et al., 2013). This data set is initially made of \(256\times 256\) RGB images of galaxies that belong to one out of 10 roughly balanced classes, representing different possible morphologies according to experts. Images are resized to \(128\times 128\) in our work for computational reasons.

* The Malaria (Yu et al., 2020) data set is made of \(64\times 64\) RGB microscope images of blood films. Those images belong to two perfectly balanced classes indicating the presence or absence of the parasites responsible for Malaria.

An overview of all three data sets is presented in Table 1, along with visual examples.

Figure 9: This figure presents how B-CNNs can handle multi-scale equivariance. In this case, a single filter represented by its Bessel coefficients \(\{\kappa_{\nu,j}\}\) is projected in the direct space thanks to different transformation matrices. The filter is mapped to filters of size \(7\times 7\), \(9\times 9\) and \(11\times 11\). Max pooling is then used to only keep the most responding feature maps. Note that only the real parts of the projected filters are represented here, for convenience.

### Experimental Setup

In order to perform this empirical study, (i) \(E(2)\)-equivariant CNNs from Weiler and Cesa (2019) (\(E(2)\)-CNNs), (ii) Harmonic Networks from Worrall et al. (2017) (HNets) as well as (iii) vanilla CNNs are considered along with our method (B-CNNs). This choice is motivated by the fact that \(E(2)\)-CNNs and HNets constitute the state of the art for constraining CNNs with known symmetry groups. Each technique is tested in different setups (mainly, for different symmetry groups or different representations of the same group). The different setups for each method are described below:

* For \(E(2)\)-CNNs, we consider the discrete \(C_{4}\) (\(\{n\frac{\pi}{2}\}_{n=1}^{4}\) rotations) and \(C_{8}\) (\(\{n\frac{\pi}{4}\}_{n=1}^{8}\) rotations) symmetry groups using a regular representation, as well as the continuous ones, \(SO(2)\) (all the continuous rotations) and \(O(2)\) (all the continuous rotations and the reflections along vertical and horizontal axes), using irreducible representations. Those setups are a subset of all the setups tested by the authors of \(E(2)\)-CNNs. More details about this and how \(E(2)\)-CNNs work can be found in the work of Weiler and Cesa (2019). Furthermore, the authors provide an implementation for \(E(2)\)-CNNs that has been used in this work.

* For HNets, similarly to what the authors did in their work, two different setups to achieve \(SO(2)\) invariance are tested using an approximation to the first and second order. In this work, we again use the implementation provided by the authors of \(E(2)\)-CNNs, who re-implement HNets in their own framework, for convenience.
* Regarding our B-CNNs, four setups are considered to achieve \(SO(2)\) or \(O(2)\) equivariance, with or without scale invariance (denoted by the presence or not of "+" in our tables and figures), with the computation of \(k_{\max}\) as described by Equation (4.7). Another setup for \(SO(2)\) invariance with a stronger cutoff frequency, which corresponds to half the initial \(k_{\max}\), is also considered. This last setup is motivated by the empirical observation that it often leads to better performances.

* Finally, a vanilla CNN with the same architecture as for the other methods, as well as a ResNet-18 (He et al., 2016), are also trained for reference.

The architectures are inspired by the work of Weiler and Cesa (2019) and are presented in a generic fashion in Table 2. Note that the size of the filters is larger than conventional sizes in CNNs. The reason why it is preferable to increase the size of the filters in those cases is explained in Section 6.3. The same template architecture is used for all the methods (except for the ResNet-18 architecture, which is kept unmodified) and data sets. Nonetheless, minor modifications are sometimes performed. Firstly, the number of filters in each convolutional layer should be adapted from one method to another, in order to keep the same number of trainable parameters. To do so, a parameter \(\lambda\) is introduced to manually scale the number of filters and guarantee the same number of trainable parameters for all the methods. To give an idea, \(\lambda\) is arbitrarily set to 1 for B-CNNs with the soft cutoff frequency policy, and the corresponding number of trainable parameters is close to \(115,000\). Secondly, \(E(2)\)-CNNs require a particular operation called _invariant projection_ before applying the dense layer. This specific operation is not performed for the other methods. Thirdly, each convolutional layer is followed by a batch-normalization and a _ReLU_ activation function, except for B-CNNs. Indeed, we empirically observed that both the batch-normalization and the _ReLU_ activation function generally degrade convergence for B-CNNs, while this is not the case for the other methods. Therefore, we use another type of batch-normalization layer, introduced by Li et al. (2021), as well as _softsign_ activation functions, which seem to perform better in our case. Note that we still do not fully understand why classic batch-normalization and _ReLU_ activation functions reduce the performances of B-CNNs. Finally, the padding and the final layer are adapted according to the considered data set, as they involve different tasks and image sizes.

\begin{table} \begin{tabular}{|l|c|c|c|c|} \hline **Data set** & \(N\) & \(C\) & **Task** & **Resolution** \\ \hline MNIST(-rot)(-back)(-rot-back) & \(62,000\) & \(10\) & Multi-class classification & \(28\times 28\times 1\) \\ \hline Galaxy10 & \(17,736\) & \(10\) & Multi-class classification & \(128\times 128\times 3\) \\ \hline Malaria & \(27,558\) & \(2\) & Binary classification & \(64\times 64\times 3\) \\ \hline \end{tabular} \end{table} Table 1: Overview of the data sets. \(N\) is the total number of available data, and \(C\) is the number of classes.
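For concreteness, a rough PyTorch rendition of the vanilla instantiation of this template could look as follows. This is our reading of Table 2 (MNIST padding column) with \(\lambda\) exposed as an argument, not the authors' released code; the `conv` argument marks where an equivariant layer (B-conv, \(E(2)\)-conv or HNet-conv) with a compatible signature would be substituted:

```python
import torch.nn as nn

def template_cnn(lam=1, in_ch=1, n_classes=10, conv=nn.Conv2d):
    """Vanilla instantiation of the generic template of Table 2 (MNIST
    padding column); the final softmax is left to the loss function."""
    c = [8 * lam, 16 * lam, 24 * lam, 24 * lam, 32 * lam, 40 * lam]

    def block(cin, cout, k, pad):
        # Conv layer followed by batch-norm and ReLU, as in the template.
        return [conv(cin, cout, k, padding=pad), nn.BatchNorm2d(cout), nn.ReLU()]

    layers = (block(in_ch, c[0], 9, 4) + block(c[0], c[1], 7, 3)
              + [nn.AvgPool2d(2)]
              + block(c[1], c[2], 7, 3) + block(c[2], c[3], 7, 3)
              + [nn.AvgPool2d(2)]
              + block(c[3], c[4], 7, 3) + block(c[4], c[5], 7, 0)
              + [nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                 nn.Linear(c[5], n_classes)])
    return nn.Sequential(*layers)
```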
As it is expected that constraining CNNs with symmetry groups becomes more useful when less data are available (as CNNs no longer have to learn the invariances by themselves), experiments are performed in (i) High, (ii) Intermediate and (iii) Low data settings for each data set, with different data augmentation strategies. The different data settings correspond to different sizes for the training sets. Attention is paid to keeping the same percentage of samples of each target class, in order to avoid biases. For the MNIST data sets, those settings correspond to the use of (i) 20%, (ii) 2% and (iii) 0.2% of the total number of available data for training. On top of this, three different data augmentation policies are tested. Firstly, models are trained on MNIST-rot(-back) using online data augmentation (random rotations are applied to the images before they are given as input). Secondly, models are still trained on MNIST-rot(-back) but without further data augmentation (the same image is always seen by the model in the same orientation). Thirdly, models are trained on MNIST(-back) while being tested on randomly rotated versions of the test images. Those setups allow us to see how the amount of data impacts the performance of the models, and how much the models still rely on the training phase to achieve the desired invariances. For the other data sets, the different data settings correspond to the use of (i) 80%, (ii) 8% and (iii) 0.8% of the total number of available training data, respectively. As for the MNIST data sets, models are again trained with and without using data augmentation. However, in this case, the data augmentation also performs random planar reflections (not pertinent for MNIST). Also note that only two data augmentation policies are possible, because a non-rotated version of the images for Galaxy10 DECals and Malaria is meaningless (as opposed to MNIST, where digits have a well-defined orientation, a priori). For the High and Intermediate data setting experiments, models are trained using the Adam optimizer for 50 epochs. A warm-up cosine decay scheduler is used, which progressively increases the learning rate from 0 to 0.001 during the first 10 epochs before slowly decreasing it to 0 following a cosine function during the remaining epochs. For the low data setting experiments, 150 epochs are performed, with the warm-up phase during the first 30 epochs. Each experiment is performed on 3 independent runs.

### Results on MNIST(-rot)

Table 3 presents the results obtained on the MNIST(-rot) data sets. For the sake of completeness, Figure 10 also presents all the corresponding training curves. From a general point of view, one can observe that a proper use of \(E(2)\)-CNNs, HNets and B-CNNs can lead to better performances than vanilla CNNs, even if the number of parameters is much smaller in the case of equivariant models (\(\pm 115,000\) parameters against \(\pm 11,000,000\) for ResNet-18). Vanilla CNN techniques are only able to compete with equivariant models in high data settings, and when performing data augmentation (first column).

\begin{table} \begin{tabular}{|l c|c|c|c|} \hline Layer & # C & MNIST(-rot)(-back) & Galaxy10 DECals & Malaria \\ \hline _Conv layer_\(9\times 9\) & \(8\lambda\) & pad 4 & pad 0 & pad 4 \\ _Conv layer_\(7\times 7\) & \(16\lambda\) & pad 3 & pad 0 & pad 3 \\ Av. pool. \(2\times 2\) & - & pad 0 & pad 0 & pad 0 \\ _Conv layer_\(7\times 7\) & \(24\lambda\) & pad 3 & pad 0 & pad 3 \\ _Conv layer_\(7\times 7\) & \(24\lambda\) & pad 3 & pad 0 & pad 0 \\ Av. pool.
\(2\times 2\) & - & pad 0 & pad 0 & pad 0 \\ _Conv layer_\(7\times 7\) & \(32\lambda\) & pad 3 & pad 0 & pad 0 \\ _Conv layer_\(7\times 7\) & \(40\lambda\) & pad 0 & pad 0 & pad 0 \\ (_Inv. projection_) & - & - & - & - \\ Global av. pool. & - & - & - & - \\ Dense layer & \(\rightarrow\) & 10, _softmax_ & 10, _softmax_ & 2, _softmax_ \\ \hline \end{tabular} \end{table} Table 2: Generic architecture used for the different data sets. _Conv layer_ can either be vanilla-conv, B-conv, \(E(2)\)-conv or HNet-conv. After each _Conv layer_, a batch-normalization as well as an activation function is applied (_softsign_ for B-conv, _ReLU_ for others). The _Invariant projection_ is only required in \(E(2)\)-CNNs. As the number of parameters in a filter may differ between the different methods, a parameter \(\lambda\) is introduced. This parameter is fixed for each method in order to tweak the number of filters (# C) so that the total numbers of trainable parameters are as close as possible to each other.

Figure 10: Learning curves obtained for the different methods on the MNIST-rot and MNIST data sets. Those learning curves are averaged over 3 independent runs. The legend is the same for all the graphs. The symbols \(*\), \(\dagger\) and \(\star\) refer to the use of \(E(2)\)-CNNs, HNets and B-CNNs, respectively.

This clearly highlights the fact that vanilla CNNs are sensitive to the quality and the amount of data when learning the invariances. Furthermore, even in the most favorable situation for vanilla CNNs, convergence is much slower than for equivariant models. Next, by taking a closer look at the equivariant models, it appears that the \(E(2)\)-CNNs that use the straightforward discrete groups \(C_{4}\) and \(C_{8}\) perform quite well, and are even the best performing models when used along with data augmentation (first 3 columns). However, performances fall a little on the MNIST-rot data set without data augmentation (middle 3 columns), and become much worse compared to the \((S-)O(2)\) equivariant models when trained on the MNIST data set, when they cannot see rotated versions of the digits (last 3 columns). Even if those models are better than vanilla CNNs, it also appears that they still rely on training to learn truly continuous rotation invariance, which was expected. It is also interesting to mention that using a symmetry group that is not appropriate may be worse than not using any symmetry group at all. For example, in the high data setting with data augmentation, vanilla CNNs perform better than \(O(2)\)-based models. Finally, almost all the \((S-)O(2)\) equivariant models seem to achieve very similar performances on MNIST-rot, with and without data augmentation. Nonetheless, one can observe that the \(SO(2)\) B-CNNs with the strong cutoff policy are always among the top-3 performing models, and achieve significantly better results than all the other models when only trained on MNIST (last 3 columns). This highlights the fact that the \(SO(2)\) invariance achieved by design in the B-CNNs is stronger than the one achieved by other models, allowing generalization to rotated versions of digits, even if none of those are observed during training.

### Results on MNIST(-rot)-back

Table 4 presents the results obtained on the MNIST(-rot)-back data sets, which are variants of the MNIST data set with randomly rotated digits and black and white images as background. For the sake of completeness, Figure 11 also presents all the corresponding training curves.
For vanilla CNNs, the conclusions are the same as for MNIST(-rot). Performances quickly drop when using a smaller amount of data, or when data augmentation is not performed properly. Now, in contrast to the observations for the MNIST(-rot) data set, it appears that B-CNNs are, from a general point of view, significantly better than other equivariant models, in each setup. For the high data setting on MNIST-back (without seeing rotated images during training), Figure 11 clearly reveals several groups of plateaus corresponding to vanilla CNNs, discrete \(E(2)\)-CNNs, \(O(2)\)\(E(2)\)-CNNs, \(SO(2)\)\(E(2)\)-CNNs and HNets, and finally all the B-CNN models, with the \(SO(2)\) ones on top of them. Again, in addition to the better performances, one should also highlight the faster convergence of the B-CNNs.

### Results on Galaxy10 DECals

Table 5 presents the results obtained on the Galaxy10 DECals data set. For the sake of completeness, Figure 12 also presents all the corresponding training curves.

Figure 11: Learning curves obtained for the different methods on the MNIST-back and MNIST-rot-back data sets. Those learning curves are averaged over 3 independent runs. The legend is the same for all the graphs. The symbols \(*\), \(\dagger\) and \(\star\) refer to the use of \(E(2)\)-CNNs, HNets and B-CNNs, respectively.

Figure 12: Learning curves obtained for the different methods on the Galaxy10 DECals data set. Those learning curves are averaged over 3 independent runs. The legend is the same for all the graphs. The symbols \(*\), \(\dagger\) and \(\star\) refer to the use of \(E(2)\)-CNNs, HNets and B-CNNs, respectively.

From those results, one can see again that using the symmetry groups \(C_{4}\) and \(C_{8}\) already allows \(E(2)\)-CNNs to achieve very good results. For Galaxy10 DECals, this observation holds for each setup, even without using data augmentation. \(SO(2)\)-based B-CNNs with the strong cutoff policy are again among the best performing models when used without data augmentation. It is interesting to see that using the \(O(2)\) group does not always lead to better performances compared to results obtained using \(SO(2)\), despite the fact that planar reflections are meaningful for this application.

### Results on Malaria

Table 6 presents the results obtained on the Malaria data set. For the sake of completeness, Figure 13 also presents all the corresponding training curves. Interestingly, for this data set, B-CNNs seem to perform slightly worse than other methods. However, one can see that the ResNet-18 is among the best performing models in the high data setting with data augmentation and remains competitive in other settings. This highlights the fact that \((S-)O(2)\) invariance may be less useful here than for the other tested applications. Still, performances are most of the time very close to each other.

## 8 Discussion

This section first provides a global discussion regarding the different experiments and results presented in the previous section. Then, we discuss the choice between models with automatic symmetry discovery and models such as B-CNNs that are based on applying (strong) constraints to guarantee user-defined symmetries.

### Global Discussion of the Results

From the experiments and the preliminary discussions in the previous section, several insights can be drawn.

Equivariant models vs. vanilla CNNs. In the particular case where a large amount of data is available, vanilla CNNs do a very decent job, being only marginally below the top accuracy.
Thanks to data augmentation, those models seem to be able to learn meaningful invariances. However, by taking a look at the training curves, it clearly appears that convergence is much slower. This is easily explained by the fact that vanilla CNNs have to learn the invariances, which is not the case for equivariant models. Therefore, computation time and energy may be saved by instead using an equivariant model trained for far fewer epochs. Furthermore, this drawback is emphasized when vanilla CNNs must work with less data and/or without data augmentation, sometimes leading to very poor performances in those cases.

Discrete vs. continuous groups. As already spotted by Weiler and Cesa (2019), using discrete groups already largely improves performances, and such models sometimes even constitute the best performing ones. Nonetheless, the experiments also highlight that this does not guarantee equivariance and often still requires a larger amount of data as well as data augmentation.

Figure 13: Learning curves obtained for the different methods on the Malaria data set. Those learning curves are averaged over 3 independent runs. The legend is the same for all the graphs. The symbols \(*\), \(\dagger\) and \(\star\) refer to the use of \(E(2)\)-CNNs, HNets and B-CNNs, respectively.

B-CNNs vs. other equivariant models. In our experiments, B-CNNs are most of the time at least able to achieve state-of-the-art performances. In low data settings, they are often the best performing models. In particular, B-CNNs with strong cutoff policies seem very efficient. They achieve top-1 accuracy in 8 setups and top-3 accuracy in 17 setups, among a total of 30 different setups (all the columns for all the data sets). Now, by considering all the B-CNN models at the same time, they together achieve top-1 accuracy in 16 setups and top-3 accuracy in 22 setups. For comparison, this is better than \(E(2)\)-CNNs and HNets, which achieve top-1 accuracy in 9 and 5 setups, and top-3 accuracy in 22 and 13 setups, respectively. Also, from the experiments on MNIST and MNIST-back (no rotation during training), one can see that the invariance achieved by design in B-CNNs is stronger than for other methods, as the performances of, for example, the strong cutoff policy are significantly higher. Finally, we can observe that B-CNNs are able to achieve good performances even without data augmentation, which is less (or not at all) the case for other methods. As data augmentation increases the computational cost of training (because of the increase of the number of training data and/or the number of training epochs), B-CNNs may therefore be a more favorable approach.

### Automatic Symmetry Discovery vs. Constraints-based Models

This work focuses on constraint-based models that assume that users know _a priori_ the appropriate symmetry group(s) for the application at hand. However, it can sometimes be hard to obtain this prior knowledge as it requires a good understanding of the data/application. Methods like the one proposed by Dehmamy et al. (2021) (L-CNNs) therefore attempt to automatically infer the invariance(s) that need to be enforced during training. Here, we advocate that the constraint-based approach remains relevant and often competitive. Firstly, B-CNNs and other constraint-based methods can be adapted to handle symmetries in a data-driven fashion. For example, one can consider a method similar to the one in Section 6.4 to let the model choose meaningful symmetries.
By designing the architecture with multiple networks in parallel that provide different invariances (one could simultaneously consider \(SO(2)\), \(O(2)\), \(SO(2)+\) and \(O(2)+\)), the model can benefit from multiple views of the problem and use the features that are the most relevant. In a data-driven fashion, the relevant part(s) of the network will be retained so as to enforce appropriate invariance(s). This approach does not rely on the hypothetical ability of vanilla CNNs to learn specific types of invariance, but rather builds on models that are designed for that. Secondly, B-CNNs and other constraint-based methods have the advantage of guaranteeing specific invariances. Instead of using data augmentation and relying on a proper learning of the invariances, mathematically sound mechanisms are used, such as Bessel coefficients for B-CNNs. However, these mechanisms can only deal with an invariance that can be described with reasonable mathematical complexity. Yet, handling symmetries in a data-driven fashion (discovering useful symmetries during training, with vanilla CNNs or other more adapted methods like L-CNNs) is not a one-size-fits-all solution and some invariances may be impossible to learn without additional mechanisms. Thirdly, the way constraints are enforced in B-CNNs allows them to exhibit an invariance that may not even be present in the training data set. Hence, as shown in the above experiments, B-CNNs do not rely on data augmentation, which increases the computational cost5, nor do they require seeing training data in different orientations to achieve rotation invariance. This is a consequence of the mathematical soundness of our approach.

Footnote 5: Data augmentation is considered as costly because it leads to (i) an increase of the number of training data, and/or (ii) an increase of the required number of training epochs.

To conclude, automatic symmetry discovery and constraint-based models are two paradigms that should be used in different situations. While automatic symmetry discovery is useful when no prior knowledge is available and the invariance may be complex to describe mathematically, it relies on an appropriate learning of the invariances that may fail. On the other hand, constraint-based models like ours require prior knowledge of the invariances involved in the application, but can provide strong guarantees.

## 9 Conclusion and Future Work

This work provides a comprehensive explanation of B-CNNs, including their mathematical foundations and key findings. Improvements are presented and compared to the initial work of Delchevalerie et al. (2021), including making B-CNNs also equivariant to reflections and multi-scale. Furthermore, the previously troublesome meta-parameters \(m_{\max}\) and \(j_{\max}\) that were hard to fine-tune have been replaced with a single meta-parameter \(k_{\max}\) for which an optimal choice can be computed using the Nyquist frequency. An extensive empirical study has been conducted to assess the performance of B-CNNs compared to already existing techniques. One can conclude that B-CNNs have, most of the time, better performances than the other state-of-the-art methods, and achieve in the worst cases roughly the same performances. In low data settings, they actually outperform other models most of the time. This is mainly due to the B-CNNs' ability to maintain robust invariances without resorting to data augmentation techniques, which is often not the case for other models.
Finally, B-CNNs do not involve particular, more exotic representations for the feature maps (such as complex-valued feature maps) and are therefore highly compatible with already existing deep learning techniques and frameworks. Regarding future work, it could be interesting to tailor B-CNNs for segmentation tasks, given their relevance in fields such as biomedical and satellite imaging. Such domains benefit greatly from rotation and reflection equivariant models, making B-CNNs a promising candidate for these tasks. Finally, a major current concern in deep learning is the robustness against adversarial attacks or, more generally, small perturbations in the image. It could be interesting to evaluate whether the use of Bessel coefficients and the \((S-)O(2)\) equivariant constraint makes B-CNNs more robust to those specific perturbations or not.

## Acknowledgments and Disclosure of Funding

The authors thank Jerome Fink and Pierre Poitier for their comments and the fruitful discussions on this paper. A.M. is funded by the Fund for Scientific Research (F.R.S.-FNRS) of Belgium. V.D. benefits from the support of the Walloon region with a Ph.D. grant from FRIA (F.R.S.-FNRS). A.B. is supported by a Fellowship of the Belgian American Educational Foundation. This research used resources of PTCI at UNamur, supported by the F.R.S.-FNRS under the convention n. 2.5020.11.

## Appendix A

In this Appendix we prove that the Bessel basis described in Section 4.1 can be used as an orthonormal basis by carefully choosing \(k_{\nu,j}\).

**Theorem 1**: _Let \(D^{2}\) be a circular domain of radius \(R\) in \(\mathbb{R}^{2}\). Let \(J_{\nu}\left(x\right)\) be the Bessel function of the first kind of order \(\nu\), and let \(k_{\nu,j}\) be defined such that \(J_{\nu}^{\prime}\left(k_{\nu,j}R\right)=0\), \(\forall\nu,j\in\mathbb{N}\). Then_

\[\left\{N_{\nu,j}J_{\nu}\left(k_{\nu,j}\rho\right)e^{i\nu\theta}\right\}\!,\ \text{where}\ N_{\nu,j}=1/\sqrt{2\pi\int_{0}^{R}\rho J_{\nu}^{2}\left(k_{\nu,j}\rho\right)d\rho}\]

_is an orthonormal basis well suited to express any square-integrable function \(f\) such that \(f:D^{2}\subset\mathbb{R}^{2}\longrightarrow\mathbb{R}\)._

**Proof** To prove this, we will use the fact that

\[\int_{0}^{2\pi}e^{i\theta\left(\nu^{\prime}-\nu\right)}d\theta=\int_{0}^{2\pi}\cos\left(\theta\left(\nu^{\prime}-\nu\right)\right)d\theta+i\int_{0}^{2\pi}\sin\left(\theta\left(\nu^{\prime}-\nu\right)\right)d\theta=2\pi\delta_{\nu,\nu^{\prime}}, \tag{A.1}\]

since \(\nu^{\prime}-\nu\) is always an integer in our use of Bessel functions.
We also use Lommel's integrals, which in our particular case read

\[\int_{0}^{R}\rho J_{\nu}\left(k_{\nu,j}\rho\right)J_{\nu}\left(k_{\nu,j^{\prime}}\rho\right)d\rho=\begin{cases}\frac{1}{k_{\nu,j^{\prime}}^{2}-k_{\nu,j}^{2}}\left[\rho\left(k_{\nu,j}J_{\nu}^{\prime}\left(k_{\nu,j}\rho\right)J_{\nu}\left(k_{\nu,j^{\prime}}\rho\right)-k_{\nu,j^{\prime}}J_{\nu}^{\prime}\left(k_{\nu,j^{\prime}}\rho\right)J_{\nu}\left(k_{\nu,j}\rho\right)\right)\right]_{0}^{R}&\text{if }k_{\nu,j}\neq k_{\nu,j^{\prime}},\\ \left[\frac{\rho^{2}}{2}\left[J_{\nu}^{\prime 2}\left(k_{\nu,j}\rho\right)+\left(1-\frac{\nu^{2}}{k_{\nu,j}^{2}\rho^{2}}\right)J_{\nu}^{2}\left(k_{\nu,j}\rho\right)\right]\right]_{0}^{R}&\text{otherwise}.\end{cases} \tag{A.2}\]

By taking into account that \(J_{\nu}^{\prime}\left(k_{\nu,j}R\right)=0\), Lommel's integrals lead to

\[\int_{0}^{R}\rho J_{\nu}\left(k_{\nu,j}\rho\right)J_{\nu}\left(k_{\nu,j^{\prime}}\rho\right)d\rho=\left(\frac{R^{2}}{2}-\frac{\nu^{2}}{2k_{\nu,j}^{2}}\right)J_{\nu}^{2}\left(k_{\nu,j}R\right)\delta_{j,j^{\prime}}, \tag{A.3}\]

the first case of Equation (A.2) vanishing at both boundaries. Now, by using Equation (A.1)

\[\int_{0}^{2\pi}\int_{0}^{R}\rho\left[N_{\nu,j}J_{\nu}\left(k_{\nu,j}\rho\right)e^{i\nu\theta}\right]^{*}\left[N_{\nu^{\prime},j^{\prime}}J_{\nu^{\prime}}\left(k_{\nu^{\prime},j^{\prime}}\rho\right)e^{i\nu^{\prime}\theta}\right]d\theta d\rho= 2\pi\delta_{\nu,\nu^{\prime}}\int_{0}^{R}\rho N_{\nu,j}J_{\nu}\left(k_{\nu,j}\rho\right)N_{\nu,j^{\prime}}J_{\nu}\left(k_{\nu,j^{\prime}}\rho\right)d\rho,\]

which, by using Equation (A.3), leads to

\[2\pi\delta_{\nu,\nu^{\prime}}\int_{0}^{R}\rho N_{\nu,j}J_{\nu}\left(k_{\nu,j}\rho\right)N_{\nu,j^{\prime}}J_{\nu}\left(k_{\nu,j^{\prime}}\rho\right)d\rho= 2\pi N_{\nu,j}^{2}\left(\frac{R^{2}}{2}-\frac{\nu^{2}}{2k_{\nu,j}^{2}}\right)J_{\nu}^{2}\left(k_{\nu,j}R\right)\delta_{\nu,\nu^{\prime}}\delta_{j,j^{\prime}}.\]

To conclude this proof, one can show by using Equation (A.3) again that

\[N_{\nu,j}^{2}=\frac{1}{2\pi\int_{0}^{R}\rho J_{\nu}^{2}\left(k_{\nu,j}\rho\right)d\rho}=\frac{1}{2\pi\left(\frac{R^{2}}{2}-\frac{\nu^{2}}{2k_{\nu,j}^{2}}\right)J_{\nu}^{2}\left(k_{\nu,j}R\right)},\]

and then finally,

\[\int_{0}^{2\pi}\int_{0}^{R}\rho\left[N_{\nu,j}J_{\nu}\left(k_{\nu,j}\rho\right)e^{i\nu\theta}\right]^{*}\left[N_{\nu^{\prime},j^{\prime}}J_{\nu^{\prime}}\left(k_{\nu^{\prime},j^{\prime}}\rho\right)e^{i\nu^{\prime}\theta}\right]d\theta d\rho=\delta_{\nu,\nu^{\prime}}\delta_{j,j^{\prime}},\]

which is the definition of an orthonormal basis6.

Footnote 6: Note that the proof for \(k_{\nu,j}\) defined by \(J_{\nu}\left(k_{\nu,j}R\right)=0\) is now straightforward, since it only changes which term survives in Equation (A.2). The rest of the development remains the same.

## Appendix B

In this Appendix we prove the properties that link \(\varphi_{\nu,j}\) and \(\varphi_{-\nu,j}\).

**Theorem 2**: _Let \(\varphi_{\nu,j},\forall\nu,j\in\mathbb{N}\) be the Bessel coefficients of a particular function \(\Psi\left(\rho,\theta\right):D^{2}\subset\mathbb{R}^{2}\longrightarrow\mathbb{R}\) defined on a circular domain of radius \(R\), that is,_

\[\varphi_{\nu,j}=\int_{0}^{2\pi}\int_{0}^{R}\rho\left[N_{\nu,j}J_{\nu}\left(k_{\nu,j}\rho\right)e^{i\nu\theta}\right]^{*}\Psi\left(\rho,\theta\right)d\theta d\rho.\]

_Then, these coefficients are not all independent.
They are linked by the relations_

\[\varphi_{-\nu,j}=\left(-1\right)^{\nu}\varphi_{\nu,j}^{*}\Longleftrightarrow\begin{cases}\Re\left(\varphi_{-\nu,j}\right)=\left(-1\right)^{\nu}\Re\left(\varphi_{\nu,j}\right)\\ \Im\left(\varphi_{-\nu,j}\right)=\left(-1\right)^{\nu+1}\Im\left(\varphi_{\nu,j}\right).\end{cases}\]

**Proof** To prove this, we will use two properties of the Bessel functions. Firstly,

\[J_{-\nu}\left(x\right)=\left(-1\right)^{\nu}J_{\nu}\left(x\right),\]

and secondly,

\[J_{\nu}^{\prime}\left(x\right)=\frac{1}{2}\left(J_{\nu-1}\left(x\right)-J_{\nu+1}\left(x\right)\right).\]

Then, by using these two relations, one can show that

\[J^{\prime}_{-\nu}\left(x\right)=\frac{1}{2}\left(J_{-\nu-1}\left(x\right)-J_{-\nu+1}\left(x\right)\right)=\frac{\left(-1\right)^{\nu+1}}{2}\left(J_{\nu+1}\left(x\right)-J_{\nu-1}\left(x\right)\right)=\left(-1\right)^{\nu}J^{\prime}_{\nu}\left(x\right). \tag{B.1}\]

However, if \(k_{-\nu,j}\) is such that \(J^{\prime}_{-\nu}\left(k_{-\nu,j}R\right)=0\), Equation (B.1) also leads to \(J^{\prime}_{\nu}\left(k_{-\nu,j}R\right)=0\). Then, the only possibility is that \(k_{-\nu,j}=k_{\nu,j}\) (because we still have \(J^{\prime}_{\nu}\left(k_{\nu,j}R\right)=0\)). Now, regarding the normalization factor,

\[N_{-\nu,j}^{-1}=\sqrt{2\pi\int_{0}^{R}\rho J_{-\nu}^{2}\left(k_{-\nu,j}\rho\right)d\rho}=\sqrt{2\pi\int_{0}^{R}\rho J_{\nu}^{2}\left(k_{\nu,j}\rho\right)\left(-1\right)^{2\nu}d\rho}=N_{\nu,j}^{-1}.\]

One can now put all this together to show that

\[\varphi_{-\nu,j} =\int_{0}^{2\pi}\int_{0}^{R}\rho\left[N_{-\nu,j}J_{-\nu}\left(k_{-\nu,j}\rho\right)e^{-i\nu\theta}\right]^{*}\Psi\left(\rho,\theta\right)d\theta d\rho\]
\[=\left(-1\right)^{\nu}\int_{0}^{2\pi}\int_{0}^{R}\rho\left[N_{\nu,j}J_{\nu}\left(k_{\nu,j}\rho\right)e^{-i\nu\theta}\right]^{*}\Psi\left(\rho,\theta\right)d\theta d\rho\]
\[=\left(-1\right)^{\nu}\varphi_{\nu,j}^{*},\]

which concludes the proof.

## Appendix C

In this Appendix, we prove that the rotation invariant operation described in Section 5.1 is pseudo-injective, meaning that different images lead to different results; _pseudo_ refers to the exception when an image is compared to a rotated version of itself.

**Theorem 3**: _Let \(\left\{\varphi_{\nu,j}\right\}\) be the Bessel coefficients of a particular function \(\Psi\left(\rho,\theta\right):D^{2}\subset\mathbb{R}^{2}\longrightarrow\mathbb{R}\) defined on a circular domain of radius \(R\). Let \(\left\{\varphi_{\nu,j}^{\prime}\right\}\) be the Bessel coefficients of another particular function \(\Psi^{\prime}\left(\rho,\theta\right)\) defined on the same domain \(D^{2}\). Finally, let \(\left\{\kappa_{\nu,j}\right\}\) be some arbitrary complex numbers. Then,_

\[\sum_{\nu}\big{|}\sum_{j}\kappa_{\nu,j}^{*}\varphi_{\nu,j}\big{|}^{2}=\sum_{\nu}\big{|}\sum_{j}\kappa_{\nu,j}^{*}\varphi_{\nu,j}^{\prime}\big{|}^{2}\Rightarrow\exists\alpha:\Psi\left(\rho,\theta\right)=\Psi^{\prime}\left(\rho,\theta-\alpha\right),\forall\rho,\forall\theta.\]

**Proof** To make the developments easier, one can use the bra-ket notation commonly used in quantum mechanics to denote quantum states. In this notation, \(\big{|}v\big{\rangle}\) is called a _ket_ and denotes a vector in an abstract complex vector space, and \(\langle v|\) is called a _bra_ and corresponds to the same vector but in the dual vector space.
It follows that \(\big{|}v\big{\rangle}^{\dagger}=\langle v\big{|}\); the inner product between two vectors is conveniently expressed by \(\langle v|f\rangle\), and the outer product by \(\big{|}f\rangle\langle v\big{|}\). By using the fact that \(\big{|}z\big{|}^{2}=zz^{*}\) and the bra-ket notation,

\[\sum_{\nu}\big{|}\sum_{j}\kappa_{\nu,j}^{*}\varphi_{\nu,j}\big{|}^{2}=\sum_{\nu}\big{|}\sum_{j}\kappa_{\nu,j}^{*}\varphi_{\nu,j}^{\prime}\big{|}^{2}\]

leads to

\[\sum_{\nu}\langle\kappa_{\nu}\big{|}\varphi_{\nu}\rangle\langle\kappa_{\nu}\big{|}\varphi_{\nu}\rangle^{*}=\sum_{\nu}\langle\kappa_{\nu}\big{|}\varphi_{\nu}^{\prime}\rangle\langle\kappa_{\nu}\big{|}\varphi_{\nu}^{\prime}\rangle^{*},\]

where \(\kappa_{\nu}\) (resp., \(\varphi_{\nu}\)) is a vector that contains all the different values \(\kappa_{\nu,j}\) (resp., \(\varphi_{\nu,j}\)) for this particular \(\nu\). This equation can further be written

\[\sum_{\nu}\langle\kappa_{\nu}\big{|}\varphi_{\nu}\rangle\langle\varphi_{\nu}\big{|}\kappa_{\nu}\rangle=\sum_{\nu}\langle\kappa_{\nu}\big{|}\varphi_{\nu}^{\prime}\rangle\langle\varphi_{\nu}^{\prime}\big{|}\kappa_{\nu}\rangle. \tag{C.1}\]

However, since the \(\kappa_{\nu,j}\)'s are totally arbitrary, the only possibility to satisfy Equation (C.1) is that

\[\sum_{\nu}\big{|}\varphi_{\nu}\rangle\langle\varphi_{\nu}\big{|}=\sum_{\nu}\big{|}\varphi_{\nu}^{\prime}\rangle\langle\varphi_{\nu}^{\prime}\big{|}.\]

In quantum mechanics, \(\big{|}\varphi_{\nu}\rangle\langle\varphi_{\nu}\big{|}\) is called the density matrix of \(\varphi_{\nu}\), and it is known that the only way to achieve identical density matrices for two states \(\varphi_{\nu}\) and \(\varphi_{\nu}^{\prime}\) is that they differ only by a phase factor7. \(\blacksquare\)

Footnote 7: Indeed, if \(\varphi_{\nu}^{\prime}=e^{i\nu\alpha}\varphi_{\nu}\), then \(\big{|}\varphi_{\nu}^{\prime}\rangle\langle\varphi_{\nu}^{\prime}\big{|}=|e^{i\nu\alpha}\varphi_{\nu}\rangle\langle e^{i\nu\alpha}\varphi_{\nu}|=e^{i\nu\alpha}e^{-i\nu\alpha}\big{|}\varphi_{\nu}\rangle\langle\varphi_{\nu}\big{|}=\big{|}\varphi_{\nu}\rangle\langle\varphi_{\nu}\big{|}\)
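As a closing illustration for the appendices, the following numpy/scipy sketch (ours; the test function, grid resolution and quadrature rule are arbitrary choices) numerically verifies the relation \(\varphi_{-\nu,j}=\left(-1\right)^{\nu}\varphi_{\nu,j}^{*}\) of Theorem 2 for a real image on the unit disc:

```python
import numpy as np
from scipy.special import jv, jnp_zeros

R, nu, jmax = 1.0, 3, 2
k = jnp_zeros(nu, jmax)[-1] / R                  # one k_{nu,j} with J'_nu(k R) = 0

rho = np.linspace(0.0, R, 800)
theta = np.linspace(0.0, 2 * np.pi, 800, endpoint=False)
P, TH = np.meshgrid(rho, theta, indexing="ij")
psi = np.cos(3 * TH) * P ** 2 + np.sin(TH) * P   # an arbitrary real test function
drho = rho[1] - rho[0]

def coeff(order):
    # Discrete version of phi = int rho [N J_order e^{i order theta}]^* psi;
    # both coefficients share the same quadrature rule, so Theorem 2 should
    # hold up to floating-point precision.
    N = 1.0 / np.sqrt(2 * np.pi * np.sum(rho * jv(order, k * rho) ** 2) * drho)
    integrand = P * np.conj(N * jv(order, k * P) * np.exp(1j * order * TH)) * psi
    return integrand.sum() * drho * (2 * np.pi / theta.size)

phi_plus, phi_minus = coeff(nu), coeff(-nu)
assert np.isclose(phi_minus, (-1) ** nu * np.conj(phi_plus))
```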
2307.04937
Towards Fair Graph Neural Networks via Graph Counterfactual
Graph neural networks (GNNs) have shown great ability in representation learning on graphs, facilitating various tasks. Despite their great performance in modeling graphs, recent works show that GNNs tend to inherit and amplify the bias from training data, causing concerns about the adoption of GNNs in high-stake scenarios. Hence, many efforts have been taken for fairness-aware GNNs. However, most existing fair GNNs learn fair node representations by adopting statistical fairness notions, which may fail to alleviate bias in the presence of statistical anomalies. Motivated by causal theory, there are several attempts utilizing graph counterfactual fairness to mitigate root causes of unfairness. However, these methods suffer from non-realistic counterfactuals obtained by perturbation or generation. In this paper, we take a causal view on the fair graph learning problem. Guided by the causal analysis, we propose a novel framework CAF, which can select counterfactuals from training data to avoid non-realistic counterfactuals and adopt selected counterfactuals to learn fair node representations for the node classification task. Extensive experiments on synthetic and real-world datasets show the effectiveness of CAF. Our code is available at https://github.com/TimeLovercc/CAF-GNN.
Zhimeng Guo, Jialiang Li, Teng Xiao, Yao Ma, Suhang Wang
2023-07-10T23:28:03Z
http://arxiv.org/abs/2307.04937v2
# Towards Fair Graph Neural Networks via Graph Counterfactual

###### Abstract.

Graph neural networks (GNNs) have shown great ability in representation learning on graphs, facilitating various tasks. Despite their great performance in modeling graphs, recent works show that GNNs tend to inherit and amplify the bias from training data, raising concerns about the adoption of GNNs in high-stakes scenarios. Hence, many efforts have been taken for fairness-aware GNNs. However, most existing fair GNNs learn fair node representations by adopting statistical fairness notions, which may fail to alleviate bias in the presence of statistical anomalies. Motivated by causal theory, there are several attempts utilizing graph counterfactual fairness to mitigate root causes of unfairness. However, these methods suffer from non-realistic counterfactuals obtained by perturbation or generation. In this paper, we take a causal view on the fair graph learning problem. Guided by the causal analysis, we propose a novel framework CAF, which can select counterfactuals from training data to avoid non-realistic counterfactuals and adopt selected counterfactuals to learn fair node representations for the node classification task. Extensive experiments on synthetic and real-world datasets show the effectiveness of CAF. Our code is available at [https://github.com/TimeLovercc/CAF-GNN](https://github.com/TimeLovercc/CAF-GNN).

Graph neural networks; Counterfactual fairness; Causal learning

+ Footnote †: ccs: Computing methodologies Machine learning.
## 1. Introduction

Existing attempts at graph counterfactual fairness either simply flip sensitive
attributes or generate counterfactuals with GraphVAE, which can easily result in non-realistic counterfactuals. Such non-realistic counterfactuals may disrupt the underlying latent semantic structure, thereby potentially undermining the model's performance. This is because simply flipping sensitive attributes cannot model the influence on other features or the graph structure causally caused by sensitive attributes (Beng et al., 2015), and the generative approach lacks supervision of real counterfactuals and can be over-complicated (Zhu et al., 2017).

Motivated by the discussion above, in this paper, we investigate whether one can obtain counterfactuals within the training data. For example, if a female applicant was rejected by a college, we aim to find a male applicant with a similar background to serve as the counterfactual applicant. Thus, we can get realistic counterfactuals and avoid the ill-supervised generation process. To achieve our goal, we are faced with several challenges: (i) Graph data is quite complex, so it is infeasible to directly find counterfactuals in the original data space; moreover, some guidance or rules are needed to find the counterfactuals. (ii) To achieve graph counterfactual fairness, the learned representation should be invariant to the sensitive attributes and to information causally influenced by them; it is therefore critical to design proper supervision to help models get rid of sensitive information.

To tackle the aforementioned challenges, we propose a causal view of the graph, the label and the sensitive attribute. The causal interpretation guides us to find counterfactuals and learn disentangled representations, where the disentangled content representations are informative to the labels and invariant to the sensitive attributes. Guided by the causal analysis, we propose a novel framework, **C**ounterfactual **A**ugmented **F**air GNN (CAF), to simultaneously learn fair node representations for graph counterfactual fairness and keep the performance on the node classification task. Specifically, based on the causal interpretation, we derive several constraints to enforce that the learned representations are invariant across different sensitive attributes. To obtain proper counterfactuals to guide representation learning, we utilize labels and sensitive attributes as guidance to select potential counterfactuals in representation space. Our main contributions are:

* We provide a causal formulation of the fair graph learning process and the fair node representation learning task.
* We propose a novel framework CAF to learn node representations for graph counterfactual fairness. Specifically, we find counterfactuals in representation space and design novel constraints to learn the content representations.
* We conduct extensive experiments on real-world datasets and a synthetic dataset to show the effectiveness of our model.

## 2. Related Works

**Graph Neural Networks.** Graph neural networks (GNNs) have dominated various tasks on graph-structured data, such as node classification (Beng et al., 2015; Liu et al., 2016; Wang et al., 2017; Wang et al., 2018), graph classification (Wang et al., 2018) and link prediction (Wang et al., 2018). Existing GNNs can be categorized into spatial-based GNNs and spectral-based GNNs. Spatial-based GNNs leverage the graph structure directly, focusing on the relationships between nodes and their immediate neighbors to inform feature learning.
On the other hand, spectral-based GNNs operate in the spectral domain defined by the graph Laplacian and its eigenvectors, making them better suited to capture global properties of the graph. The superior performance of GNNs has greatly extended their application scenarios (Liu et al., 2016). For example, banks may leverage GNNs to process transaction networks to detect abnormal user behavior (Beng et al., 2015). The applications in critical decision-making systems place higher requirements on GNNs, such as being fair and interpretable (Wang et al., 2018). Despite their extensive utility and efficacy, recent studies (Beng et al., 2015; Liu et al., 2016; Wang et al., 2018) show that GNNs can harbor implicit biases against different groups, which can lead to skewed or unfair outcomes. This bias issue is particularly critical when GNNs are deployed in high-stakes scenarios, making it necessary to ensure fairness in the modeling process (Zhu et al., 2017). Thus, mitigating bias and promoting fairness in GNNs are active and necessary research areas (Beng et al., 2015). The source of bias in GNNs primarily originates from two areas. First, it comes from the inherent bias in the input data, which may contain unequal representation or prejudiced information about nodes or connections in the graph. Second, the bias can stem from the algorithmic design of the GNN itself, which may unintentionally emphasize certain features or connections over others during the learning process. Therefore, there is a trend in the research community to design fairer GNN models for graph-based tasks (Beng et al., 2015; Liu et al., 2016; Wang et al., 2018).

**Fairness in GNNs.** Fairness is a widespread issue of machine learning systems (Zhu et al., 2017; Wang et al., 2018). Researchers evaluate the fairness of models with many kinds of fairness notions, including group fairness (Liu et al., 2016; Wang et al., 2018), individual fairness (Wang et al., 2018) and counterfactual fairness (Wang et al., 2018). These metrics can also be used to measure the fairness of GNNs (Beng et al., 2015; Liu et al., 2016). The most commonly used fairness notions in GNNs are statistical parity (Wang et al., 2018) and equal opportunity (Liu et al., 2016). FairGNN (Beng et al., 2015) utilizes adversarial training to establish fairness in graph-based models, refining its representation through an adversary tasked with predicting sensitive attributes. EDITS (Beng et al., 2015), on the other hand, is a pre-processing technique that focuses on ensuring fairness in graph learning. It aims to eliminate sensitive information from the graph data by correcting any inherent biases present within the input network. However, these methods and their metrics are developed based on correlation (Zhu et al., 2017), which has been found to be unable to deal with statistical anomalies, such as Simpson's paradox (Simpson, 1966). Based on causal theory, counterfactual fairness can model the causal relationships and get rid of correlation-induced abnormal behavior (Wang et al., 2018). There is an increasing interest in applying counterfactual fairness on graphs to design fairer GNNs (Beng et al., 2015; Liu et al., 2016). NIFTY (Beng et al., 2015) perturbs sensitive attributes for each node to obtain counterfactuals and omits the causal relationships among variables.
GEAR (Zhu et al., 2017) uses GraphVAE (Nissim et al., 2018) to generate the graph structure and node features causally influenced by the sensitive attributes. For more details about counterfactual learning on graphs, please refer to the survey (Beng et al., 2015). Our paper is inherently different from existing work: (i) Unlike existing works that might generate unrealistic counterfactuals, our work avoids the generation process and selects counterfactuals with sensitive attributes and labels as guidance; and (ii) We propose a causal view to understand the source of bias. Based on the causal interpretation, we also design several constraints to help our model learn fair node representations.

## 3. Preliminaries

In this section, we start by introducing the necessary notation and defining the problem at hand. Following this, we employ the Structural Causal Model to frame the issue, which then motivates our solution: the disentangled fair representation learning method.

### Notations and Problem Definition

Throughout the paper, we use italicized uppercase letters to represent random variables (e.g., \(S,E\)) and italicized lowercase letters to denote specific values of scalars (e.g., \(s\), \(y_{i}\)). Non-italicized bold lowercase and uppercase letters are used to denote specific values of vectors (e.g., \(\mathbf{x}_{i}\)) and matrices (e.g., \(\mathbf{X}\)), respectively. Let \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbf{X})\) denote an attributed graph, where \(\mathcal{V}=\{v_{1},...,v_{N}\}\) is the set of \(N\) nodes, \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) is the set of edges, and \(\mathbf{X}\in\mathbb{R}^{N\times D}\) is the node attribute matrix. The \(i\)-th row of \(\mathbf{X}\), i.e., \(\mathbf{x}_{i}\), is the feature vector of node \(v_{i}\). \(\mathbf{A}\in\{0,1\}^{N\times N}\) is the adjacency matrix of the graph \(\mathcal{G}\), where \(\mathbf{A}_{ij}=1\) if nodes \(v_{i}\) and \(v_{j}\) are connected; otherwise \(\mathbf{A}_{ij}=0\). We use \(\mathbf{s}\in\{0,1\}^{N\times 1}\) to denote the sensitive attributes, where \(s_{i}\) is the sensitive attribute of \(v_{i}\). Following (Kang et al., 2017), we only consider binary sensitive attributes and leave the extension to multi-category sensitive attributes as future work. We use \(\mathbf{y}\in\{1,...,c\}^{N\times 1}\) to denote the ground-truth node labels, where \(y_{i}\) is the label of \(v_{i}\); for convenience, we assume that both target labels and sensitive attributes are binary variables. For the semi-supervised node classification task, only a subset of nodes \(\mathcal{V}_{L}\subseteq\mathcal{V}\) is labeled for training and the remaining nodes \(\mathcal{V}_{U}=\mathcal{V}\backslash\mathcal{V}_{L}\) are unlabeled. Given \(\mathbf{X}\), \(\mathbf{A}\) and the training labels \(\mathcal{Y}_{L}\), the goal of semi-supervised node classification is to learn a mapping function \(f:(\mathbf{A},\mathbf{X})\rightarrow\mathcal{Y}_{U}\) that predicts the labels of the unlabeled nodes, where \(\mathcal{Y}_{U}\) is the set of predicted labels of the unlabeled nodes \(\mathcal{V}_{U}\); the classifier should achieve satisfactory node classification performance and fairness performance simultaneously.
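To make the notation concrete, the following sketch sets up the objects involved with random placeholder data (all sizes and the 50% labeled split are illustrative, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 1000, 16                          # number of nodes N, attribute dimension D

X = rng.normal(size=(N, D))              # node attribute matrix X in R^{N x D}
A = rng.random((N, N)) < 0.01            # adjacency matrix A in {0, 1}^{N x N}
A = np.triu(A, 1)
A = A | A.T                              # undirected: symmetric with empty diagonal
s = rng.integers(0, 2, N)                # binary sensitive attributes s
y = rng.integers(0, 2, N)                # binary ground-truth labels y

is_labeled = np.zeros(N, dtype=bool)     # V_L: labeled nodes used for training
is_labeled[: N // 2] = True              # V_U = V \ V_L remains unlabeled
```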
### The Desiderata for Fair Graph Learning

GNNs have shown remarkable capabilities in the realm of semi-supervised node classification. However, they are not immune to bias issues, primarily stemming from imbalanced or prejudiced input data, and potentially from the structural design of the GNNs themselves, which may inadvertently prioritize certain features or connections. Therefore, substantial efforts have been directed towards developing fairness-aware methodologies within GNNs. The majority of these methods strive to ensure correlation-based fairness notions, such as demographic parity or equality of opportunity. However, these correlation-based fairness notions can be inherently flawed, particularly in the presence of statistical anomalies, which calls for more nuanced and robust approaches to achieving fairness in GNNs. A recent advance (Kang et al., 2017) shows that causal-based fairness notions can help resolve this issue. Thus, to help design a fair GNN classifier, we take a deep causal look under the observed graph. Without loss of generality, in this work we focus on the node classification task and construct the Structural Causal Model (Zhu et al., 2018) shown in Figure 1. It presents the causal relationships among five variables: sensitive attribute \(S\), ground-truth label \(Y\), environment feature \(E\), content feature \(C\) and ego-graph \(G\) for each node. Each link denotes a deterministic causal relationship between two variables. We offer the following explanations for the SCM:

* \(S\to E\). The variable \(E\) denotes latent environment features that are determined by the sensitive attribute \(S\). For example, people of different genders will have different heights or other physical characteristics; here \(S\) is the sensitive attribute "gender" and \(E\) comprises the physical characteristics that are causally determined by it. This relationship leads to bias in latent feature space, which we explain shortly.
* \(C\to Y\). The variable \(C\) denotes the content feature that determines the ground-truth label \(Y\). Taking credit scoring as an example, ideally we assign credit scores using personal information not related to the sensitive attribute, i.e., we use the content feature \(C\) instead of \(E\) to assign the credit score \(Y\).
* \(E\to G\gets C\). The ego-graph \(G\) is determined by the content feature \(C\) and the environment feature \(E\), which are two disjoint parts. \(E\) and \(C\) are latent features and \(G\) is the observed ego-graph. Considering a one-hop ego-graph, it contains the social connections of the center node and the observed features of the center node. The causal relationship indicates that the environment feature \(E\) and the content feature \(C\) determine one's social connections and personal features (node attributes).

The SCM paves the way for understanding the source of bias and how to design a fair GNN classifier. Next, we give details about the source of bias and disentangled learning. Our objective is to approximate the content feature \(C\) with a content representation denoted as \(\hat{C}\), and similarly, approximate the environment feature \(E\) with an environment representation denoted as \(\hat{E}\). To streamline our discussion, we slightly abuse notation by also employing the symbols \(C\) and \(E\) to signify the corresponding content and environment representations throughout the remainder of the paper.

#### 3.2.1. Source of Bias
From the causal graph, we can observe that the sensitive variable \(S\) and the label variable \(Y\) are independent of each other, i.e., the only path from \(S\) to \(Y\), namely \(S\to E\to G\gets C\gets Y\), is blocked by the collider \(G\). However, it is worth noting that \(S\) and \(Y\) are **dependent conditioned on \(G\)**, i.e.,
\[P(Y,S|G)\neq P(Y|G)P(S|G). \tag{1}\]
This conditional dependency of \(Y\) and \(S\) on \(G\) is one major reason for biased predictions. If we directly learn a GNN model that aims to predict \(Y\) based on \(G\), then, as \(Y\) and \(S\) are dependent given \(G\), the learned label \(Y\) will be correlated with \(S\), resulting in predictions biased by the sensitive attribute \(S\). Alternatively, we can understand the bias by treating existing GNNs as composed of a feature extractor \(g\) and a classifier \(c\). The feature extractor \(g\) takes the subgraph centered at a node as input and learns the node representation \(\mathbf{z}=g(G)\). Then the classifier \(c\) uses the representation \(\mathbf{z}\) to predict the label as \(\hat{y}=c(\mathbf{z})\). As \(G\) is dependent on \(E\) and \(C\), the learned representation \(\mathbf{z}\) is likely to contain mixed information of both \(E\) and \(C\). Hence, the predicted label \(\hat{y}\) is also likely to be correlated with \(S\).

Figure 1. Structural Causal Model for model prediction. We use white color to denote latent variables and gray color to denote the observed variables.

#### 3.2.2. Disentangled Fair Representation Learning

The above analysis shows that, in order to obtain fair predictions, we need to learn disentangled representations \(E\) and \(C\) that block the path from \(S\) to \(Y\) conditioned on \(G\), and only use the content information \(C\) to predict \(Y\), i.e., \(P(Y|C)\). As \(C\) determines \(Y\), it contains all the label information needed to predict \(Y\). Meanwhile, observing \(E\) and \(C\) blocks the conditional path from \(S\) to \(Y\), i.e., \(P(Y,S|E,C,G)=P(Y|C,E,G)P(S|C,E,G)\). Note that observing \(C\) blocks the path from \(E\) to \(Y\) and the path from \(G\) to \(Y\); hence \(P(Y|C,E,G)=P(Y|C)\). Observing \(E\) blocks the path from \(S\) to \(G\) and the path from \(S\) to \(C\); thus \(P(S|C,E,G)=P(S|E)\). This gives us
\[P(Y,S|E,C,G)=P(Y|C)P(S|E). \tag{2}\]
The above equation shows that observing \(E\) and \(C\) makes \(Y\) and \(S\) independent, and that \(P(Y|C)\) is unbiased. Hence, if we can learn disentangled latent representations \(E\) and \(C\), we can use \(C\) for fair classification. The main challenge, however, is that we do not have ground-truth \(E\) and \(C\) to help train a model that learns disentangled representations. Fortunately, we can use the SCM to derive several properties of the optimal representations, which will guide the learning of the latent representations \(C\) and \(E\) (a small simulation illustrating the conditional dependence of Eq. (1) follows this list):

* _Invariance:_ \(C\perp\!\!\!\perp E\). This property can be understood from two perspectives. The content representations should be independent of the sensitive attributes and of the environment representation induced by the sensitive attribute. Meanwhile, the environment representations should be independent of the labels and of the content representation that is informative to the labels.
* _Sufficiency:_ \((C,E)\to G\). The combined representation can be used to reconstruct the observed graph.
* _Informativeness:_ \(C\to Y\). The content representations should have the capacity to give accurate predictions of the labels \(Y\).
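The collider effect behind Eq. (1) can be reproduced in a few lines. In the following illustrative simulation, \(S\) and \(Y\) are sampled independently, yet they become strongly correlated once we condition on an observed quantity influenced by both (a scalar stand-in for the ego-graph \(G\)):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
s = rng.binomial(1, 0.5, n)              # sensitive attribute S
y = rng.binomial(1, 0.5, n)              # label Y, independent of S
e = s + rng.normal(0.0, 0.5, n)          # environment feature, S -> E
c = y + rng.normal(0.0, 0.5, n)          # content feature, tied to Y
g = e + c                                # observed collider, standing in for G

print(np.corrcoef(s, y)[0, 1])           # ~ 0: S and Y are marginally independent
sel = np.abs(g - 1.0) < 0.1              # condition on the collider G
print(np.corrcoef(s[sel], y[sel])[0, 1]) # clearly negative: dependent given G
```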
## 4. Methodology

The causal view suggests learning disentangled representations \(\mathbf{c}\) and \(\mathbf{e}\) for each node \(v\), with \(\mathbf{c}\) capturing the content information that is useful for label prediction and irrelevant to the sensitive attributes, and \(\mathbf{e}\) capturing the environment information that depends on the sensitive attribute only. With this disentanglement, \(\mathbf{c}\) can be used to give fair predictions. However, how to effectively disentangle \(\mathbf{c}\) and \(\mathbf{e}\) remains a question, given that we have no ground truth for the disentangled representations. Intuitively, for a node \(v\) with sensitive attribute \(s\), its content representation \(\mathbf{c}\) should remain the same when the sensitive attribute is flipped to \(1-s\), while its environment representation \(\mathbf{e}\) should change correspondingly. Hence, if we knew the counterfactual of node \(v\), we would be able to utilize it to help learn disentangled representations for fair classification; the counterfactual, however, is not observed. To address these challenges, we propose a novel framework CAF as shown in Figure 2 (a), which is composed of: (i) a GNN encoder that takes the ego-graph \(\mathcal{G}\) of node \(v\) and learns the disentangled representations \(\mathbf{c}\) and \(\mathbf{e}\); (ii) a counterfactual augmentation module, which aims to discover a counterfactual for each factual observation and utilizes the counterfactuals to help learn disentangled representations; and (iii) a fair classifier which takes \(\mathbf{c}\) as input for fair classification. Next, we give the details of each component.

### Disentangled Representation Learning

For each node \(v_{i}\), the content representation \(\mathbf{c}_{i}\) should capture the node attribute and neighborhood information that is important for predicting the label, while the environment representation \(\mathbf{e}_{i}\) should capture all important information relevant to the sensitive attribute. As GNNs have shown great ability in modeling graph-structured data, we adopt a GNN to learn \(\mathbf{c}_{i}\) and \(\mathbf{e}_{i}\). Instead of adopting two GNNs to learn \(\mathbf{c}_{i}\) and \(\mathbf{e}_{i}\) separately, we adopt a single GNN to reduce the number of parameters. We empirically found that using two GNNs and one GNN yields similar performance, owing to the constraints we design to disentangle \(\mathbf{c}_{i}\) and \(\mathbf{e}_{i}\), which will be introduced later. Specifically, the GNN \(f_{\theta}\) parameterized by \(\theta\) takes \(\mathcal{G}\) as input and learns the representations as:
\[[\mathbf{C},\mathbf{E}]=\mathbf{H}=f_{\theta}(\mathbf{A},\mathbf{X}), \tag{3}\]
where \(\mathbf{H}\in\mathbb{R}^{N\times d}\) is the learned representation matrix whose \(i\)-th row, \(\mathbf{h}_{i}\), is the representation of node \(v_{i}\). We treat the first \(d_{\mathbf{c}}\) columns as the content representation matrix \(\mathbf{C}\) and the next \(d_{\mathbf{e}}\) columns as the environment representation matrix \(\mathbf{E}\). Note that \(d=d_{\mathbf{c}}+d_{\mathbf{e}}\); in our implementation, we set \(d_{\mathbf{c}}=d_{\mathbf{e}}\). \(\mathbf{C}\in\mathbb{R}^{N\times d_{\mathbf{c}}}\) is the content representation matrix whose \(i\)-th row, \(\mathbf{c}_{i}\), is the content representation of node \(v_{i}\). Similarly, \(\mathbf{E}\in\mathbb{R}^{N\times d_{\mathbf{e}}}\) is the environment representation matrix whose \(i\)-th row, \(\mathbf{e}_{i}\), is the environment representation of node \(v_{i}\). \(f_{\theta}\) is flexible and can be realized by various GNNs such as GCN (Gori et al., 2017) and GraphSAGE (Gori et al., 2017).
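As a concrete and deliberately minimal instance of Eq. (3), the sketch below uses a single mean-aggregation layer in place of a full GCN or GraphSAGE encoder; the layer choice, dimensions and names are illustrative, not the authors' implementation:

```python
import torch
import torch.nn as nn

class DisentangledEncoder(nn.Module):
    """One encoder whose output H is split into content C and environment E."""

    def __init__(self, in_dim: int, d_c: int, d_e: int):
        super().__init__()
        self.d_c = d_c
        self.lin = nn.Linear(2 * in_dim, d_c + d_e)

    def forward(self, adj: torch.Tensor, x: torch.Tensor):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        neigh = adj @ x / deg                                    # mean over neighbors
        h = torch.relu(self.lin(torch.cat([x, neigh], dim=1)))   # H = f_theta(A, X)
        return h[:, :self.d_c], h[:, self.d_c:]                  # C, E

enc = DisentangledEncoder(in_dim=16, d_c=8, d_e=8)
adj = (torch.rand(5, 5) < 0.4).float()                           # toy adjacency
C, E = enc(adj, torch.randn(5, 16))
```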
To make sure that \(\mathbf{c}_{i}\) captures the content information for fair label prediction, and that \(\mathbf{c}_{i}\) and \(\mathbf{e}_{i}\) are disentangled, we add the following constraints based on the causal analysis in Section 3:

**Informativeness Constraint.** First, the content representation \(\mathbf{c}_{i}\) should be informative for the downstream task, i.e., \(C\to Y\). Hence, for node \(v_{i}\), we should be able to obtain an accurate label prediction from \(\mathbf{c}_{i}\). Thus, we introduce a classifier \(f_{\phi}\) with model parameters \(\phi\). It takes \(\mathbf{c}_{i}\) as input and predicts the class distribution of \(v_{i}\) as:
\[\hat{\mathbf{y}}_{i}=f_{\phi}(\mathbf{c}_{i}). \tag{4}\]
The loss function for training the classifier is given as:
\[\mathcal{L}_{\text{pred}}=\frac{1}{|\mathcal{V}_{L}|}\sum_{v_{i}\in\mathcal{V}_{L}}\ell(\hat{\mathbf{y}}_{i},\mathbf{y}_{i}), \tag{5}\]
where \(\mathbf{y}_{i}\) is the one-hot encoding of the ground-truth label of \(v_{i}\) and \(\ell(\hat{\mathbf{y}}_{i},\mathbf{y}_{i})\) denotes the cross entropy between \(\hat{\mathbf{y}}_{i}\) and \(\mathbf{y}_{i}\).

**Sufficiency Constraint.** As shown in our causal view, the representations \(\mathbf{c}_{i}\) and \(\mathbf{e}_{i}\) should be sufficient to reconstruct the observed factual graph \(\mathcal{G}_{i}\). In disentangled representation learning research, a reconstruction loss is usually adopted to guide the learning process (Gori et al., 2017; Zhang et al., 2018). However, existing graph counterfactual fairness approaches (Gori et al., 2017; Zhang et al., 2018) fail to provide supervision that preserves graph information in the representations. They thus risk getting stuck in trivial solutions that merely capture spurious information, which contradicts the SCM and is not sufficient to reconstruct the observed graph \(\mathcal{G}_{i}\). In our model, we formalize the sufficiency constraint as a reconstruction of the graph structure. Specifically, for a pair of nodes \((v_{i},v_{j})\), we predict the link existence probability as \(p_{ij}=\sigma(\mathbf{h}_{i}\mathbf{h}_{j}^{T})\), where \(\mathbf{h}_{i}=[\mathbf{c}_{i},\mathbf{e}_{i}]\) is the representation of node \(v_{i}\). The sufficiency constraint is
\[\mathcal{L}_{\text{suf}}=\frac{1}{|\mathcal{E}|+|\mathcal{E}^{-}|}\sum_{(v_{i},v_{j})\in\mathcal{E}\cup\mathcal{E}^{-}}-e_{ij}\log p_{ij}-(1-e_{ij})\log(1-p_{ij}), \tag{6}\]
where \(\mathcal{E}^{-}\) is the set of sampled negative edges and \(e_{ij}=1\) if nodes \(v_{i}\) and \(v_{j}\) are connected; otherwise \(e_{ij}=0\).

**Orthogonal Constraint.** The above constraints help to learn a \(\mathbf{c}_{i}\) that captures the graph information needed for label prediction; however, they do not guarantee that \(\mathbf{c}_{i}\) contains no sensitive attribute information. To make sure that \(\mathbf{c}_{i}\) and \(\mathbf{e}_{i}\) are disentangled, i.e., that \(\mathbf{c}_{i}\) does not contain any environment information relevant to the sensitive attribute, we further impose the orthogonal constraint \(\mathbf{c}_{i}^{T}\mathbf{e}_{i}=0\).
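The three constraints can be written compactly in code. The following PyTorch sketch (assumed shapes and helper names are ours, not from the authors' implementation) computes \(\mathcal{L}_{\text{pred}}\) (Eq. (5)), \(\mathcal{L}_{\text{suf}}\) (Eq. (6)) on logits for numerical stability, and an orthogonality penalty that relaxes \(\mathbf{c}_{i}^{T}\mathbf{e}_{i}=0\):

```python
import torch
import torch.nn.functional as F

def caf_constraints(C, E, logits, y, train_mask, pos_edges, neg_edges):
    """L_pred (Eq. 5), L_suf (Eq. 6) and an orthogonality penalty."""
    # informativeness: cross entropy on the labeled nodes V_L
    l_pred = F.cross_entropy(logits[train_mask], y[train_mask])

    # sufficiency: reconstruct observed and sampled negative edges from H = [C, E]
    H = torch.cat([C, E], dim=1)
    edges = torch.cat([pos_edges, neg_edges], dim=1)          # shape 2 x (|E| + |E^-|)
    target = torch.cat([torch.ones(pos_edges.shape[1]),
                        torch.zeros(neg_edges.shape[1])])
    scores = (H[edges[0]] * H[edges[1]]).sum(dim=1)           # h_i h_j^T as logits
    l_suf = F.binary_cross_entropy_with_logits(scores, target)

    # orthogonality: drive c_i^T e_i towards zero
    l_orth = (C * E).sum(dim=1).abs().mean()
    return l_pred, l_suf, l_orth
```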
### Counterfactual Augmented Learning

As we have no ground truth for \(\mathbf{c}_{i}\) and \(\mathbf{e}_{i}\), the constraints above implicitly supervise their learning. To fully disentangle \(\mathbf{c}_{i}\) and \(\mathbf{e}_{i}\), we additionally enforce counterfactual constraints. As shown in Figure 2 (b), for a node \(v_{i}\) with observed factual sensitive attribute \(s_{i}\) and label \(y_{i}\), its content representation \(\mathbf{c}_{i}\) should remain similar when the sensitive attribute is flipped to \(1-s_{i}\), while its environment representation \(\mathbf{e}_{i}\) should change correspondingly; this yields the counterfactual subgraph \(\mathcal{G}_{i}^{e}\). Similarly, when we flip the label \(y_{i}\) but keep the sensitive attribute \(s_{i}\) unchanged, \(v_{i}\)'s environment representation \(\mathbf{e}_{i}\) remains the same, while its content representation changes accordingly, leading to the counterfactual subgraph \(\mathcal{G}_{i}^{c}\). Thus, if we knew \(\mathcal{G}_{i}^{e}\) and \(\mathcal{G}_{i}^{c}\), we could use these counterfactual graphs together with the factual graph \(\mathcal{G}_{i}\) to guide the learning of \(\mathbf{c}_{i}\) and \(\mathbf{e}_{i}\). In the real world, however, we can only observe factual graphs. To solve this challenge, we propose to find potential candidate counterfactuals among the observed factual graphs, using the sensitive attributes and labels as guidance. Consider the fair credit scoring problem: when someone is assigned a low score, a straightforward question is what the results would be for people with a similar background but a different gender. For example, Sarah, a female, got a low credit score. She may ask: what if I were male, what would my credit score be? This thinking inspires us to directly find counterfactuals among the observed node samples instead of performing perturbation or generation (Beng et al., 2017; Chen et al., 2018). The advantages of selecting counterfactuals from the observed node samples are twofold: (1) it avoids making assumptions about the graph generation process with sensitive attributes; (2) it does not need an additional supervision signal. Another problem arises: selecting counterfactuals in the original data space is also challenging due to the complexity of graph distance computation. To get the counterfactual \(\mathcal{G}_{i}^{e}\), we need to find nodes with a different sensitive attribute and the same label. Similarly, we find nodes with the same sensitive attribute and different labels as the counterfactual \(\mathcal{G}_{i}^{c}\).
The task can be formalized as:
\[\mathcal{G}_{i}^{e}=\operatorname*{arg\,min}_{\mathcal{G}_{j}\in\mathbb{G}}\{m(\mathcal{G}_{i},\mathcal{G}_{j})\mid y_{i}=y_{j},s_{i}\neq s_{j}\}, \tag{7}\]
\[\mathcal{G}_{i}^{c}=\operatorname*{arg\,min}_{\mathcal{G}_{j}\in\mathbb{G}}\{m(\mathcal{G}_{i},\mathcal{G}_{j})\mid y_{i}\neq y_{j},s_{i}=s_{j}\}, \tag{8}\]
where \(\mathbb{G}=\{\mathcal{G}_{i}\mid v_{i}\in\mathcal{V}\}\) and \(m(\cdot,\cdot)\) is a metric measuring the distance between a pair of subgraphs. Nevertheless, computing distances between pairs of graphs is inefficient and often infeasible due to the complex graph structure and the large search space (Zhu et al., 2019). As we already have node representations \(\mathbf{h}_{i}=[\mathbf{c}_{i},\mathbf{e}_{i}]\) that capture the graph structure and node attribute information, we propose to measure the distance in the latent space, which greatly reduces the computational burden. The counterfactual graph searching problem in Eq. (7) and Eq. (8) is then converted to the problem below:
\[\mathbf{h}_{i}^{e}=\operatorname*{arg\,min}_{\mathbf{h}_{j}\in\mathbb{H}}\{\left\|\mathbf{h}_{i}-\mathbf{h}_{j}\right\|_{2}^{2}\mid y_{i}=y_{j},s_{i}\neq s_{j}\}, \tag{9}\]
\[\mathbf{h}_{i}^{c}=\operatorname*{arg\,min}_{\mathbf{h}_{j}\in\mathbb{H}}\{\left\|\mathbf{h}_{i}-\mathbf{h}_{j}\right\|_{2}^{2}\mid y_{i}\neq y_{j},s_{i}=s_{j}\}, \tag{10}\]
where \(\mathbb{H}=\{\mathbf{h}_{i}\mid v_{i}\in\mathcal{V}\}\) and we use the L2 distance to find counterfactuals. One remaining problem is that we only have limited labels in the training set, so we first pre-train the backbone model. With the pre-trained model, we obtain predictions for unlabeled nodes as pseudo-labels, which serve as the guidance for the counterfactual searching problem. Note that for each factual input we can also get multiple counterfactuals by selecting a set of counterfactuals in Eq. (9) and Eq. (10) instead of a single one. Thus, the counterfactual \(\mathcal{G}_{i}^{e}\) can be naturally extended to a set of \(K\) counterfactuals \(\{\mathcal{G}_{i}^{e_{k}}\mid k=1,...,K\}\), and \(\mathcal{G}_{i}^{c}\) can be extended to \(\{\mathcal{G}_{i}^{c_{k}}\mid k=1,...,K\}\). We fix \(K\) to 10 in our implementation.

Figure 2. An illustration of (a) our proposed framework; (b) intuition of counterfactual augmented learning.
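Eq. (9) and Eq. (10) amount to a masked nearest-neighbor search in representation space. A minimal sketch (function and variable names are illustrative) is:

```python
import torch

def select_counterfactuals(H, y_hat, s, K=10):
    """K nearest neighbors per node with (same label, flipped s), Eq. (9),
    and (flipped label, same s), Eq. (10); y_hat are (pseudo-)labels."""
    dist = torch.cdist(H, H)                             # pairwise L2 distances
    same_y = y_hat.unsqueeze(0) == y_hat.unsqueeze(1)
    same_s = s.unsqueeze(0) == s.unsqueeze(1)

    inf = float("inf")
    d_e = dist.masked_fill(~(same_y & ~same_s), inf)     # candidates for G_i^e
    d_c = dist.masked_fill(~(~same_y & same_s), inf)     # candidates for G_i^c
    idx_e = d_e.topk(K, dim=1, largest=False).indices    # indices of the K closest
    idx_c = d_c.topk(K, dim=1, largest=False).indices
    return idx_e, idx_c
```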
We can utilize the counterfactuals to supervise the disentanglement of \(\mathbf{c}_{i}\) and \(\mathbf{e}_{i}\). Specifically, as shown in Figure 2 (b), the counterfactual \(\mathcal{G}_{i}^{e_{k}}\) shares the same content information with the factual graph \(\mathcal{G}_{i}\) and has different environment information. Without supervision, the factual content representation \(\mathbf{c}_{i}\) and the counterfactual content representation \(\mathbf{c}_{i}^{e_{k}}\) may contain both content and environment information. When we minimize the discrepancy \(\text{dis}(\mathbf{c}_{i},\mathbf{c}_{i}^{e_{k}})\) of the learned representations, \(f_{\theta}\) will tend to keep only the content information and squeeze the sensitive information out of the learned representations. In a similar manner, we can use \(\text{dis}(\mathbf{e}_{i},\mathbf{e}_{i}^{c_{k}})\) to make the environment representation \(\mathbf{e}_{i}\) invariant to the content information stored in \(\mathbf{c}_{i}\). Also, we include the orthogonal constraint here to encourage \(\mathbf{c}_{i}\) and \(\mathbf{e}_{i}\) to store different information in representation space. The invariance constraint is given as:
\[\mathcal{L}_{\text{inv}}=\frac{1}{|\mathcal{V}|\cdot K}\sum_{v_{i}\in\mathcal{V}}\sum_{k=1}^{K}\Big[\text{dis}(\mathbf{c}_{i},\mathbf{c}_{i}^{e_{k}})+\text{dis}(\mathbf{e}_{i},\mathbf{e}_{i}^{c_{k}})+\gamma K\cdot|\cos(\mathbf{c}_{i},\mathbf{e}_{i})|\Big], \tag{11}\]
where \(\text{dis}(\cdot,\cdot)\) is a distance metric, such as the cosine distance or the L2 distance used in our implementation. \(|\cos(\cdot,\cdot)|\) is the absolute value of the cosine similarity, which we minimize to approximate \(\mathbf{c}_{i}^{T}\mathbf{e}_{i}=0\); \(\gamma\) is the hyper-parameter controlling the orthogonal constraint.

### Final Objective Function of CAF

Putting the disentangled representation learning module and the counterfactual selection module together, the final objective function of the proposed CAF framework is:
\[\min_{\theta,\phi}\mathcal{L}=\mathcal{L}_{\text{pred}}+\alpha\mathcal{L}_{\text{inv}}+\beta\mathcal{L}_{\text{suf}}, \tag{12}\]
where \(\theta\) and \(\phi\) are the parameters of the GNN encoder and the prediction head, respectively, and \(\alpha\) and \(\beta\) are hyper-parameters controlling the invariance constraint and the sufficiency constraint.

### Training Algorithm

The whole process of CAF is summarized in Algorithm 1. Our method relies on the counterfactuals in representation space to guide the disentanglement. However, the randomly initialized representations in the first several epochs may degrade the performance of our model. Therefore, we first pre-train a plain node representation learning model \(\hat{\mathbf{Y}}=g_{\Theta,\Phi}(\mathbf{A},\mathbf{X})\) only with \(\mathcal{L}_{\text{pred}}\). Then we use the optimized parameters \(\Theta^{*},\Phi^{*}=\min_{\Theta,\Phi}\mathcal{L}_{\text{pred}}\) to initialize the parameters \(\theta\) and \(\phi\) of our model and use the aforementioned framework to obtain the desired disentangled representations. We do not update the counterfactuals in every epoch; instead, we update them once every \(t\) epochs, with \(t=10\) in our implementation. As shown in Algorithm 1, we first pre-train \(g_{\Theta,\Phi}\) and use the optimized parameters to initialize \(f_{\theta}\) and \(f_{\phi}\) (lines 1-2). Then we iteratively optimize \(f_{\theta}\) and \(f_{\phi}\) (lines 3-10). In each iteration, we first perform forward propagation to obtain node representations (line 4), and once every \(t\) epochs we update the selected counterfactuals (lines 5-7). Afterwards, we compute the overall objective and perform backpropagation to optimize the parameters \(\theta\) and \(\phi\) (lines 8-9). After training, we obtain the desired fair model \(f_{\theta}\) and \(f_{\phi}\) (line 11).

```
Require: \(\mathcal{G}=(\mathcal{V},\mathcal{E},\mathbf{X})\), \(\mathcal{Y}_{L}\), \(t\), \(\alpha\), \(\beta\), \(\gamma\), \(K\) and num_epoch
Ensure: \(f_{\theta}\) and \(f_{\phi}\)
1: Pre-train \(g_{\Theta,\Phi}\) based on \(\mathcal{L}_{\text{pred}}\) with Eq. (5)
2: Use the optimized \(\Theta^{*}\) and \(\Phi^{*}\) to initialize \(f_{\theta}\) and \(f_{\phi}\)
3: for epoch in range(num_epoch) do
4:   Compute representations \(\mathbf{H}\) and predicted labels \(\hat{\mathbf{Y}}\) with \(f_{\theta}\) and \(f_{\phi}\) by Eq. (3) and Eq. (4)
5:   if epoch % \(t\) == 0 then
6:     Obtain the two sets of counterfactuals \(\{\mathcal{G}_{i}^{e_{k}}\mid k=1,\dots,K\}\) and \(\{\mathcal{G}_{i}^{c_{k}}\mid k=1,\dots,K\}\) with Eq. (9) and (10)
7:   end if
8:   Compute the overall objective \(\mathcal{L}\) with Eq. (12)
9:   Update \(\theta\) and \(\phi\) according to the objective \(\mathcal{L}\)
10: end for
11: return \(f_{\theta}\) and \(f_{\phi}\)
```
**Algorithm 1** Training Algorithm of CAF.
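A sketch of Eq. (11) and Eq. (12) follows, using squared L2 distance for \(\text{dis}(\cdot,\cdot)\); note that the factor \(\gamma K\) inside the inner sum of Eq. (11) cancels against the \(1/K\) averaging, leaving an average absolute-cosine term. Names and shapes are illustrative:

```python
import torch
import torch.nn.functional as F

def caf_objective(C, E, idx_e, idx_c, l_pred, l_suf, alpha, beta, gamma):
    """L_inv (Eq. 11) with squared-L2 dis(.,.) plus the overall loss (Eq. 12)."""
    # content invariant across sensitive-attribute flips ...
    l_inv = (C.unsqueeze(1) - C[idx_e]).pow(2).sum(-1).mean()
    # ... and environment invariant across label flips
    l_inv = l_inv + (E.unsqueeze(1) - E[idx_c]).pow(2).sum(-1).mean()
    # orthogonality: gamma * K / K = gamma times the mean |cos(c_i, e_i)|
    l_inv = l_inv + gamma * F.cosine_similarity(C, E, dim=1).abs().mean()
    return l_pred + alpha * l_inv + beta * l_suf
```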
## 5. Experiments

In this section, we conduct experiments to evaluate the effectiveness of the proposed method and compare it with state-of-the-art fair GNNs. Specifically, we aim to answer the following questions:

* **(RQ 1)** How effective is the proposed CAF for the fair node classification task on both synthetic and real-world datasets?
* **(RQ 2)** Can the proposed CAF find appropriate counterfactuals?
* **(RQ 3)** How do the proposed modules work? How does each regularization term affect the model performance?

### Experiment Settings

#### 5.1.1. Real-World Datasets

We conduct experiments on three widely used real-world datasets, namely German Credit (Cheng et al., 2017), Credit Defaulter (Zi et al., 2018) and Bail (Zi et al., 2018). The statistics of the datasets can be found in Table 2. The details of the datasets are as follows:

* **German Credit** (Cheng et al., 2017): the nodes in the dataset are clients, and two nodes are connected if their credit accounts are highly similar. The task is to classify the credit risk level as high or low, with the sensitive attribute "gender".
* **Credit Defaulter** (Zi et al., 2018): the nodes in the dataset represent credit card users, and the edges are formed based on the similarity of their payment information. The task is to classify the default payment method, with the sensitive attribute "age".
* **Bail** (Zi et al., 2018): this dataset contains defendants released on bail during 1990-2009 as nodes. Two nodes are connected based on the similarity of past criminal records and demographics. The task is to classify whether defendants are granted bail or not, with the sensitive attribute "race".

#### 5.1.2. Synthetic Dataset

Real-world datasets do not offer ground-truth counterfactuals, prompting us to construct a synthetic dataset based on the Structural Causal Model (SCM) depicted in Figure 1. The primary advantage of a synthetic dataset is that it provides ground-truth counterfactuals for each node, which enables us to assess the quality of the obtained counterfactuals. We consider settings with binary sensitive attributes and binary labels, and sample a graph with 2000 nodes in our implementation. To generate the desired counterfactuals, we keep the sampled values of the noise variables and use consistent causal relationships for each node. The sensitive attributes and labels are sampled from two Bernoulli distributions, \(s_{i}\sim\mathcal{B}(p)\) and \(y_{i}\sim\mathcal{B}(q)\), respectively. Next, the environment and content features, \(\mathbf{e}_{i}\) and \(\mathbf{c}_{i}\), are sampled from the normal distributions \(\mathbf{e}_{i}\sim\mathcal{N}(s_{i}\mathbf{1},\mathbf{I})\) and \(\mathbf{c}_{i}\sim\mathcal{N}(y_{i}\mathbf{1},\mathbf{I})\), respectively. These features are combined to form the overall latent feature \(\mathbf{z}_{i}=[\mathbf{c}_{i},\mathbf{e}_{i}]\). The observed feature of each node \(v_{i}\), denoted as \(\mathbf{x}_{i}\), is computed as \(\mathbf{x}_{i}=\mathbf{W}\mathbf{z}_{i}+\mathbf{b}_{i}\), where \(\mathbf{W}\in\mathbb{R}^{d_{2}\times 2d_{1}}\) with \(\mathbf{W}_{ij}\sim\mathcal{N}(1,1)\), and \(\mathbf{b}_{i}\sim\mathcal{N}(0,\mathbf{I})\in\mathbb{R}^{d_{2}}\). The adjacency matrix \(\mathbf{A}\) is defined such that \(\mathbf{A}_{ij}=1\) if \(\sigma(\cos(\mathbf{z}_{i},\mathbf{z}_{j})+\epsilon_{ij})\geq\alpha\) and \(i\neq j\), with \(\epsilon_{ij}\sim\mathcal{N}(0,1)\), and \(\mathbf{A}_{ij}=0\) otherwise. Here, \(\sigma(\cdot)\) denotes the sigmoid function, and the threshold \(\alpha\) controls the number of edges. We are free to set the sensitive attribute probability \(p\), the label probability \(q\), the latent feature dimension \(2d_{1}\), the observed feature dimension \(d_{2}\), the number of nodes \(N\), and the threshold \(\alpha\) to control the biased graph generation process. Note that the SCM specifies \(C\to Y\) rather than \(Y\to C\); a more faithful procedure would therefore first generate content features and then assign labels to them. Intuitively, however, an optimal classifier applied to content features sampled with different means would assign the same labels as our generation process does. Therefore, to simplify the generation process, we generate the labels first and sample the content features conditioned on them.
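The generation process above can be summarized in a short script (an illustrative re-implementation under the stated distributions; parameter defaults are placeholders). Flipping \(s_{i}\) or \(y_{i}\) while reusing the same noise draws yields the ground-truth counterfactuals described above:

```python
import numpy as np

def synthetic_graph(N=2000, d1=8, d2=16, p=0.5, q=0.5, alpha=0.9, seed=0):
    """Sample a biased graph following the SCM: S -> E, labels -> C, (E, C) -> G."""
    rng = np.random.default_rng(seed)
    s = rng.binomial(1, p, N)                         # sensitive attributes
    y = rng.binomial(1, q, N)                         # labels
    e = rng.normal(s[:, None], 1.0, (N, d1))          # e_i ~ N(s_i * 1, I)
    c = rng.normal(y[:, None], 1.0, (N, d1))          # c_i ~ N(y_i * 1, I)
    z = np.concatenate([c, e], axis=1)                # latent feature z_i = [c_i, e_i]
    W = rng.normal(1.0, 1.0, (d2, 2 * d1))            # W_ij ~ N(1, 1)
    x = z @ W.T + rng.normal(0.0, 1.0, (N, d2))       # x_i = W z_i + b_i
    zn = z / np.linalg.norm(z, axis=1, keepdims=True)
    logits = zn @ zn.T + rng.normal(0.0, 1.0, (N, N)) # cos(z_i, z_j) + eps_ij
    A = (1.0 / (1.0 + np.exp(-logits)) >= alpha) & ~np.eye(N, dtype=bool)
    return x, A, s, y
```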
The synthetic dataset comes with notable advantages. Firstly, it gives us access to exact counterfactuals: after generating the initial graph, we keep all noise variables and unrelated variables unchanged, then flip the sensitive attribute \(s_{i}\) or the label \(y_{i}\) and compute the precise counterfactual through the same graph generation procedure. Secondly, the synthetic dataset enables adjustable bias levels, providing control over the extent of bias in our models. As a result, we can undertake a comprehensive and detailed evaluation of our model's fairness and prediction quality.

#### 5.1.3. Baselines

To evaluate the effectiveness of CAF, we include representative and state-of-the-art methods, which can be categorized into three groups: (1) _plain node classification methods_: GCN (Ghezani et al., 2017), GraphSAGE (Ghezani et al., 2017) and GIN (Zhou et al., 2017); (2) _fair node classification methods_: FairGNN (Ghezani et al., 2017) and EDITS (Ghezani et al., 2017); (3) _graph counterfactual fairness methods_: NIFTY (Ghezani et al., 2017) and GEAR (Zhou et al., 2017). Unless otherwise specified, we use GraphSAGE as the model backbone except for the baselines GCN and GIN. We use SAGE to denote GraphSAGE. The detailed descriptions of the baselines are as follows:

* GCN (Ghezani et al., 2017): GCN is a popular spectral GNN, which adopts a localized first-order approximation of spectral graph convolutions.
* GraphSAGE (Ghezani et al., 2017): GraphSAGE is an inductive learning method that leverages node feature information to generate embeddings for nodes in large graphs, even for nodes not included in the initial training.
* GIN (Zhou et al., 2017): the Graph Isomorphism Network (GIN) is a graph-based neural network that can capture different topological structures by injecting the node's identity into its aggregation function.
* FairGNN (Ghezani et al., 2017): FairGNN uses adversarial training to achieve fairness on graphs. It trains the learned representation via an adversary which is optimized to predict the sensitive attribute.
* EDITS (Ghezani et al., 2017): EDITS is a pre-processing method for fair graph learning. It aims to debias the input network by removing the sensitive information from the graph data.
* NIFTY (Ghezani et al., 2017): NIFTY simply flips the sensitive attributes to obtain counterfactual data. It regularizes the model to be invariant to both factual and counterfactual data samples.
* GEAR (Zhou et al., 2017): GEAR is a method for counterfactual fairness on graphs. It utilizes a variational auto-encoder to synthesize counterfactual samples in order to achieve counterfactual fairness on graphs.

#### 5.1.4. Evaluation Metrics

We evaluate the model performance from three perspectives: classification performance, group fairness and counterfactual fairness. **(i)** For _classification performance_, we use AUC and the F1 score to measure node classification performance. **(ii)** For _group fairness_, following (Bang et al., 2018), we adopt two commonly used metrics, statistical parity (SP) \(\Delta_{SP}\) and equal opportunity (EO) \(\Delta_{EO}\), computed as \(\Delta_{SP}=|P(\hat{y}_{u}=1\mid s=0)-P(\hat{y}_{u}=1\mid s=1)|\) and \(\Delta_{EO}=|P(\hat{y}_{u}=1\mid y_{u}=1,s=0)-P(\hat{y}_{u}=1\mid y_{u}=1,s=1)|\). The smaller \(\Delta_{SP}\) and \(\Delta_{EO}\) are, the fairer the model is.

Table 1. Node classification performance (AUC, F1) and group fairness performance (\(\Delta_{SP}\), \(\Delta_{EO}\)) of all methods on the German, Bail and Credit datasets. The best results are highlighted in bold and the runner-up results are underlined.
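Both group fairness gaps are simple differences of conditional positive rates and can be computed as follows (a sketch for binary predictions, labels and sensitive attributes stored as NumPy arrays):

```python
import numpy as np

def group_fairness(y_hat, y, s):
    """Statistical parity and equal opportunity gaps for binary arrays."""
    sp = abs(y_hat[s == 0].mean() - y_hat[s == 1].mean())
    pos = y == 1                                   # condition on y_u = 1
    eo = abs(y_hat[pos & (s == 0)].mean() - y_hat[pos & (s == 1)].mean())
    return sp, eo
```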
#### 5.1.5. Setup

For German Credit, Credit Defaulter and Bail, we follow the train/valid/test split in (Bang et al., 2018). For the constructed synthetic dataset, we use a 50/25/25 split for training/validation/testing data. We randomly initialize the parameters. For each hyper-parameter configuration, we run the experiments with 10 random seeds and grid search for the best configuration.

### Performance Comparison

To answer RQ1, we conduct experiments on the real-world datasets and the synthetic dataset, with comparison to the baselines.

#### 5.2.1. Performance on Real-World Datasets

Table 1 shows the average performance with standard deviation over ten runs on the real-world datasets. The best results are highlighted in **bold** and the runner-up results are underlined.

Table 1. Node classification and group fairness performance on the real-world datasets.

| **Dataset** | **Metric** | **GCN** | **GraphSAGE** | **GIN** | **FairGNN** | **EDITS** | **NIFTY** | **GEAR** | **CAF** |
|---|---|---|---|---|---|---|---|---|---|
| German | AUC (\(\uparrow\)) | 74.00 ± 1.51 | **74.54 ± 0.86** | 72.69 ± 1.02 | 65.85 ± 9.49 | 69.76 ± 5.46 | 72.05 ± 2.15 | 65.80 ± 3.00 | 71.87 ± 1.33 |
| German | F1 (\(\uparrow\)) | 80.05 ± 1.20 | 81.15 ± 0.97 | **82.62 ± 1.55** | 82.29 ± 0.32 | 81.04 ± 1.09 | 79.20 ± 1.19 | 78.04 ± 2.07 | 82.16 ± 0.22 |
| German | \(\Delta_{SP}\) (\(\downarrow\)) | 41.94 ± 5.52 | 23.79 ± 6.70 | 14.85 ± 4.64 | 7.65 ± 8.07 | 8.42 ± 7.35 | 7.74 ± 7.80 | 8.60 ± 3.47 | **6.60 ± 1.66** |
| German | \(\Delta_{EO}\) (\(\downarrow\)) | 31.11 ± 4.40 | 15.13 ± 5.74 | 8.26 ± 6.72 | 4.18 ± 8.36 | 5.69 ± 1.60 | 5.17 ± 2.38 | 6.34 ± 2.31 | **1.58 ± 1.14** |
| Bail | AUC (\(\uparrow\)) | 88.50 ± 1.30 | 90.50 ± 2.10 | 77.30 ± 6.90 | 88.20 ± 3.50 | 89.07 ± 2.26 | **92.04 ± 0.89** | 89.60 ± 1.60 | 91.39 ± 0.34 |
| Bail | F1 (\(\uparrow\)) | 78.20 ± 2.30 | 80.40 ± 3.20 | 6.50 ± 8.04 | 78.40 ± 2.12 | 77.83 ± 3.79 | 77.81 ± 6.06 | 80.00 ± 3.10 | **80.00 ± 9.08** |
| Bail | \(\Delta_{SP}\) (\(\downarrow\)) | 7.50 ± 1.40 | 8.60 ± 3.90 | 6.50 ± 3.40 | 7.40 ± 2.02 | 3.74 ± 3.43 | 5.74 ± 3.50 | 5.70 ± 3.50 | **7.29 ± 1.06** |
| Bail | \(\Delta_{EO}\) (\(\downarrow\)) | 2.30 ± 1.90 | 3.90 ± 2.20 | 4.10 ± 2.30 | 4.60 ± 1.30 | 4.46 ± 3.50 | 4.07 ± 1.28 | 1.90 ± 2.30 | **1.17 ± 3.22** |
| Credit | AUC (\(\uparrow\)) | 68.40 ± 1.90 | **75.60 ± 1.10** | | | | | | |

From Table 1, we observe:

* CAF improves group fairness. Across the three datasets, Table 1 shows that CAF makes fairer predictions than the baseline methods: CAF beats all baselines with respect to the group fairness metrics.
* There exists a trade-off between group fairness and prediction performance. Plain node classification methods, such as GCN, GraphSAGE and GIN, tend to have better prediction performance and worse group fairness. Fair node classification methods, including FairGNN, EDITS, NIFTY, GEAR and CAF, tend to suffer a drop in prediction performance while achieving better group fairness.
* CAF achieves the best prediction-fairness trade-off. We use the average rank over the two prediction metrics and the two group fairness metrics to assess the trade-off: our model ranks 1.75, while the runner-up ranks 3.83. Our model thus outperforms the state-of-the-art node representation learning methods, which shows its effectiveness.
* Graph counterfactual fairness methods, such as NIFTY, GEAR and CAF, achieve better performance than the other baselines. Counterfactual notions go beyond mere correlations, capture causal relationships, and thereby help to boost group fairness.

#### 5.2.2. Performance on Synthetic Dataset

Figure 3 reports the performance on the synthetic dataset, where the ground-truth counterfactuals allow us to measure graph counterfactual fairness directly. We compare our model with plain node classification models and counterfactual fairness models. The observations are as follows:

* CAF beats all the models with respect to the prediction, group fairness and counterfactual fairness metrics.
We argue that under our assumed biased generation process, our model can effectively find invariant, sufficient and informative representations, enabling accurate and fair predictions.
* Other graph counterfactual fairness-based methods, including NIFTY and GEAR, cannot consistently outperform the other methods. These methods were designed without considering meaningful causal relationships: NIFTY simply perturbs the sensitive attribute and omits its further influence on features and graph structure, and GEAR adopts a GraphVAE to model the causal relationships, which may fail to generate meaningful counterfactuals.

Figure 3. Node classification performance and group fairness performance on synthetic datasets.

### Flexibility of CAF for Various Backbones

To show the flexibility of CAF in improving the fairness of various backbones while maintaining high classification accuracy, we also plug our model into GCN and GIN, in addition to GraphSAGE. Figure 4 shows the classification and fairness performance on Bail and Credit. From Figure 4, we observe that, compared with the bare backbones, CAF can significantly improve fairness with no or only a marginal decrease in classification performance. For example, on the Bail dataset, the prediction performance with the GIN backbone drops by 0.54% in AUROC, but \(\Delta_{SP}\) drops by 1.37% and \(\Delta_{EO}\) drops by 0.86%, an improvement in fairness performance. This demonstrates the flexibility of CAF in benefiting various backbones.

Figure 4. Comparison of the prediction performance and fairness performance of different backbones.

### Quality of Counterfactuals

To answer RQ2, we compare the counterfactuals obtained by CAF with the ground-truth counterfactuals, to investigate whether we can obtain the desired counterfactuals. We conduct experiments on the synthetic dataset, which has ground-truth counterfactuals. We first use CAF to obtain counterfactuals. To measure the discrepancy of the obtained counterfactuals with respect to the feature and structure information in the ego graph, we compare the learned counterfactual representations with the ground-truth counterfactual representations. We compare our model with two graph counterfactual fairness baselines, i.e., NIFTY (Bengio et al., 2017) and GEAR (Zhu et al., 2017). NIFTY simply flips the sensitive attributes to get the counterfactuals; GEAR uses a GraphVAE to generate counterfactuals based on self-perturbation and neighbor-perturbation. Figure 5 shows the average result over all nodes on the synthetic dataset. CAF finds better counterfactuals than the other graph counterfactual fairness models, i.e., it has a smaller discrepancy to the ground-truth counterfactuals. The result also shows that there is still room for existing methods to improve at obtaining appropriate counterfactuals.

### Ablation Study

In our model, the pre-trained model provides pseudo-labels for the nodes in the unlabeled set, so that we can select counterfactuals from the entire dataset. The model trained from scratch, without any pre-training, is denoted CAF-NP. Without pseudo-labels, we can only select counterfactuals from the training set; this variant is denoted CAF-NS. We evaluate the performance on the synthetic dataset. The results are reported in Table 3. We find that CAF-NS performs worse than CAF but better than CAF-NP. This shows that the pseudo-labels indeed boost the performance of our model.
Usually, the training set is small, and the model may not obtain the desired counterfactuals from such limited data points. Although the pseudo-labels may contain some noisy information, they still help improve our model's performance. We further delve into how the constraints impact performance. Setting only \(\alpha=0\) or only \(\beta=0\) yields the variants CAF-NA and CAF-NB, respectively. Both CAF-NA and CAF-NB outperform SAGE, yet fall short of CAF. This indicates that the sufficiency and invariance constraints jointly contribute to the superior performance of our model.

### Hyper-Parameter Sensitivity Analysis

There are two important hyper-parameters in CAF, i.e., \(\alpha\) and \(\beta\): \(\alpha\) controls the contribution of the invariance regularization \(\mathcal{L}_{\text{inv}}\) and \(\beta\) controls the contribution of the sufficiency regularization. To understand the impact of \(\alpha\) and \(\beta\) on CAF, we fix \(\beta\) at 5 and vary \(\alpha\) over \(\{0,1,\dots,18\}\); similarly, we fix \(\alpha\) at 1 and vary \(\beta\) over \(\{0,1,\dots,18\}\). We report the results on the German dataset in Figure 6, from which we observe a trade-off between prediction performance and fairness performance: as we increase \(\alpha\) and \(\beta\), the prediction performance deteriorates while the fairness performance improves. We argue that without these regularizations, the model may rely on sensitive information; as the regularization strength grows, the content representations become better disentangled, so the prediction performance worsens and the fairness performance improves.

## 6. Conclusion and Future Work

In this paper, we study the problem of learning fair node representations with GNNs. We first formalize the biased graph generation process with an SCM. Motivated by causal theory, we propose a novel framework, CAF, to learn fair node representations which meet the graph counterfactual fairness criteria and achieve a good prediction-fairness trade-off. Specifically, we align the model design with the data generation process and convert the problem into learning content representations. We derive several properties of the optimal content representation from the causal graph, i.e., invariance, sufficiency and informativeness. To obtain appropriate supervision for the invariance regularization, we design a counterfactual selection module. Extensive experiments demonstrate that CAF achieves state-of-the-art performance on the synthetic dataset and on real-world datasets with respect to the prediction-fairness trade-off. There are several interesting directions worth exploring. First, in this paper we mainly focus on binary classification and a binary sensitive attribute; we will extend the work to multi-class classification and multi-category sensitive attributes. Second, we focus on static graphs, while many other kinds of graphs exist in the real world; we thus aim to extend our model to more complex graph learning settings, such as dynamic graphs and multi-value sensitive attributes and labels.

## Acknowledgments

This material is based upon work supported by, or in part by, the National Science Foundation (NSF) under grants number IIS-1909702, IIS-2153326, and IIS-2212145, Army Research Office (ARO) under grant number W911NF-21-1-0198, Department of Homeland Security (DHS) CINA under grant number E205949D, and a Cisco Faculty Research Award.
Figure 5. Discrepancy between the learned counterfactual representations and the ground-truth counterfactual representations.

Figure 6. Hyper-parameter study on the German dataset.

Table 3. Ablation study on the synthetic dataset.

| **Models** | **AUROC** (\(\uparrow\)) | **F1** (\(\uparrow\)) | \(\delta_{CF}\) (\(\downarrow\)) | \(\Delta_{SP}\) (\(\downarrow\)) | \(\Delta_{EO}\) (\(\downarrow\)) |
|---|---|---|---|---|---|
| SAGE | 98.40 ± 0.31 | 86.89 ± 5.59 | 11.76 ± 2.92 | 1.87 ± 0.82 | 1.85 ± 1.41 |
| CAF | 99.57 ± 0.06 | 94.58 ± 0.58 | 7.21 ± 0.61 | 0.65 ± 0.41 | 1.66 ± 0.57 |
| CAF-NA | 99.33 ± 2.92 | 94.72 ± 2.15 | 10.42 ± 1.98 | 1.63 ± 0.41 | 1.83 ± 0.74 |
| CAF-NB | 98.67 ± 1.02 | 88.64 ± 4.31 | 8.31 ± 1.22 | 9.91 ± 0.28 | 1.63 ± 0.51 |
| CAF-NP | 98.36 ± 3.99 | 87.42 ± 2.07 | 13.47 ± 2.62 | 1.37 ± 0.55 | 1.96 ± 1.78 |
| CAF-NS | 98.81 ± 0.73 | 91.98 ± 3.25 | 9.72 ± 2.65 | 1.35 ± 0.18 | 1.73 ± 0.55 |
2303.01406
Sparse-penalized deep neural networks estimator under weak dependence
We consider the nonparametric regression and the classification problems for $\psi$-weakly dependent processes. This weak dependence structure is more general than conditions such as mixing, association, $\ldots$ A penalized estimation method for sparse deep neural networks is performed. In both nonparametric regression and binary classification problems, we establish oracle inequalities for the excess risk of the sparse-penalized deep neural networks estimators. Convergence rates of the excess risk of these estimators are also derived. The simulation results displayed show that the proposed estimators overall work better than the non-penalized estimators.
William Kengne, Modou Wade
2023-03-02T16:53:51Z
http://arxiv.org/abs/2303.01406v1
# Sparse-penalized deep neural networks estimator under weak dependence

###### Abstract

We consider the nonparametric regression and the classification problems for \(\psi\)-weakly dependent processes. This weak dependence structure is more general than conditions such as mixing, association, \(\ldots\) A penalized estimation method for sparse deep neural networks is performed. In both nonparametric regression and binary classification problems, we establish oracle inequalities for the excess risk of the sparse-penalized deep neural networks estimators. Convergence rates of the excess risk of these estimators are also derived. The simulation results displayed show that the proposed estimators overall work better than the non-penalized estimators.

_Keywords:_ Deep neural network, \(\psi\)-weak dependence, sparsity, penalization, convergence rate.

William Kengne 1 and Modou Wade 2

Footnote 1: Developed within the ANR BREAKRISK: ANR-17-CE26-0001-01 and the CY Initiative of Excellence (grant "Investissements d'Avenir" ANR-16-IDEX-0008), Project "EcoDep" PSI-AAP2020-0000000013

Footnote 2: Supported by the MME-DII center of excellence (ANR-11-LABEX-0023-01)

_THEMA, CY Cergy Paris Université, 33 Boulevard du Port, 95011 Cergy-Pontoise Cedex, France_

_E-mail: [email protected] ; [email protected]_

## 1 Introduction

Deep learning has received considerable attention in the literature and has shown great success in several applications of artificial intelligence, for example, image processing (see [15]) and speech recognition (see [7]). In recent decades, many researchers have contributed to the understanding of the theoretical properties of deep neural network (DNN) predictors with sparse regularization. See, for instance, [18], [20], [1], [21], [22], [12], [9] (and the references therein) for some results with independent and identically distributed (i.i.d.) observations, and [2], [14], [16], [17], [11], [10] for some results with dependent or non-i.i.d. observations. Sparse-penalized DNN estimators have been studied, among others, by [19] and [16]. These authors establish oracle-type inequalities for i.i.d. data and for dependent observations under a mixing condition. Let us consider the set of observations \(D_{n}\coloneqq\{(X_{1},Y_{1}),\cdots,(X_{n},Y_{n})\}\) (the training sample) from a stationary and ergodic process \(\{Z_{t}=(X_{t},Y_{t}),t\in\mathbb{Z}\}\), which takes values in \(\mathcal{Z}=\mathcal{X}\times\mathcal{Y}\), where \(\mathcal{X}\) is the input space and \(\mathcal{Y}\) the output space. Consider, for the sequel, a loss function \(\ell:\mathbb{R}\times\mathcal{Y}\to[0,\infty)\). We perform sparse DNN prediction for nonparametric regression and classification tasks, based on a penalized empirical risk minimization procedure.
The estimator \(\widehat{h}_{n}\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F)\), also called the sparse-penalized DNN (SPDNN) predictor, is given by

\[\widehat{h}_{n}=\operatorname*{argmin}_{h\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F)}\left[\frac{1}{n}\sum_{i=1}^{n}\ell(h(X_{i}),Y_{i})+J_{n}(h)\right], \tag{1.1}\]

where \(\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F)\) is a class of DNNs (see (3.3)) with an activation function \(\sigma\) and suitably chosen architecture parameters \(L_{n},N_{n},B_{n},F\), and \(J_{n}(h)\) is the sparse penalty given by

\[J_{n}(h)\coloneqq J_{\lambda_{n},\tau_{n}}(h)\coloneqq\lambda_{n}\|\theta(h)\|_{\operatorname{clip},\tau_{n}},\]

for tuning parameters \(\lambda_{n}>0,\;\tau_{n}>0\), where \(\theta(h)\) is the vector of parameters of \(h\). Here, \(\|\cdot\|_{\operatorname{clip},\tau}\) denotes the clipped \(L_{1}\) norm with clipping threshold \(\tau>0\) (see [24]), defined as

\[\|\theta\|_{\operatorname{clip},\tau}=\sum_{j=1}^{p}\left(\frac{|\theta_{j}|}{\tau}\wedge 1\right),\]

where \(\theta=(\theta_{1},\cdots,\theta_{p})^{\prime}\) is a \(p\)-dimensional vector and \({}^{\prime}\) denotes the transpose. A short numerical illustration of this penalty is given below.
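The clipped \(L_{1}\) norm interpolates between the \(L_{0}\) and \(L_{1}\) norms: parameters below the threshold \(\tau\) contribute proportionally to their magnitude, while larger parameters each contribute exactly 1. The following `numpy` sketch (ours, for illustration only) computes the penalty \(J_{\lambda,\tau}\):

```python
import numpy as np

def clipped_l1(theta, tau):
    # ||theta||_{clip, tau} = sum_j min(|theta_j| / tau, 1)
    return np.minimum(np.abs(theta) / tau, 1.0).sum()

def sparse_penalty(theta, lam, tau):
    # J_{lambda, tau}(h) = lambda * ||theta(h)||_{clip, tau}
    return lam * clipped_l1(theta, tau)

# Example: a parameter far below tau is penalized like L1 / tau,
# parameters above tau each contribute a constant 1 (like L0).
theta = np.array([0.0001, 0.5, -2.0])
print(clipped_l1(theta, tau=0.01))   # 0.01 + 1 + 1 = 2.01
```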
For both nonparametric regression and classification under weak dependence, we focus on the calibration of the parameters \(L_{n},N_{n},B_{n},F\) of the network class, and of \(\lambda_{n}\), \(\tau_{n}\) in the penalty term, that allows the SPDNN predictor \(\widehat{h}_{n}\) to enjoy an oracle property and to derive its convergence rate. These issues have been addressed by [19] in the i.i.d. case: they establish an oracle inequality for the excess risk and prove that the SPDNN can adaptively attain minimax optimality. [16] carried out SPDNN estimation for nonparametric time series regression under a \(\beta\)-mixing condition; they provide a generalization error bound for the SPDNN estimator and prove that this estimator attains the minimax optimal rate up to a poly-logarithmic factor. Besides addressing the time series classification issue, the \(\psi\)-weak dependence considered here is more general than mixing conditions (see [3]). In this new contribution, we consider SPDNN estimators for learning a \(\psi\)-weakly dependent process \(\{Z_{t}=(X_{t},Y_{t}),\;t\in\mathbb{Z}\}\) with values in \(\mathcal{Z}=\mathcal{X}\times\mathcal{Y}\subseteq\mathbb{R}^{d}\times\mathbb{R}\), based on the training sample \(D_{n}=(X_{1},Y_{1}),\cdots,(X_{n},Y_{n})\), and we address the following issues.

* **Oracle inequality for the nonparametric time series regression**. We provide conditions on \(L_{n},N_{n},B_{n},F,\lambda_{n}\), \(\tau_{n}\), and establish an oracle inequality for the \(L_{2}\) error of the SPDNN estimator.
* **Oracle inequality for the binary time series classification**. Conditions on \(L_{n},N_{n},B_{n},F,\lambda_{n}\), \(\tau_{n}\) are provided, and an oracle inequality for the excess risk of the SPDNN estimator is established.
* **Convergence rates of the excess risk**. For both nonparametric regression and time series classification, the convergence rate (which depends on \(L_{n},N_{n},B_{n},F,\lambda_{n},\tau_{n}\)) of the excess risk is derived. When the true regression function (in the regression problem) and the target function (in the classification task) are sufficiently smooth, these rates are close to \(\mathcal{O}(n^{-1/2})\).

The rest of the paper is organized as follows. In Section 2, we set some notations and assumptions. Section 3 defines the class of DNNs considered. Section 4 is devoted to the nonparametric regression, whereas Section 5 focuses on the binary time series classification. Some simulation results are provided in Section 6, and Section 7 is devoted to the proofs of the main results.

## 2 Notations and assumptions

For two separable Banach spaces \(E_{1},E_{2}\) equipped with norms \(\|\cdot\|_{E_{1}}\) and \(\|\cdot\|_{E_{2}}\) respectively, denote by \(\mathcal{F}(E_{1},E_{2})\) the set of measurable functions from \(E_{1}\) to \(E_{2}\). For any \(h\in\mathcal{F}(E_{1},E_{2})\) and \(\epsilon>0\), \(B(h,\epsilon)\) denotes the ball of radius \(\epsilon\) of \(\mathcal{F}(E_{1},E_{2})\) centered at \(h\), that is,

\[B(h,\epsilon)=\big{\{}f\in\mathcal{F}(E_{1},E_{2}),\ \|f-h\|_{\infty}\leq\epsilon\big{\}},\]

where \(\|\cdot\|_{\infty}\) stands for the sup-norm defined below. Let \(\mathcal{H}\subset\mathcal{F}(E_{1},E_{2})\); the \(\epsilon\)-covering number \(\mathcal{N}(\mathcal{H},\epsilon)\) of \(\mathcal{H}\) is the minimal number of balls of radius \(\epsilon\) needed to cover \(\mathcal{H}\), that is,

\[\mathcal{N}(\mathcal{H},\epsilon)=\inf\Big{\{}m\geq 1\ :\exists h_{1},\cdots,h_{m}\in\mathcal{H}\ \text{such that}\ \mathcal{H}\subset\bigcup_{i=1}^{m}B(h_{i},\epsilon)\Big{\}}.\]

For a function \(h:E_{1}\to E_{2}\) and \(U\subseteq E_{1}\), define

\[\|h\|_{\infty}=\sup_{x\in E_{1}}\|h(x)\|_{E_{2}},\ \|h\|_{\infty,U}=\sup_{x\in U}\|h(x)\|_{E_{2}}\ \text{and}\]

\[\text{Lip}_{\alpha}(h)\coloneqq\sup_{x_{1},x_{2}\in E_{1},\ x_{1}\neq x_{2}}\frac{\|h(x_{1})-h(x_{2})\|_{E_{2}}}{\|x_{1}-x_{2}\|_{E_{1}}^{\alpha}}\ \text{for any}\ \alpha\in[0,1].\]

For any \(\mathcal{K}_{\ell}>0\) and \(\alpha\in[0,1]\), \(\Lambda_{\alpha,\mathcal{K}_{\ell}}(E_{1},E_{2})\) (simply \(\Lambda_{\alpha,\mathcal{K}_{\ell}}(E_{1})\) when \(E_{2}\subseteq\mathbb{R}\)) denotes the set of functions \(h:E_{1}^{u}\to E_{2}\), for some \(u\in\mathbb{N}\), satisfying \(\|h\|_{\infty}<\infty\) and \(\text{Lip}_{\alpha}(h)\leq\mathcal{K}_{\ell}\). When \(\alpha=1\), we set \(\text{Lip}_{1}(h)=\text{Lip}(h)\) and \(\Lambda_{1}(E_{1})=\Lambda_{1,1}(E_{1},\mathbb{R})\). We now define the weak dependence structure; see [6] and [3]. Let \(E\) be a separable Banach space.

**Definition 2.1**.: _An \(E\)-valued process \((Z_{t})_{t\in\mathbb{Z}}\) is said to be \((\Lambda_{1}(E),\psi,\epsilon)\)-weakly dependent if there exist a function \(\psi:[0,\infty)^{2}\times\mathbb{N}^{2}\to[0,\infty)\) and a sequence \(\epsilon=(\epsilon(r))_{r\in\mathbb{N}}\), decreasing to zero at infinity, such that, for any \(g_{1},\ g_{2}\in\Lambda_{1}(E)\) with \(g_{1}:E^{u}\to\mathbb{R}\), \(g_{2}:E^{v}\to\mathbb{R}\) \((u,v\in\mathbb{N})\), and for any \(u\)-tuple \((s_{1},\cdots,s_{u})\) and any \(v\)-tuple \((t_{1},\cdots,t_{v})\) with \(s_{1}\leq\cdots\leq s_{u}\leq s_{u}+r\leq t_{1}\leq\cdots\leq t_{v}\), the following inequality is fulfilled:_

\[|\text{Cov}(g_{1}(Z_{s_{1}},\cdots,Z_{s_{u}}),g_{2}(Z_{t_{1}},\cdots,Z_{t_{v}}))|\leq\psi(\text{Lip}(g_{1}),\text{Lip}(g_{2}),u,v)\epsilon(r).\]

For example, the following choices of \(\psi\) lead to some well-known weak dependence conditions.
* \(\psi\left(\text{Lip}(g_{1}),\text{Lip}(g_{2}),u,v\right)=v\text{Lip}(g_{2})\): the \(\theta\)-weak dependence; we then denote \(\epsilon(r)=\theta(r)\);
* \(\psi\left(\text{Lip}(g_{1}),\text{Lip}(g_{2}),u,v\right)=u\text{Lip}(g_{1})+v\text{Lip}(g_{2})\): the \(\eta\)-weak dependence; we then denote \(\epsilon(r)=\eta(r)\);
* \(\psi\left(\text{Lip}(g_{1}),\text{Lip}(g_{2}),u,v\right)=uv\text{Lip}(g_{1})\cdot\text{Lip}(g_{2})\): the \(\kappa\)-weak dependence; we then denote \(\epsilon(r)=\kappa(r)\);
* \(\psi\left(\text{Lip}(g_{1}),\text{Lip}(g_{2}),u,v\right)=u\text{Lip}(g_{1})+v\text{Lip}(g_{2})+uv\text{Lip}(g_{1})\cdot\text{Lip}(g_{2})\): the \(\lambda\)-weak dependence; we then denote \(\epsilon(r)=\lambda(r)\).

We consider the process \(\{Z_{t}=(X_{t},Y_{t}),t\in\mathbb{Z}\}\) with values in \(\mathcal{Z}=\mathcal{X}\times\mathcal{Y}\subset\mathbb{R}^{d}\times\mathbb{R}\), the loss function \(\ell:\mathbb{R}\times\mathcal{Y}\to[0,\infty)\), the class of DNNs \(\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F)\) with activation function \(\sigma:\mathbb{R}\to\mathbb{R}\), and the following set of assumptions.

**(A1)**: There exists a constant \(C_{\sigma}>0\) such that the activation function \(\sigma\in\Lambda_{1,C_{\sigma}}(\mathbb{R})\).

**(A2)**: There exists \(\mathcal{K}_{\ell}>0\) such that the loss function \(\ell\in\Lambda_{1,\mathcal{K}_{\ell}}(\mathbb{R}\times\mathcal{Y})\) and \(M=\sup_{h\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F)}\sup_{z\in\mathcal{Z}}|\ell(h,z)|<\infty\). In the case of a margin-based loss, these conditions are imposed on the function \((u,y)\mapsto\ell(uy)\).

Let us set the weak dependence assumption.

**(A3)**: The process \(\{Z_{t}=(X_{t},Y_{t}),t\in\mathbb{Z}\}\) is stationary, ergodic and \((\Lambda_{1}(\mathcal{Z}),\psi,\epsilon)\)-weakly dependent with \(\epsilon_{r}=\mathcal{O}(r^{-\gamma})\) for some \(\gamma>3\).

## 3 Deep Neural Networks

A DNN with \((L,\mathbf{p})\) network architecture, where \(L\in\mathbb{N}\) is the number of hidden layers and \(\mathbf{p}=(p_{0},\cdots,p_{L+1})\in\mathbb{N}^{L+2}\) the width vector, is any function \(h\) of the form

\[h:\mathbb{R}^{p_{0}}\rightarrow\mathbb{R}^{p_{L+1}},\ x\mapsto h(x)=A_{L+1}\circ\sigma_{L}\circ A_{L}\circ\sigma_{L-1}\circ\cdots\circ\sigma_{1}\circ A_{1}(x), \tag{3.1}\]

where \(A_{j}:\mathbb{R}^{p_{j-1}}\rightarrow\mathbb{R}^{p_{j}}\) is a linear affine map, defined by \(A_{j}(x):=W_{j}x+\mathbf{b}_{j}\), for a given \(p_{j}\times p_{j-1}\) weight matrix \(W_{j}\) and a shift vector \(\mathbf{b}_{j}\in\mathbb{R}^{p_{j}}\), and \(\sigma_{j}:\mathbb{R}^{p_{j}}\rightarrow\mathbb{R}^{p_{j}}\) is a nonlinear element-wise activation map, defined by \(\sigma_{j}(z)=(\sigma(z_{1}),\cdots,\sigma(z_{p_{j}}))^{{}^{\prime}}\). For a DNN of the form (3.1), denote by

\[\theta(h)\coloneqq\left(vec(W_{1})^{{}^{\prime}},\mathbf{b}_{1}^{{}^{\prime}},\cdots,vec(W_{L+1})^{{}^{\prime}},\mathbf{b}_{L+1}^{{}^{\prime}}\right)^{{}^{\prime}}, \tag{3.2}\]

the vector of its parameters, where \(vec(W)\) transforms the matrix \(W\) into the corresponding vector by concatenating the column vectors. Let \(\mathcal{H}_{\sigma,p_{0},p_{L+1}}\) be the class of DNN predictors that take a \(p_{0}\)-dimensional input and produce a \(p_{L+1}\)-dimensional output, using the activation function \(\sigma:\mathbb{R}\rightarrow\mathbb{R}\). In our setting here, \(p_{0}=d\) and \(p_{L+1}=1\). A minimal implementation sketch of (3.1) and (3.2) is given below.
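The following Python sketch (ours, illustrative only) implements the forward map (3.1) and the parameter vector (3.2) for a ReLU network:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def dnn_forward(x, weights, biases, sigma=relu):
    # h(x) = A_{L+1} o sigma_L o A_L o ... o sigma_1 o A_1(x), A_j(x) = W_j x + b_j.
    for W, b in zip(weights[:-1], biases[:-1]):
        x = sigma(W @ x + b)
    W, b = weights[-1], biases[-1]
    return W @ x + b                        # no activation on the output layer

def theta(weights, biases):
    # theta(h) = (vec(W_1)', b_1', ..., vec(W_{L+1})', b_{L+1}')'
    parts = []
    for W, b in zip(weights, biases):
        parts.append(W.flatten(order="F"))  # column-wise vec(.)
        parts.append(b.ravel())
    return np.concatenate(parts)
```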
For a DNN \(h\) as in (3.1), let \(\mathrm{depth}(h)\) and \(\mathrm{width}(h)\) be respectively the depth and the width of \(h\); that is, \(\mathrm{depth}(h)=L\) and \(\mathrm{width}(h)=\max_{1\leq j\leq L}p_{j}\). For any positive constants \(L,N,B\) and \(F\), we set

\[\mathcal{H}_{\sigma}(L,N,B)\coloneqq\big{\{}h\in\mathcal{H}_{\sigma,q,1}:\mathrm{depth}(h)\leq L,\mathrm{width}(h)\leq N,\|\theta(h)\|_{\infty}\leq B\big{\}},\]

and

\[\mathcal{H}_{\sigma}(L,N,B,F)\coloneqq\big{\{}h:h\in\mathcal{H}_{\sigma}(L,N,B),\|h\|_{\infty,\mathcal{X}}\leq F\big{\}}. \tag{3.3}\]

A class of sparsity constrained DNNs with sparsity level \(S>0\) is defined by

\[\mathcal{H}_{\sigma}(L,N,B,F,S)\coloneqq\left\{h\in\mathcal{H}_{\sigma}(L,N,B,F)\ :\ \|\theta(h)\|_{0}\leq S\right\}, \tag{3.4}\]

where \(\|x\|_{0}=\sum_{i=1}^{p}\mathds{1}(x_{i}\neq 0)\) for all \(x=(x_{1},\ldots,x_{p})^{\prime}\in\mathbb{R}^{p}\) (\(p\in\mathbb{N}\)). In the sequel, we will establish some theoretical results for the SPDNN estimators (1.1) in both regression and classification problems under the \(\psi\)-weak dependence.

## 4 Nonparametric regression

In this section, we study the nonparametric time series regression, where the output \(Y_{t}\in\mathbb{R}\) and the input \(X_{t}\in\mathbb{R}^{d}\) are generated from the model

\[Y_{t}=h^{*}(X_{t})+\epsilon_{t},\ X_{0}\sim\mathrm{P}_{X_{0}}, \tag{4.1}\]

where \(h^{*}:\mathbb{R}^{d}\rightarrow\mathbb{R}\) is the unknown regression function and \(\epsilon_{t}\) is an error variable, independent of the input variable \(X_{t}\). Let us impose the sub-Gaussian assumption on the error,

\[\mathbb{E}[e^{t\epsilon_{0}}]\leq e^{t^{2}\rho^{2}/2}, \tag{4.2}\]

for any \(t\in\mathbb{R}\), for some \(\rho>0\). So, denote by \(\mathcal{H}_{\rho,H^{*}}\) the set of distributions of \((X_{0},Y_{0})\) satisfying the model (4.1) with a sub-Gaussian error and a bounded regression function, that is,

\[\mathcal{H}_{\rho,H^{*}}\coloneqq\big{\{}\text{Model (4.1)}\ :\ \epsilon_{0}\ \text{satisfies (4.2) and}\ \|h^{*}\|_{\infty}\leq H^{*}\big{\}}, \tag{4.3}\]

for some \(H^{*}>0\). The SPDNN estimator of the regression function is obtained from (1.1) with the square loss, that is,

\[\widehat{h}_{n}=\operatorname*{argmin}_{h\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F)}\left[\frac{1}{n}\sum_{i=1}^{n}\big{(}Y_{i}-h(X_{i})\big{)}^{2}+\lambda_{n}\|\theta(h)\|_{\operatorname{clip},\tau_{n}}\right]. \tag{4.4}\]

The following theorem provides an oracle inequality for the \(L_{2}\) error of this estimator.

**Theorem 4.1**.: _Assume that (**A1**)-(**A3**) hold and that the true generative model \(H\) is in \(\mathcal{H}_{\rho,H^{*}}\). Let \(F>0\), \(L_{n}\lesssim\log n\), \(N_{n}\lesssim n^{\nu_{1}}\), \(1\leq B_{n}\lesssim n^{\nu_{2}}\) for some \(\nu_{1}>0,\ \nu_{2}>0\). Then, the SPDNN estimator defined in (4.4), with \(\lambda_{n}\asymp(\log n)^{\nu_{3}}/n\) and \(\tau_{n}\leq\frac{\beta_{n}}{4(L_{n}+1)((N_{n}+1)B_{n})^{L_{n}+1}}\) for \(\beta_{n}\coloneqq(\log n)^{\nu_{5}}/n^{\nu_{6}}\), satisfies_

\[\mathbb{E}\Big{[}\|\widehat{h}_{n}-h^{*}\|_{2,P_{X_{0}}}^{2}\Big{]}\leq 2\inf_{h\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F)}\left\{\|h-h^{*}\|_{2,P_{X_{0}}}^{2}+\lambda_{n}\|\theta(h)\|_{\operatorname{clip},\tau_{n}}\right\}\vee\frac{C(\log n)^{\nu_{5}}}{n^{\nu_{6}}}, \tag{4.5}\]

_for some universal constant \(C>0\) and \(\nu_{3}>0,\ \nu_{5}>0,\ \nu_{6}>0\), with \(\big{(}\nu_{6}<1/2,\ \nu_{4}+\nu_{6}<1\big{)}\) or \(\big{(}\nu_{6}<1/2,\ \nu_{4}+\nu_{6}=1,\ \nu_{5}>1-\nu_{3}\big{)}\), where the expectation is taken over the training data. Its proof is given in Section 7.1._

The following theorem provides a useful tool to derive a convergence rate of the SPDNN estimator.

**Theorem 4.2**.: _Assume the conditions of Theorem 4.1, with similar choices of \(F,L_{n},N_{n},B_{n},\lambda_{n},\tau_{n}\). Let \(H^{*}>0\) and let \(\mathcal{H}^{*}\) be a set of real-valued functions on \(\mathbb{R}^{d}\), and assume there are constants \(\kappa>0,\ r>0,\ \epsilon_{0}>0\) and \(C>0\) such that_

\[\sup_{h^{\circ}\in\mathcal{H}^{*}:\|h^{\circ}\|_{\infty}\leq H^{*}}\ \inf_{h\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F,S_{n,\epsilon})}\|h-h^{\circ}\|_{2,P_{X_{0}}}\leq\epsilon, \tag{4.6}\]

_with \(S_{n,\epsilon}\coloneqq C\epsilon^{-\kappa}(\log n)^{r}\) for any \(\epsilon\in(0,\epsilon_{0})\) and \(n\in\mathbb{N}\). Then, the SPDNN estimator defined in (4.4) satisfies_

\[\sup_{H\in\mathcal{H}_{\rho,H^{*}}:h^{*}\in\mathcal{H}^{*}}\mathbb{E}\Big{[}\|\widehat{h}_{n}-h^{*}\|_{2,P_{X_{0}}}^{2}\Big{]}\lesssim\frac{(\log n)^{r+\nu_{3}}}{n^{\frac{2\nu_{4}}{\kappa+2}}}\vee\frac{(\log n)^{\nu_{5}}}{n^{\nu_{6}}}, \tag{4.7}\]

_for some constants \(\nu_{3}>0,\ 0<\nu_{4}<1,\ \nu_{5}>0,\ \nu_{6}>0\), with \(\big{(}\nu_{6}<1/2,\ \nu_{4}+\nu_{6}<1\big{)}\) or \(\big{(}\nu_{6}<1/2,\ \nu_{4}+\nu_{6}=1,\ \nu_{5}>1-\nu_{3}\big{)}\)._

**Remark 4.3**.: _Consider the Hölder space of smoothness \(s>0\) and radius \(\mathcal{K}>0\), given by_

\[\mathcal{C}^{s,\mathcal{K}}(\mathcal{X})=\big{\{}h:\mathcal{X}\to\mathbb{R},\ \|h\|_{\mathcal{C}^{s}(\mathcal{X})}\leq\mathcal{K}\big{\}},\]

_where \(\|\cdot\|_{\mathcal{C}^{s}(\mathcal{X})}\) denotes the Hölder norm, defined by_

\[\|h\|_{\mathcal{C}^{s}(\mathcal{X})}=\sum_{\beta\in\mathbb{N}_{0}^{d},\ |\beta|\leq[s]}\|\partial^{\beta}h\|_{\infty}+\sum_{\beta\in\mathbb{N}_{0}^{d},\ |\beta|=[s]}\mathrm{Lip}_{s-[s]}(\partial^{\beta}h),\]

_with \(\mathbb{N}_{0}=\mathbb{N}\cup\{0\}\), \(|\beta|=\sum_{i=1}^{d}\beta_{i}\) for all \(\beta=(\beta_{1},\ldots,\beta_{d})^{\prime}\in\mathbb{N}_{0}^{d}\), where \(\partial^{\beta}\) denotes the partial derivative of order \(\beta\) and \([x]\) denotes the integer part of \(x\). _So, if the true regression function satisfies, for instance, \(h^{*}\in\mathcal{C}^{s,\mathcal{K}}(\mathcal{X})\) for some \(\mathcal{K}>0\), then condition (4.6) holds for various classes of activation functions, including ReLU, with \(\kappa=d/s\) and \(r=1\) (see [11]).
In this case, if \(s\gg d\), then the convergence rate of the SPDNN estimator is close to \(\mathcal{O}(n^{-1/2})\)._

## 5 Binary time series classification

We consider the binary classification of the process \(\{Z_{t}=(X_{t},Y_{t}),t\in\mathbb{Z}\}\), with values in \(\mathbb{R}^{d}\times\{-1,1\}\). The goal is to construct a function \(h:\mathbb{R}^{d}\to\mathbb{R}\) such that \(h(X_{t})\) is used to predict the label \(Y_{t}\in\{-1,1\}\). We focus on a margin-based loss function to evaluate the performance of the prediction by \(h\); that is, a loss function of the form \((u,y)\mapsto\ell(uy)\), for instance, the hinge loss \(\ell(uy)=\max(1-uy,0)\). We assume in the sequel that the input \(X_{t}\in\mathbb{R}^{d}\) and the output \(Y_{t}\in\{-1,1\}\) are generated from the model

\[Y_{t}|X_{t}=\mathrm{x}\sim 2\mathcal{B}(\eta(\mathrm{x}))-1,\qquad X_{0}\sim\mathrm{P}_{X_{0}}, \tag{5.1}\]

where \(\eta(\mathrm{x})=P(Y_{t}=1|X_{t}=\mathrm{x})\) and \(\mathcal{B}(\eta(\mathrm{x}))\) is the Bernoulli distribution with parameter \(\eta(\mathrm{x})\). The aim is to build a classifier \(h\) so that the excess risk of \(h\), defined by

\[\mathcal{E}_{Z_{0}}(h)\coloneqq\mathbb{E}[\ell(Y_{0}h(X_{0}))]-\mathbb{E}[\ell(Y_{0}h_{\ell}^{*}(X_{0}))],\ \text{with}\ Z_{0}\coloneqq(X_{0},Y_{0}), \tag{5.2}\]

is "close" to zero, where \(\ell\) is a given margin-based loss function and \(h_{\ell}^{*}=\underset{h\in\mathcal{F}}{\mathrm{argmin}}\,\mathbb{E}\left[\ell(Y_{0}h(X_{0}))\right]\) is a target function, supposed to be bounded, that is, \(\|h_{\ell}^{*}\|_{\infty}\leq H^{*}\) for some \(H^{*}>0\), and \(\mathcal{F}\) is the set of measurable functions from \(\mathbb{R}^{d}\) to \(\mathbb{R}\). For the sequel, define the set of distributions satisfying this assumption by

\[\mathcal{Q}_{H^{*}}\coloneqq\{\text{Model (5.1)}:\|h_{\ell}^{*}\|_{\infty}\leq H^{*}\}. \tag{5.3}\]

The following theorem provides an oracle inequality for the excess risk of the SPDNN estimator based on a margin-based loss function \(\ell\), whose strict convexity is not required, unlike in [19].

**Theorem 5.1**.: _Assume that (**A1**)-(**A3**) hold and that the true generative model \(H\) is in \(\mathcal{Q}_{H^{*}}\). Let \(F>0\), \(L_{n}\lesssim\log n,N_{n}\lesssim n^{\nu_{1}},1\leq B_{n}\lesssim n^{\nu_{2}}\) for some \(\nu_{1}>0,\ \nu_{2}>0\). Then, the SPDNN estimator defined by_

\[\widehat{h}_{n}=\operatorname*{argmin}_{h\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F)}\left[\frac{1}{n}\sum_{i=1}^{n}\ell(Y_{i}h(X_{i}))+\lambda_{n}\|\theta(h)\|_{\text{clip},\tau_{n}}\right], \tag{5.4}\]

_with \(\lambda_{n}\asymp(\log n)^{\nu_{3}}/n\) and \(\tau_{n}\leq\frac{\beta_{n}}{4\mathcal{K}_{\ell}(L_{n}+1)((N_{n}+1)B_{n})^{L_{n}+1}}\) for \(\beta_{n}\coloneqq(\log n)^{\nu_{5}}/n^{\nu_{6}},\ \mathcal{K}_{\ell}>0\), satisfies_

\[\mathbb{E}\left[\mathcal{E}_{Z_{0}}(\widehat{h}_{n})\right]\leq 2\inf_{h\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F)}\left\{\mathcal{E}_{Z_{0}}(h)+\lambda_{n}\|\theta(h)\|_{\text{clip},\tau_{n}}\right\}\vee\frac{C(\log n)^{\nu_{5}}}{n^{\nu_{6}}}, \tag{5.5}\]

_for some universal constant \(C>0\) and \(\nu_{5}>0,\ \nu_{6}>0\), with \(\big{(}\nu_{6}<1/2,\ \nu_{4}+\nu_{6}<1\big{)}\) or \(\big{(}\nu_{6}<1/2,\ \nu_{4}+\nu_{6}=1,\ \nu_{5}>1-\nu_{3}\big{)}\), where the expectation is taken over the training data._

The objective function in (5.4) is straightforward to evaluate; a short sketch is given below.
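As an illustration, the following Python sketch (ours, not part of the paper) evaluates the penalized empirical hinge risk of (5.4) for a candidate network, given its predictions \(h(X_{i})\) and its parameter vector \(\theta(h)\):

```python
import numpy as np

def hinge(u):
    # Hinge loss: l(u) = max(1 - u, 0)
    return np.maximum(1.0 - u, 0.0)

def spdnn_objective(h_values, y, theta_vec, lam, tau):
    # (1/n) sum_i l(y_i * h(x_i)) + lambda_n * ||theta(h)||_{clip, tau_n},
    # with labels y in {-1, +1} and h_values = h(X_i) for i = 1, ..., n.
    empirical_risk = hinge(y * h_values).mean()
    clip_norm = np.minimum(np.abs(theta_vec) / tau, 1.0).sum()
    return empirical_risk + lam * clip_norm
```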
As with Theorem 4.2 for the nonparametric regression, the following theorem provides a useful tool to derive a convergence rate of the SPDNN estimator.

**Theorem 5.2**.: _Assume the conditions of Theorem 5.1, with similar choices of \(F,L_{n},N_{n},B_{n},\lambda_{n},\tau_{n}\). Let \(H^{*}>0\) and let \(\mathcal{H}^{*}\) be a set of real-valued functions on \(\mathbb{R}^{d}\), and assume there are constants \(\kappa>0,\ r>0,\ \epsilon_{0}>0\) and \(C>0\) such that_

\[\sup_{h^{\circ}\in\mathcal{H}^{*}:\|h^{\circ}\|_{\infty}\leq H^{*}}\ \inf_{h\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F,S_{n,\epsilon})}\|h-h^{\circ}\|_{1,P_{X_{0}}}\leq\epsilon, \tag{5.6}\]

_with \(S_{n,\epsilon}\coloneqq C\epsilon^{-\kappa}(\log n)^{r}\) for any \(\epsilon\in(0,\epsilon_{0})\) and \(n\in\mathbb{N}\). Then, the SPDNN estimator defined in (5.4) satisfies_

\[\sup_{H\in\mathcal{Q}_{H^{*}}:h_{\ell}^{*}\in\mathcal{H}^{*}}\mathbb{E}\left[\mathcal{E}_{Z_{0}}(\widehat{h}_{n})\right]\lesssim\frac{(\log n)^{r+\nu_{3}}}{n^{\frac{\nu_{4}}{\kappa+1}}}\vee\frac{(\log n)^{\nu_{5}}}{n^{\nu_{6}}}. \tag{5.7}\]

As stressed in Remark 4.3, such a result can be used to derive a convergence rate of the excess risk of the SPDNN estimator in several situations.

## 6 Some numerical results

In this section, we carry out the prediction of autoregressive models and of binary time series by SPDNN.

### Prediction of autoregressive models

Let us consider a nonlinear autoregressive process with an exogenous covariate, \((Y_{t},\mathcal{X}_{t})_{t\in\mathbb{Z}}\), with values in \(\mathbb{R}\times\mathbb{R}\), satisfying:

\[Y_{t}=f(Y_{t-1},\ldots,Y_{t-p};\mathcal{X}_{t-1})+\epsilon_{t}, \tag{6.1}\]

for some measurable function \(f:\mathbb{R}^{p+1}\to\mathbb{R}\) (\(p\in\mathbb{N}\)), where \((\epsilon_{t})_{t\in\mathbb{Z}}\) is i.i.d., generated from a standardized uniform distribution \(\mathcal{U}[-2,2]\), and \((\mathcal{X}_{t})_{t\in\mathbb{Z}}\) is an AR(1) process generated with standardized \(\mathcal{U}[-2,2]\) innovations. The process \((Y_{t},\mathcal{X}_{t})_{t\in\mathbb{Z}}\) in (6.1) is a specific example of the affine causal models with exogenous covariates studied in [4]. In the sequel, we set \(X_{t}=(Y_{t-1},\cdots,Y_{t-p},\mathcal{X}_{t-1})\). So, one can see that this nonlinear autoregressive process is a particular case of the model (4.1). If \(f\in\Lambda_{1}(\mathbb{R}^{p+1})\) (that is, \(f\) is Lipschitz) and under some classical conditions on the Lipschitz-type coefficients of \(f\) (for example, \(\mathrm{Lip}(f)<1/p\)), then there exists a solution \((Y_{t},X_{t})_{t\in\mathbb{Z}}\) of (6.1) that fulfills the assumption **(A3)** above; see details in [5]. Let \((Y_{1},\mathcal{X}_{1}),\cdots,(Y_{n},\mathcal{X}_{n})\) be a trajectory of the process \((Y_{t},\mathcal{X}_{t})_{t\in\mathbb{Z}}\). We aim to predict \(Y_{n+1}\) from this training sample. We perform the learning theory with the SPDNN predictors developed above, with the input variable \(X_{t}\) as above, input space \(\mathcal{X}\subset\mathbb{R}^{p}\times\mathbb{R}\) and output space \(\mathcal{Y}\subset\mathbb{R}\). We consider the following cases of (6.1):

\[\text{DGP1}:\;\;Y_{t}=1-0.2Y_{t-1}+0.3Y_{t-2}+0.25Y_{t-3}-0.6\frac{1}{1+\mathcal{X}_{t-1}^{2}}+\epsilon_{t};\]

\[\text{DGP2}:\;\;Y_{t}=0.5+\big{(}-0.4+0.25e^{-2Y_{t-1}^{2}}\big{)}Y_{t-1}+1.5\mathcal{X}_{t-1}+\epsilon_{t}.\]

DGP1 is a classical autoregressive model with a nonlinear covariate effect, whereas DGP2 is an exponential autoregression with covariate; both can be simulated in a few lines, as sketched below.
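For reproducibility, here is a minimal `numpy` sketch (ours) simulating DGP1 and DGP2. The interpretation of "standardized" \(\mathcal{U}[-2,2]\) innovations as uniform draws rescaled to unit variance, the AR(1) coefficient of the covariate, and the burn-in length are our own assumptions.

```python
import numpy as np

SD_U22 = np.sqrt(4.0 / 3.0)          # standard deviation of U[-2, 2]

def covariate_ar1(T, rng, phi=0.5):
    # Exogenous AR(1) covariate; the coefficient phi is an illustrative choice.
    xi = rng.uniform(-2.0, 2.0, T) / SD_U22
    X = np.zeros(T)
    for t in range(1, T):
        X[t] = phi * X[t - 1] + xi[t]
    return X

def simulate_dgp(name, n, burn_in=200, seed=0):
    rng = np.random.default_rng(seed)
    T = n + burn_in
    Xc = covariate_ar1(T, rng)
    eps = rng.uniform(-2.0, 2.0, T) / SD_U22   # standardized U[-2, 2] errors
    Y = np.zeros(T)
    for t in range(3, T):
        if name == "DGP1":
            Y[t] = (1.0 - 0.2 * Y[t - 1] + 0.3 * Y[t - 2] + 0.25 * Y[t - 3]
                    - 0.6 / (1.0 + Xc[t - 1] ** 2) + eps[t])
        else:  # DGP2
            Y[t] = (0.5 + (-0.4 + 0.25 * np.exp(-2.0 * Y[t - 1] ** 2)) * Y[t - 1]
                    + 1.5 * Xc[t - 1] + eps[t])
    return Y[burn_in:], Xc[burn_in:]

Y, Xc = simulate_dgp("DGP1", n=1000)
```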
For each of these DGPs, we use a network architecture with 2 hidden layers of 100 nodes each. The ReLU and linear activation functions are used in the hidden layers and the output layer, respectively. The network weights are trained in the R software with the package Keras, using the Adam algorithm ([13]) with learning rate \(10^{-3}\) and a minibatch size of 32. The training is stopped when the mean squared error (MSE) has not improved within 30 epochs. For \(n=250,500\) and 1000, a trajectory \(((Y_{1},\mathcal{X}_{1}),(Y_{2},\mathcal{X}_{2}),\ldots,(Y_{n},\mathcal{X}_{n}))\) is generated from the true DGP. The predictor \(\widehat{h}_{n}\) is obtained from (4.4), with tuning parameters of the form \(\lambda_{n}=10^{-i}\log(n)/n\) and \(\tau_{n}=10^{-j}/\log(n)\), where \(i,j=0,1,\ldots,10\) are calibrated by minimizing the MSE on a validation data set \(((Y_{1}^{\prime},\mathcal{X}_{1}^{\prime}),(Y_{2}^{\prime},\mathcal{X}_{2}^{\prime}),\ldots,(Y_{n}^{\prime},\mathcal{X}_{n}^{\prime}))\). The empirical \(L_{2}\) error of \(\widehat{h}_{n}\) is then computed on a new trajectory \(((Y_{1}^{\prime\prime},\mathcal{X}_{1}^{\prime\prime}),(Y_{2}^{\prime\prime},\mathcal{X}_{2}^{\prime\prime}),\ldots,(Y_{m}^{\prime\prime},\mathcal{X}_{m}^{\prime\prime}))\) with \(m=10^{4}\). Figure 1 displays the boxplots of this empirical \(L_{2}\) error of the SPDNN and non-penalized DNN (NPDNN, obtained from (4.4) with \(\lambda_{n}=0\)) predictors over 100 replications. We can see that the SPDNN estimator outperforms the NPDNN estimator in DGP1; in DGP2, the performance of the SPDNN estimator is slightly better than that of the NPDNN estimator. These findings show that the SPDNN estimator can improve the \(L_{2}\) error compared to the NPDNN estimator.

### Prediction of binary time series

Let us consider a binary autoregressive process with exogenous covariates, \((Y_{t},\mathcal{X}_{t})_{t\in\mathbb{Z}}\), with values in \(\{-1,1\}\times\mathbb{R}\), satisfying

\[Y_{t}|\mathcal{F}_{t-1}\sim 2\mathcal{B}(p_{t})-1\text{ with }2p_{t}-1=\mathbb{E}[Y_{t}|\mathcal{F}_{t-1}]=f(Y_{t-1},\cdots,Y_{t-p};\mathcal{X}_{t-1}), \tag{6.2}\]

where \(\mathcal{F}_{t-1}=\sigma\{Y_{t-1},\cdots;\mathcal{X}_{t-1},\cdots\}\), for some measurable function \(f:\mathbb{R}^{p+1}\rightarrow[-1,1]\) (\(p\in\mathbb{N}\)), \((\mathcal{X}_{t})_{t\in\mathbb{Z}}\) is an AR(1) process and \(\mathcal{B}(p_{t})\) denotes the Bernoulli distribution with parameter \(p_{t}\). Set \(X_{t}=(Y_{t-1},\cdots,Y_{t-p},\mathcal{X}_{t-1})\). Therefore, one can see that the binary model (6.2) is a specific case of (5.1). Under some classical Lipschitz-type conditions on \(f\), the process \((Y_{t},\mathcal{X}_{t})_{t\in\mathbb{Z}}\) fulfills the weak dependence assumption **(A3)**; see [11]. Let \((Y_{1},\mathcal{X}_{1}),\cdots,(Y_{n},\mathcal{X}_{n})\) be a trajectory of the process \((Y_{t},\mathcal{X}_{t})_{t\in\mathbb{Z}}\); the aim is to predict \(Y_{n+1}\) from this training sample. We perform the learning theory with the SPDNN predictor proposed here, with \(p=1\), \(X_{t}=(Y_{t-1},\mathcal{X}_{t-1})\) in DGP3, and \(p=2\), \(X_{t}=(Y_{t-1},Y_{t-2},\mathcal{X}_{t-1})\) in DGP4.
We study the following cases of (6.2):

\[\text{DGP3}:\quad f(Y_{t-1};\mathcal{X}_{t-1})=-0.15+\left(0.1-0.2e^{-0.5Y_{t-1}^{2}}\right)Y_{t-1}+0.25\frac{1}{1+\mathcal{X}_{t-1}^{2}};\]

\[\text{DGP4}:\quad f(Y_{t-1},Y_{t-2};\mathcal{X}_{t-1})=0.1+0.15Y_{t-1}-0.25Y_{t-2}-0.2e^{-\mathcal{X}_{t-1}^{2}}.\]

Figure 1: _Boxplots of the empirical \(L_{2}\) errors of the SPDNN and NPDNN predictors with \(n=250,500\) and 1000 in DGP1 (a) and DGP2 (b)._

In the following, we consider the hinge loss function (\(\ell(z)=\max(1-z,0)\)) and define

\[h_{\ell}^{*}(X_{t})=2\mathbbm{1}_{\{f(X_{t})\geq 0\}}-1\text{ for all }t\in\mathbb{Z}. \tag{6.3}\]

One can see that \(h_{\ell}^{*}\) is the Bayes classifier with respect to this loss; that is,

\[\mathcal{E}_{Z_{0}}(h_{\ell}^{*})=\inf_{h\in\mathcal{F}(\mathcal{X},\mathcal{Y})}\mathcal{E}_{Z_{0}}(h),\]

where \(\mathcal{E}_{Z_{0}}(\cdot)\) is defined in (5.2) and \(\mathcal{F}(\mathcal{X},\mathcal{Y})\) is the class of measurable functions from \(\mathcal{X}\) to \(\mathcal{Y}\). For each of these DGPs, we use the same network architecture as in Subsection 6.1. For \(n=250,500\) and 1000, a trajectory \(((Y_{1},\mathcal{X}_{1}),\ldots,(Y_{n},\mathcal{X}_{n}))\) is generated from the true DGP. The predictor \(\widehat{h}_{n}\) is obtained from (5.4), where the tuning parameters \(\lambda_{n},\ \tau_{n}\) are chosen as in Subsection 6.1, based on a validation data set \(((Y_{1}^{\prime},\mathcal{X}_{1}^{\prime}),(Y_{2}^{\prime},\mathcal{X}_{2}^{\prime}),\ldots,(Y_{n}^{\prime},\mathcal{X}_{n}^{\prime}))\). The empirical excess risk of \(\widehat{h}_{n}\) is computed from a new trajectory \(((Y_{1}^{\prime\prime},\mathcal{X}_{1}^{\prime\prime}),(Y_{2}^{\prime\prime},\mathcal{X}_{2}^{\prime\prime}),\ldots,(Y_{m}^{\prime\prime},\mathcal{X}_{m}^{\prime\prime}))\) with \(m=10^{4}\). Figure 2 displays the boxplots of the empirical excess risk of the SPDNN predictor and of the NPDNN predictor over 100 replications. One can observe that the performance of the SPDNN is overall better than that of the NPDNN, which shows once again that the SPDNN estimator can improve the prediction accuracy compared to the NPDNN estimator.

Figure 2: _Boxplots of the empirical excess risk of the SPDNN and NPDNN predictors with \(n=250,500\) and 1000 in DGP3 (a) and DGP4 (b)._

## 7 Proofs of the main results

### Proof of Theorem 4.1

For the proof, we write \(a_{n}\lesssim_{\rho,H^{*}}b_{n}\) if there is a constant \(C_{\rho,H^{*}}>0\), depending only on \(\rho\) and \(H^{*}\), such that \(a_{n}\leq C_{\rho,H^{*}}b_{n}\) for any \(n\in\mathbb{N}\). Let \(K_{n}\coloneqq(\sqrt{32\rho^{2}}(\log n)^{1/2})\lor H^{*}\). Let \(Y_{0}^{\perp}\coloneqq\operatorname{sign}(Y_{0})(|Y_{0}|\wedge K_{n})\), which is a truncated version of \(Y_{0}\), and let \(h^{\perp}\) be the regression function of \(Y_{0}^{\perp}\), that is,

\[h^{\perp}(x)\coloneqq\mathbb{E}(Y_{0}^{\perp}|X_{0}=x).\]

For notational convenience, we suppress the dependence on \(n\) in the notations \(Y_{0}^{\perp}\) and \(h^{\perp}\).
As in [19], consider the following decomposition:

\[\|\widehat{h}_{n}-h^{*}\|_{2,P_{X_{0}}}^{2}=\mathbb{E}\Big{[}\left(Y_{0}-\widehat{h}_{n}(X_{0})\right)^{2}\Big{]}-\mathbb{E}\Big{[}\left(Y_{0}-h^{*}(X_{0})\right)^{2}\Big{]}=\sum_{i=1}^{4}A_{i,n}, \tag{7.1}\]

where,

\[A_{1,n}\coloneqq\Big{[}\mathbb{E}\left(Y_{0}-\widehat{h}_{n}(X_{0})\right)^{2}-\mathbb{E}\left(Y_{0}-h^{*}(X_{0})\right)^{2}\Big{]}-\Big{[}\mathbb{E}\left(Y_{0}^{\perp}-\widehat{h}_{n}(X_{0})\right)^{2}-\mathbb{E}\left(Y_{0}^{\perp}-h^{\perp}(X_{0})\right)^{2}\Big{]};\]

\[A_{2,n}\coloneqq\Big{[}\mathbb{E}\left(Y_{0}^{\perp}-\widehat{h}_{n}(X_{0})\right)^{2}-\mathbb{E}\left(Y_{0}^{\perp}-h^{\perp}(X_{0})\right)^{2}\Big{]}-2\Big{[}\frac{1}{n}\sum_{i=1}^{n}\left(Y_{i}^{\perp}-\widehat{h}_{n}(X_{i})\right)^{2}-\frac{1}{n}\sum_{i=1}^{n}\left(Y_{i}^{\perp}-h^{\perp}(X_{i})\right)^{2}\Big{]}-2J_{\lambda_{n},\tau_{n}}(\widehat{h}_{n});\]

\[A_{3,n}\coloneqq 2\Big{[}\frac{1}{n}\sum_{i=1}^{n}\left(Y_{i}^{\perp}-\widehat{h}_{n}(X_{i})\right)^{2}-\frac{1}{n}\sum_{i=1}^{n}\left(Y_{i}^{\perp}-h^{\perp}(X_{i})\right)^{2}\Big{]}-2\Big{[}\frac{1}{n}\sum_{i=1}^{n}\left(Y_{i}-\widehat{h}_{n}(X_{i})\right)^{2}-\frac{1}{n}\sum_{i=1}^{n}\left(Y_{i}-h^{*}(X_{i})\right)^{2}\Big{]};\]

\[A_{4,n}\coloneqq 2\left[\frac{1}{n}\sum_{i=1}^{n}\left(Y_{i}-\widehat{h}_{n}(X_{i})\right)^{2}-\frac{1}{n}\sum_{i=1}^{n}\left(Y_{i}-h^{*}(X_{i})\right)^{2}\right]+2J_{\lambda_{n},\tau_{n}}(\widehat{h}_{n}).\]

The first equality in (7.1) holds by the independence of \(X_{0}\) and \(\epsilon_{0}\). To bound \(A_{1,n}\), let us recall the properties of sub-Gaussian variables: \(\mathbb{E}(\epsilon_{0})=0\) and \(\mathbb{E}(e^{\epsilon_{0}^{2}/4\rho^{2}})\leq\sqrt{2}\); see, for instance, Theorem 2.6 in [23]. Let

\[A_{1,1,n}\coloneqq\mathbb{E}\left((Y_{0}^{\perp}-Y_{0})(2\widehat{h}_{n}(X_{0})-Y_{0}-Y_{0}^{\perp})\right);\]

\[A_{1,2,n}\coloneqq\mathbb{E}\left[(Y_{0}^{\perp}-h^{\perp}(X_{0})-Y_{0}+h^{*}(X_{0}))(Y_{0}^{\perp}-h^{\perp}(X_{0})+Y_{0}-h^{*}(X_{0}))\right].\]

One can see that \(A_{1,n}=A_{1,1,n}+A_{1,2,n}\). We use the Cauchy-Schwarz inequality to obtain

\[|A_{1,1,n}|\leq\sqrt{\mathbb{E}(Y_{0}^{\perp}-Y_{0})^{2}}\sqrt{\mathbb{E}(2\widehat{h}_{n}(X_{0})-Y_{0}-Y_{0}^{\perp})^{2}}.\]

We have

\[(Y_{0}^{\perp}-Y_{0})^{2}=(|Y_{0}|\wedge K_{n})^{2}-2|Y_{0}|(|Y_{0}|\wedge K_{n})+|Y_{0}|^{2}=(|Y_{0}|-K_{n})^{2}\mathds{1}(|Y_{0}|>K_{n}).\]

Hence,

\[\mathbb{E}(Y_{0}^{\perp}-Y_{0})^{2}=\mathbb{E}\left[(|Y_{0}|-K_{n})^{2}\mathds{1}(|Y_{0}|>K_{n})\right]\leq\mathbb{E}\left[|Y_{0}|^{2}\mathds{1}(|Y_{0}|>K_{n})\right]. \tag{7.2}\]

From the assumption (4.3) and the independence of \(X_{0}\) and \(\epsilon_{0}\), we get \(\mathbb{E}(e^{Y_{0}^{2}/(8\rho^{2})})\leq e^{(H^{*})^{2}/(4\rho^{2})}\mathbb{E}e^{\epsilon_{0}^{2}/(4\rho^{2})}\leq\sqrt{2}e^{(H^{*})^{2}/(4\rho^{2})}\). Also, one can easily see that \(Y_{0}^{2}\leq 16\rho^{2}e^{Y_{0}^{2}/16\rho^{2}}\) and \(\mathds{1}(|Y_{0}|>K_{n})\leq e^{(Y_{0}^{2}-K_{n}^{2})/(16\rho^{2})}\). Therefore,

\[\mathbb{E}(Y_{0}^{\perp}-Y_{0})^{2}\leq\mathbb{E}\left[16\rho^{2}e^{Y_{0}^{2}/(16\rho^{2})}e^{Y_{0}^{2}/(16\rho^{2})-K_{n}^{2}/(16\rho^{2})}\right]\leq 16\sqrt{2}\rho^{2}e^{(H^{*})^{2}/(4\rho^{2})}e^{-2\log n}\leq 16\sqrt{2}\rho^{2}e^{(H^{*})^{2}/(4\rho^{2})}n^{-2}. \tag{7.3}\]
We also have

\[\mathbb{E}(2\widehat{h}_{n}(X_{0})-Y_{0}-Y_{0}^{\perp})^{2}\leq 2\mathbb{E}(Y_{0}^{2})+2\mathbb{E}(2\widehat{h}_{n}(X_{0})-Y_{0}^{\perp})^{2},\]

and

\[|(2\widehat{h}_{n}(X_{0})-Y_{0}^{\perp})^{2}|\leq 4|\widehat{h}_{n}(X_{0})|^{2}+4|\widehat{h}_{n}(X_{0})||Y_{0}^{\perp}|+|Y_{0}^{\perp}|^{2}\leq 4K_{n}^{2}+4K_{n}^{2}+K_{n}^{2}.\]

Since there exists a constant \(C\) such that \(K_{n}^{2}\leq C\log n\), we have

\[\mathbb{E}(2\widehat{h}_{n}(X_{0})-Y_{0}-Y_{0}^{\perp})^{2}\leq 16\rho^{2}\mathbb{E}(e^{Y_{0}^{2}/(8\rho^{2})})+18K_{n}^{2}\lesssim_{\rho,H^{*}}\log n. \tag{7.4}\]

Hence, \(|A_{1,1,n}|\lesssim_{\rho,H^{*}}\log n/n\). Let us now deal with \(A_{1,2,n}\). By using the Cauchy-Schwarz inequality, we get

\[|A_{1,2,n}|\leq\sqrt{2\mathbb{E}(Y_{0}^{\perp}-Y_{0})^{2}+2\mathbb{E}(h^{\perp}(X_{0})-h^{*}(X_{0}))^{2}}\times\sqrt{\mathbb{E}(Y_{0}+Y_{0}^{\perp}-h^{\perp}(X_{0})-h^{*}(X_{0}))^{2}}.\]

In a similar way as in (7.4), we get \(\mathbb{E}(Y_{0}+Y_{0}^{\perp}-h^{\perp}(X_{0})-h^{*}(X_{0}))^{2}\lesssim_{\rho,H^{*}}\log n\). Since \(\mathbb{E}(\epsilon_{0})=0\), from Jensen's inequality, one easily gets

\[\mathbb{E}\big{[}(h^{\perp}(X_{0})-h^{*}(X_{0}))^{2}\big{]}=\mathbb{E}\left(\mathbb{E}(Y_{0}^{\perp}|X_{0})-\mathbb{E}(Y_{0}|X_{0})\right)^{2}\leq\mathbb{E}(Y_{0}^{\perp}-Y_{0})^{2}.\]

By using (7.2), one can obtain \(|A_{1,2,n}|\lesssim_{\rho,H^{*}}\log n/n\). Now, we have

\[\mathbb{E}[A_{3,n}]=-\frac{2}{n}\sum_{i=1}^{n}\left\{\left[\mathbb{E}[(Y_{i}-\widehat{h}_{n}(X_{i}))^{2}]-\mathbb{E}[(Y_{i}-h^{*}(X_{i}))^{2}]\right]-\left[\mathbb{E}[(Y_{i}^{\perp}-\widehat{h}_{n}(X_{i}))^{2}]-\mathbb{E}[(Y_{i}^{\perp}-h^{\perp}(X_{i}))^{2}]\right]\right\}.\]

For \(i=1,\cdots,n\), set

\[A_{3,n,i}=\left[\mathbb{E}[(Y_{i}-\widehat{h}_{n}(X_{i}))^{2}]-\mathbb{E}[(Y_{i}-h^{*}(X_{i}))^{2}]\right]-\left[\mathbb{E}[(Y_{i}^{\perp}-\widehat{h}_{n}(X_{i}))^{2}]-\mathbb{E}[(Y_{i}^{\perp}-h^{\perp}(X_{i}))^{2}]\right];\]

\[A_{3,1,n,i}=\mathbb{E}\left[(Y_{i}^{\perp}-Y_{i})(2\widehat{h}_{n}(X_{i})-Y_{i}-Y_{i}^{\perp})\right];\]

\[A_{3,2,n,i}=\mathbb{E}\left[(Y_{i}^{\perp}-h^{\perp}(X_{i})-Y_{i}+h^{*}(X_{i}))(Y_{i}^{\perp}-h^{\perp}(X_{i})+Y_{i}-h^{*}(X_{i}))\right].\]

We have, for \(i=1,\cdots,n\),

\[|A_{3,1,n,i}|\leq\sqrt{\mathbb{E}\left[(Y_{i}^{\perp}-Y_{i})^{2}\right]}\sqrt{\mathbb{E}\left[(2\widehat{h}_{n}(X_{i})-Y_{i}-Y_{i}^{\perp})^{2}\right]}.\]

By using similar arguments as for \(A_{1,1,n}\), we get, for \(i=1,\cdots,n\),

\[|A_{3,1,n,i}|\lesssim_{\rho,H^{*}}\log n/n.\]

Also, by proceeding as for \(A_{1,2,n}\), it holds for \(i=1,\cdots,n\) that

\[|A_{3,2,n,i}|\leq\sqrt{\mathbb{E}\left[(Y_{i}^{\perp}-h^{\perp}(X_{i})-Y_{i}+h^{*}(X_{i}))^{2}\right]}\sqrt{\mathbb{E}\left[(Y_{i}^{\perp}-h^{\perp}(X_{i})+Y_{i}-h^{*}(X_{i}))^{2}\right]},\]

and we can also obtain \(|A_{3,2,n,i}|\lesssim_{\rho,H^{*}}\log n/n\). Thus, for \(i=1,\cdots,n\),

\[|A_{3,n,i}|\lesssim_{\rho,H^{*}}\log n/n.\]

Hence,

\[|\mathbb{E}(A_{3,n})|=\Big{|}-\frac{2}{n}\sum_{i=1}^{n}A_{3,n,i}\Big{|}\leq\frac{2}{n}\sum_{i=1}^{n}|A_{3,n,i}|\lesssim_{\rho,H^{*}}\log n/n.\]

For \(A_{2,n}\), define \(\Delta h(Z_{0})\coloneqq(Y_{0}^{\perp}-h(X_{0}))^{2}-(Y_{0}^{\perp}-h^{\perp}(X_{0}))^{2}\) with \(Z_{0}\coloneqq(X_{0},Y_{0})\), for \(h\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F)\).
Let \(\alpha>0\); we can write

\[P(A_{2,n}>\alpha)\leq P\left(\sup_{h\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F)}\frac{\mathbb{E}[\Delta h(Z_{0})]-\frac{1}{n}\sum_{i=1}^{n}\Delta h(Z_{i})}{\alpha+2J_{\lambda_{n},\tau_{n}}(h)+\mathbb{E}[\Delta h(Z_{0})]}\geq\frac{1}{2}\right)\leq\sum_{j=0}^{\infty}P\left(\sup_{h\in\mathcal{H}_{n,j,\alpha}}\frac{\mathbb{E}[\Delta h(Z_{0})]-\frac{1}{n}\sum_{i=1}^{n}\Delta h(Z_{i})}{2^{j}\alpha+\mathbb{E}[\Delta h(Z_{0})]}\geq\frac{1}{2}\right),\]

where

\[\mathcal{H}_{n,j,\alpha}\coloneqq\Big{\{}h\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F):2^{j-1}\mathds{1}(j\neq 0)\alpha\leq J_{\lambda_{n},\tau_{n}}(h)\leq 2^{j}\alpha\Big{\}}. \tag{7.5}\]

Indeed, the event \(\{A_{2,n}>\alpha\}\) implies that there exists \(h\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F)\) such that

\[\Big{[}\mathbb{E}\left(Y_{0}^{\perp}-h(X_{0})\right)^{2}-\mathbb{E}\left(Y_{0}^{\perp}-h^{\perp}(X_{0})\right)^{2}\Big{]}-\Big{[}\frac{1}{n}\sum_{i=1}^{n}\left(Y_{i}^{\perp}-h(X_{i})\right)^{2}-\frac{1}{n}\sum_{i=1}^{n}\left(Y_{i}^{\perp}-h^{\perp}(X_{i})\right)^{2}\Big{]}\]

\[>\frac{1}{2}\left(\alpha+2J_{\lambda_{n},\tau_{n}}(h)+\mathbb{E}\left(Y_{0}^{\perp}-h(X_{0})\right)^{2}-\mathbb{E}\left(Y_{0}^{\perp}-h^{\perp}(X_{0})\right)^{2}\right),\]

and splitting \(\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F)\) into the classes \(\mathcal{H}_{n,j,\alpha}\), \(j\geq 0\), yields the bound above: for \(h\in\mathcal{H}_{n,j,\alpha}\), we have \(J_{\lambda_{n},\tau_{n}}(h)\leq 2^{j}\alpha\), whence

\[P\Bigg{(}\sup_{h\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F)}\frac{\mathbb{E}[\Delta h(Z_{0})]-\frac{1}{n}\sum_{i=1}^{n}\Delta h(Z_{i})}{\alpha+2J_{\lambda_{n},\tau_{n}}(h)+\mathbb{E}[\Delta h(Z_{0})]}\geq\frac{1}{2}\Bigg{)}\leq\sum_{j=0}^{\infty}P\Bigg{(}\sup_{h\in\mathcal{H}_{n,j,\alpha}}\frac{\mathbb{E}[\Delta h(Z_{0})]-\frac{1}{n}\sum_{i=1}^{n}\Delta h(Z_{i})}{2^{j}\alpha+\mathbb{E}[\Delta h(Z_{0})]}\geq\frac{1}{2}\Bigg{)}.\]

One can easily show that

\[\mathbb{E}[\Delta h(Z_{0})]=\mathbb{E}\big{[}(Y_{0}^{\perp}-h(X_{0}))^{2}\big{]}-\mathbb{E}\big{[}(Y_{0}^{\perp}-h^{\perp}(X_{0}))^{2}\big{]}=\mathbb{E}\big{[}(h(X_{0})-h^{\perp}(X_{0}))^{2}\big{]}\geq 0,\]

since \(h^{\perp}(X_{0})=\mathbb{E}(Y_{0}^{\perp}|X_{0})\). Hence,
\[\sum_{j=1}^{\infty}P\Bigg{(}\sup_{h\in\mathcal{H}_{n,j,\alpha}}\frac{\mathbb{E}\Delta h(Z_{0})-\frac{1}{n}\sum_{i=1}^{n}\Delta h(Z_{i})}{2^{j}\alpha+\mathbb{E}\Delta h(Z_{0})}\geq\frac{1}{2}\Bigg{)}\leq\sum_{j=1}^{\infty}P\Bigg{(}\sup_{h\in\mathcal{H}_{n,j,\alpha}}\frac{\mathbb{E}\Delta h(Z_{0})-\frac{1}{n}\sum_{i=1}^{n}\Delta h(Z_{i})}{2^{j}\alpha}\geq\frac{1}{2}\Bigg{)}\leq\sum_{j=1}^{\infty}P\Bigg{(}\sup_{h\in\mathcal{H}_{n,j,\alpha}}\Bigg{\{}\mathbb{E}\Delta h(Z_{0})-\frac{1}{n}\sum_{i=1}^{n}\Delta h(Z_{i})\Bigg{\}}\geq\frac{2^{j}\alpha}{2}\Bigg{)}\leq\sum_{j=1}^{\infty}P\Bigg{(}\sup_{g\in\mathcal{G}_{n,j,\alpha}}\Bigg{\{}\mathbb{E}[g(Z_{0})]-\frac{1}{n}\sum_{i=1}^{n}g(Z_{i})\Bigg{\}}\geq\frac{2^{j}\alpha}{2}\Bigg{)},\] where \[\mathcal{G}_{n,j,\alpha}\coloneqq\Big{\{}\Delta(h):\mathbb{R}^{d}\times\mathbb{R}\rightarrow\mathbb{R},\ h\in\mathcal{H}_{n,j,\alpha}\Big{\}}. \tag{7.6}\] Let \(h\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F)\) and consider the function \(g(x,y)=\Delta h(x,y)\). One can easily prove that \(g\) is Lipschitz with Lipschitz coefficient \(\max\big{(}2F\mathrm{Lip}(h)+2K_{n}\mathrm{Lip}(h),6K_{n}+2F\big{)}\). Therefore, the process \((g(Z_{t}))_{t\in\mathbb{Z}}\) is also \(\psi\)-weakly dependent. Thus, we have from [8] (see also [5]), \[P\left\{\mathbb{E}[g(Z_{0})]-\frac{1}{n}\sum_{i=1}^{n}g(Z_{i})>\varepsilon\right\}=P\left\{\sum_{i=1}^{n}\left(\mathbb{E}[g(X_{0},Y_{0})]-g(X_{i},Y_{i})\right)\geq n\varepsilon\right\}\leq P\left\{\left|\sum_{i=1}^{n}\left(\mathbb{E}[g(X_{0},Y_{0})]-g(X_{i},Y_{i})\right)\right|\geq n\varepsilon\right\}\leq C_{3}\log n\exp\left(-\frac{n^{2}\varepsilon^{2}}{A_{n}^{\prime}+B_{n}^{\prime}(n\varepsilon)^{\nu}}\right)\leq C_{3}\exp\left(\log\log n-\frac{n^{2}\varepsilon^{2}}{A_{n}^{\prime}+B_{n}^{\prime}(n\varepsilon)^{\nu}}\right), \tag{7.7}\] for some constant \(C_{3}>0\), any sequence \((A_{n}^{\prime})_{n\in\mathbb{N}}\) satisfying \(A_{n}^{\prime}\geq\mathbb{E}\left[\left(\sum_{i=1}^{n}\left(g(X_{i},Y_{i})-\mathbb{E}[g(X_{0},Y_{0})]\right)\right)^{2}\right]\), and \(B_{n}^{\prime}=\frac{n^{3/4}\log n}{A_{n}^{\prime}}\). Let \(l=\mathcal{N}(\varepsilon,\mathcal{G}_{n,j,\alpha},\|\cdot\|_{\infty})\). For \(n\) large enough, we have \[P\left\{\sup_{g\in\mathcal{G}_{n,j,\alpha}}\Bigg{[}\mathbb{E}[g(Z_{0})]-\frac{1}{n}\sum_{i=1}^{n}g(Z_{i})\Bigg{]}>\varepsilon\right\}\leq C_{3}\sum_{i=1}^{l}\exp\left(\log\log n-\frac{n^{2}\varepsilon^{2}/4}{A_{n}^{\prime}+B_{n}^{\prime}(n\varepsilon/2)^{\nu}}\right)\leq C_{3}\cdot l\exp\left(\log\log n-\frac{n^{2}\varepsilon^{2}/4}{A_{n}^{\prime}+B_{n}^{\prime}(n\varepsilon/2)^{\nu}}\right)\leq C_{3}\mathcal{N}(\varepsilon,\mathcal{G}_{n,j,\alpha},\|\cdot\|_{\infty})\exp\left(\log\log n-\frac{n^{2}\varepsilon^{2}/4}{A_{n}^{\prime}+B_{n}^{\prime}(n\varepsilon/2)^{\nu}}\right).\] From [19], we have \[\mathcal{N}(\varepsilon,\mathcal{G}_{n,j,\alpha},\|\cdot\|_{\infty})\leq\mathcal{N}(\frac{\varepsilon}{4K_{n}},\mathcal{H}_{n,j,\alpha},\|\cdot\|_{\infty}). \tag{7.8}\] One can easily see that \[\mathcal{H}_{n,j,\alpha}\subset\left\{h\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F,\frac{2^{j}\alpha}{\lambda_{n}}):\|\theta(h)\|_{\text{clip},\tau_{n}}\leq\frac{2^{j}\alpha}{\lambda_{n}}\right\}. \tag{7.9}\]
Thus, from [19], we have the following inequality: \[\mathcal{N}(\varepsilon,\mathcal{G}_{n,j,\alpha},\|\cdot\|_{\infty})\leq\mathcal{N}(\frac{\varepsilon}{4K_{n}},\mathcal{H}_{n,j,\alpha},\|\cdot\|_{\infty})\leq\mathcal{N}(\frac{\varepsilon}{4K_{n}},\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F,\frac{2^{j}\alpha}{\lambda_{n}}),\|\cdot\|_{\infty})\leq\exp\left(2\frac{2^{j}\alpha}{\lambda_{n}}(L_{n}+1)\log\left(\frac{(L_{n}+1)(N_{n}+1)B_{n}}{\frac{\varepsilon}{4K_{n}}-\tau_{n}(L_{n}+1)((N_{n}+1)B_{n})^{L_{n}+1}}\right)\right). \tag{7.10}\] We have \[P\Big{\{}\sup_{g\in\mathcal{G}_{n,j,\alpha}}\left[\mathbb{E}[g(Z_{0})]-\frac{1}{n}\sum_{i=1}^{n}g(Z_{i})\right]>\varepsilon\Big{\}}\leq C_{3}\exp\left(2\frac{2^{j}\alpha}{\lambda_{n}}(L_{n}+1)\log\left(\frac{(L_{n}+1)(N_{n}+1)B_{n}}{\frac{\varepsilon}{4K_{n}}-\tau_{n}(L_{n}+1)((N_{n}+1)B_{n})^{L_{n}+1}}\right)+\log\log n-\frac{n^{2}\varepsilon^{2}/4}{A_{n}^{\prime}+B_{n}^{\prime}(n\varepsilon/2)^{\nu}}\right). \tag{7.11}\] Hence, \[\sum_{j=1}^{\infty}P\left\{\sup_{g\in\mathcal{G}_{n,j,\alpha}}\left[\mathbb{E}[g(Z_{0})]-\frac{1}{n}\sum_{i=1}^{n}g(Z_{i})\right]>\varepsilon\right\}\leq C_{3}\sum_{j=1}^{\infty}\exp\left(2\frac{2^{j}\alpha}{\lambda_{n}}(L_{n}+1)\log\left(\frac{(L_{n}+1)(N_{n}+1)B_{n}}{\frac{\varepsilon}{4K_{n}}-\tau_{n}(L_{n}+1)((N_{n}+1)B_{n})^{L_{n}+1}}\right)+\log\log n-\frac{n^{2}\varepsilon^{2}/4}{A_{n}^{\prime}+B_{n}^{\prime}(n\varepsilon/2)^{\nu}}\right). \tag{7.12}\] Let \(\varepsilon=\frac{2^{j}\alpha}{2}\); we have \[\sum_{j=1}^{\infty}P\Big{\{}\sup_{g\in\mathcal{G}_{n,j,\alpha}}\left[\mathbb{E}[g(Z_{0})]-\frac{1}{n}\sum_{i=1}^{n}g(Z_{i})\right]>\frac{2^{j}\alpha}{2}\Big{\}}\leq C_{3}\sum_{j=1}^{\infty}\exp\left(2\frac{2^{j}\alpha}{\lambda_{n}}(L_{n}+1)\log\left(\frac{(L_{n}+1)(N_{n}+1)B_{n}}{\frac{2^{j}\alpha}{8K_{n}}-\tau_{n}(L_{n}+1)((N_{n}+1)B_{n})^{L_{n}+1}}\right)+\log\log n-\frac{(n2^{j}\alpha)^{2}/16}{A_{n}^{\prime}+B_{n}^{\prime}(n2^{j}\alpha/4)^{\nu}}\right)\leq C_{3}\exp(\log\log n)\sum_{j=1}^{\infty}\exp\left(2\frac{2^{j}\alpha}{\lambda_{n}}(L_{n}+1)\log\left(\frac{(L_{n}+1)(N_{n}+1)B_{n}}{\frac{2^{j}\alpha}{8K_{n}}-\tau_{n}(L_{n}+1)((N_{n}+1)B_{n})^{L_{n}+1}}\right)-\frac{(n2^{j}\alpha)^{2}/16}{A_{n}^{\prime}+B_{n}^{\prime}(n2^{j}\alpha/4)^{\nu}}\right). \tag{7.13}\] With the choice \(A_{n}^{\prime}=nC\) and \(B_{n}^{\prime}=\log(\frac{n}{n^{1/4}C})\), we have \[\sum_{j=1}^{\infty}P\Big{\{}\sup_{g\in\mathcal{G}_{n,j,\alpha}}\left[\mathbb{E}[g(Z_{0})]-\frac{1}{n}\sum_{i=1}^{n}g(Z_{i})\right]>\frac{2^{j}\alpha}{2}\Big{\}}\leq C_{3}\exp(\log\log n)\sum_{j=1}^{\infty}\exp\left(2\frac{2^{j}\alpha}{\lambda_{n}}(L_{n}+1)\log\left(\frac{(L_{n}+1)(N_{n}+1)B_{n}}{\frac{2^{j}\alpha}{8K_{n}}-\tau_{n}(L_{n}+1)((N_{n}+1)B_{n})^{L_{n}+1}}\right)-\frac{(n2^{j}\alpha)^{2}/16}{nC+\log(\frac{n}{n^{1/4}C})(n2^{j}\alpha/4)^{\nu}}\right). \tag{7.14}\] Suppose first that \[\log(\frac{n}{n^{1/4}C})(n2^{j}\alpha/8)^{\nu}>nC,\text{ which holds whenever }\alpha>\frac{8}{n}\left(\frac{nC}{\log(\frac{n}{n^{1/4}C})}\right)^{1/\nu}\coloneqq\alpha_{n}. \tag{7.15}\] From (7.15), we can easily see that \[-\frac{(n2^{j}\alpha)^{2}/16}{nC+\log(\frac{n}{n^{1/4}C})(n2^{j}\alpha/4)^{\nu}}\leq-\frac{(n2^{j}\alpha)^{2}/16}{2\log(\frac{n}{n^{1/4}C})(n2^{j}\alpha/4)^{\nu}}.\] We can also see that, asymptotically, \(\alpha_{n}>1\).
**Step 1**: \(\alpha>\alpha_{n}\). Under the assumption \[\tau_{n}\leq\frac{1}{16K_{n}(L_{n}+1)((N_{n}+1)B_{n})^{L_{n}+1}},\] and for \(2^{j}\alpha>1\), we have \[\sum_{j=1}^{\infty}\exp\left(2\frac{2^{j}\alpha}{\lambda_{n}}(L_{n}+1)\log\left(\frac{(L_{n}+1)(N_{n}+1)B_{n}}{\frac{2^{j}\alpha}{8K_{n}}-\tau_{n}(L_{n}+1)((N_{n}+1)B_{n})^{L_{n}+1}}\right)-\frac{(n2^{j}\alpha)^{2}/16}{nC+\log(\frac{n}{n^{1/4}C})(n2^{j}\alpha/4)^{\nu}}\right)\leq\sum_{j=1}^{\infty}\exp\left(2\frac{2^{j}\alpha}{\lambda_{n}}(L_{n}+1)\log\left(16K_{n}(L_{n}+1)(N_{n}+1)B_{n}\right)-\frac{(n2^{j}\alpha)^{2-\nu}/16}{2\log(\frac{n}{n^{1/4}C})/4^{\nu}}\right). \tag{7.16}\] For \(\alpha>\alpha_{n}\), we have \(2^{j}\alpha<((2\alpha)^{2-\nu})^{j}\). Thus, \[\sum_{j=1}^{\infty}P\Big{\{}\sup_{g\in\mathcal{G}_{n,j,\alpha}}\left[\mathbb{E}[g(Z_{0})]-\frac{1}{n}\sum_{i=1}^{n}g(Z_{i})\right]>\frac{2^{j}\alpha}{2}\Big{\}}\leq\sum_{j=1}^{\infty}\exp\left(2^{j}\alpha\left(\frac{2}{\lambda_{n}}(L_{n}+1)\log\left(16K_{n}(L_{n}+1)(N_{n}+1)B_{n}\right)-\frac{n^{2-\nu}/32}{\log(\frac{n}{n^{1/4}C})/4^{\nu}}\right)\right). \tag{7.17}\] We can see that, for \(n\) large enough, \[\delta_{n,1}\coloneqq\frac{2}{\lambda_{n}}(L_{n}+1)\log\left(16K_{n}(L_{n}+1)(N_{n}+1)B_{n}\right)\leq\frac{n^{2-\nu}/32}{2\log(\frac{n}{n^{1/4}C})/4^{\nu}}.\] Thus, \[\sum_{j=1}^{\infty}P\Big{\{}\sup_{g\in\mathcal{G}_{n,j,\alpha}}\left[\mathbb{E}[g(Z_{0})]-\frac{1}{n}\sum_{i=1}^{n}g(Z_{i})\right]>\frac{2^{j}\alpha}{2}\Big{\}}\leq C_{3}\log n\sum_{j=1}^{\infty}\exp\left(-2^{j}\frac{n^{2-\nu}}{64\log(\frac{n}{n^{1/4}C})/4^{\nu}}\alpha\right).\] Let \[\beta_{n}\coloneqq\frac{n^{2-\nu}}{64\log\left(\frac{n}{n^{1/4}C}\right)/4^{\nu}}\alpha;\] since \(2^{j}\geq j+1\) for all \(j\geq 0\) and \(e^{-\beta_{n}}=2^{-\beta_{n}/\log 2}\), we have \[\sum_{j=0}^{\infty}\exp\left(-\beta_{n}2^{j}\right)\leq\exp\left(-\beta_{n}\right)\sum_{j=0}^{\infty}\left(2^{-\frac{\beta_{n}}{\log 2}}\right)^{j}\leq\exp\left(-\beta_{n}\right)\left(\frac{1}{1-2^{-\frac{\beta_{n}}{\log 2}}}\right)\lesssim\exp\left(-\beta_{n}\right). \tag{7.18}\] By applying (7.18), we have \[\sum_{j=1}^{\infty}\exp\left(-2^{j}\frac{n^{2-\nu}}{64\log(\frac{n}{n^{1/4}C})/4^{\nu}}\alpha\right)\lesssim\exp\left(-\frac{n^{2-\nu}}{64\log(\frac{n}{n^{1/4}C})/4^{\nu}}\alpha\right). \tag{7.19}\] Hence, \[P(A_{2,n}>\alpha)\lesssim C_{3}\log n\sum_{j=1}^{\infty}\exp\left(-2^{j}\frac{n^{2-\nu}}{64\log(\frac{n}{n^{1/4}C})/4^{\nu}}\alpha\right)\lesssim C_{3}\log n\exp\left(-\frac{n^{2-\nu}}{64\log(\frac{n}{n^{1/4}C})/4^{\nu}}\alpha\right), \tag{7.20}\] for \(\alpha\geq\alpha_{n}\). Therefore, \[\int_{\alpha_{n}}^{\infty}P(A_{2,n}>\alpha)d\alpha\lesssim\int_{\alpha_{n}}^{\infty}C_{3}\log n\exp\left(-\frac{n^{2-\nu}}{64\log(\frac{n}{n^{1/4}C})/4^{\nu}}\alpha\right)d\alpha\leq 64C_{3}\log n\frac{\log(\frac{n}{n^{1/4}C})}{n^{2-\nu}}\exp\left(-\frac{n^{2-\nu}}{64\log(\frac{n}{n^{1/4}C})/4^{\nu}}\times\frac{8}{n}\left(\frac{nC}{\log\frac{n}{n^{1/4}C}}\right)^{1/\nu}\right).\] Now consider the case \[nC\geq\log(\frac{n}{n^{1/4}C})(n2^{j}\alpha/4)^{\nu},\text{ which corresponds (up to a constant factor) to }\alpha\leq\frac{8}{n}\left(\frac{nC}{\log(\frac{n}{n^{1/4}C})}\right)^{1/\nu}=\alpha_{n}. \tag{7.21}\] Thus, \[-\frac{(n2^{j}\alpha)^{2}/16}{nC+\log(\frac{n}{n^{1/4}C})(n2^{j}\alpha/4)^{\nu}}\leq-\frac{(n2^{j}\alpha)^{2}/16}{2nC}.\] We can also see that, asymptotically, \(\alpha_{n}>1\) and \(2^{j}>1\).
**Step 2**: \(1\leq\alpha\leq\alpha_{n}\). We have \[\sum_{j=1}^{\infty}\exp\left(2\frac{2^{j}\alpha}{\lambda_{n}}(L_{n}+1)\log\left(\frac{(L_{n}+1)(N_{n}+1)B_{n}}{\frac{2^{j}\alpha}{8K_{n}}-\tau_{n}(L_{n}+1)((N_{n}+1)B_{n})^{L_{n}+1}}\right)-\frac{(n2^{j}\alpha)^{2}/16}{nC+\log(\frac{n}{n^{1/4}C})(n2^{j}\alpha/4)^{\nu}}\right)\leq\sum_{j=1}^{\infty}\exp\left(2\frac{2^{j}\alpha}{\lambda_{n}}(L_{n}+1)\log\left(16K_{n}(L_{n}+1)(N_{n}+1)B_{n}\right)-\frac{(n2^{j}\alpha)^{2}/16}{2nC}\right). \tag{7.22}\] For \(1\leq\alpha\leq\alpha_{n}\), we have \(2^{j}\alpha\leq(2^{j}\alpha)^{2}\), so \[\sum_{j=1}^{\infty}P\left\{\sup_{g\in\mathcal{G}_{n,j,\alpha}}\left[\mathbb{E}[g(Z_{0})]-\frac{1}{n}\sum_{i=1}^{n}g(Z_{i})\right]>\frac{2^{j}\alpha}{2}\right\}\leq C_{3}\log n\sum_{j=1}^{\infty}\exp\left(2^{j}\alpha\left(\frac{2}{\lambda_{n}}(L_{n}+1)\log\left(16K_{n}(L_{n}+1)(N_{n}+1)B_{n}\right)-\frac{n^{2}}{32nC}\right)\right). \tag{7.23}\] We can see that, for \(n\) large enough, \[\delta_{n,2}\coloneqq\frac{2}{\lambda_{n}}(L_{n}+1)\log\left(16K_{n}(L_{n}+1)(N_{n}+1)B_{n}\right)\leq\frac{n^{2}/16}{4nC}.\] Thus, \[\sum_{j=1}^{\infty}P\Big{\{}\sup_{g\in\mathcal{G}_{n,j,\alpha}}\left[\mathbb{E}[g(Z_{0})]-\frac{1}{n}\sum_{i=1}^{n}g(Z_{i})\right]>\frac{2^{j}\alpha}{2}\Big{\}}\leq C_{3}\log n\sum_{j=1}^{\infty}\exp\left(-2^{j}\frac{n}{64C}\alpha\right). \tag{7.24}\] Set \(\beta_{n}^{\prime}\coloneqq\frac{n}{64C}\alpha\); by using similar arguments as in (7.18), we have \[P(A_{2,n}>\alpha)\lesssim C_{3}\log n\sum_{j=1}^{\infty}\exp\left(-2^{j}\frac{n}{64C}\alpha\right)\lesssim C_{3}\log n\exp\left(-\frac{n}{64C}\alpha\right),\] for \(1\leq\alpha\leq\alpha_{n}\). This implies \[\int_{1}^{\alpha_{n}}P(A_{2,n}>\alpha)d\alpha\lesssim C_{3}\log n\int_{1}^{\alpha_{n}}\exp\left(-\frac{n}{64C}\alpha\right)d\alpha\leq\frac{64CC_{3}\log n}{n}\exp\left(-\frac{n}{64C}\right)-\frac{64CC_{3}\log n}{n}\exp\left(-\frac{n}{64C}\times\frac{8}{n}\left(\frac{nC}{\log(\frac{n}{n^{1/4}C})}\right)^{1/\nu}\right).\] **Step 3**: \(0<\alpha\leq 1<\alpha_{n}\). Recall the assumption \[\tau_{n}\leq\frac{\beta_{n}}{16K_{n}(L_{n}+1)((N_{n}+1)B_{n})^{L_{n}+1}},\] with the conditions \(\beta_{n}:=(\log n)^{\nu_{5}}/n^{\nu_{6}}\) for some \(\nu_{5},\nu_{6}>0\) and \[\Big{(}\nu_{6}<1/2,\nu_{4}+\nu_{6}<1\Big{)}\text{ or }\Big{(}\nu_{6}<1/2,\nu_{4}+\nu_{6}=1,\nu_{5}>1-\nu_{3}\Big{)}. \tag{7.25}\] We have, for all \(\alpha\in(\beta_{n},1)\), \[\sum_{j=1}^{\infty}\exp\left(2\frac{2^{j}\alpha}{\lambda_{n}}(L_{n}+1)\log\left(\frac{(L_{n}+1)(N_{n}+1)B_{n}}{\frac{2^{j}\alpha}{8K_{n}}-\tau_{n}(L_{n}+1)((N_{n}+1)B_{n})^{L_{n}+1}}\right)-\frac{(n2^{j}\alpha)^{2}/16}{nC+\log(\frac{n}{n^{1/4}C})(n2^{j}\alpha/8)^{\nu}}\right)\leq\sum_{j=1}^{\infty}\exp\left(\frac{2(L_{n}+1)2^{j}\alpha}{\lambda_{n}}\log\left(\frac{16K_{n}(L_{n}+1)(N_{n}+1)B_{n}}{\beta_{n}}\right)-\frac{(n2^{j}\alpha)^{2}}{32nC}\right)\leq\sum_{j=1}^{\infty}\exp\bigg{[}-(2^{j})^{2}\bigg{(}\frac{n\alpha^{2}}{32C}-\frac{2(L_{n}+1)\alpha}{\lambda_{n}}\log\left(\frac{16K_{n}(L_{n}+1)(N_{n}+1)B_{n}}{\beta_{n}}\right)\bigg{)}\bigg{]}\leq\sum_{j=1}^{\infty}\exp\big{(}-4^{j}\phi_{n}(\alpha)\big{)}, \tag{7.26}\] with \[\phi_{n}(\alpha)\coloneqq\frac{n\alpha^{2}}{32C}-\frac{2(L_{n}+1)\alpha}{\lambda_{n}}\log\left(\frac{16K_{n}(L_{n}+1)(N_{n}+1)B_{n}}{\beta_{n}}\right)\text{ for all }\beta_{n}<\alpha\leq 1<\alpha_{n}. \tag{7.27}\]
Since \(K_{n},L_{n},N_{n},B_{n}\geq 1\), we have \(16K_{n}(L_{n}+1)(N_{n}+1)B_{n}/\beta_{n}>1\) and therefore \[\phi_{n}(\alpha)\geq\frac{n\beta_{n}^{2}}{32C}-\frac{2(L_{n}+1)\beta_{n}}{\lambda_{n}}\log\left(\frac{16K_{n}(L_{n}+1)(N_{n}+1)B_{n}}{\beta_{n}}\right)=\phi_{n}(\beta_{n}),\text{ for all }\beta_{n}<\alpha\leq 1<\alpha_{n}. \tag{7.28}\] With the assumptions \(\lambda_{n}\asymp(\log n)^{\nu_{3}}/n^{\nu_{4}}\) for some \(0<\nu_{4}<1\), \(\nu_{6}<1/2\) and (7.25), we get \[\varphi_{n}\coloneqq\phi_{n}(\beta_{n})\underset{n\to\infty}{\longrightarrow}\infty. \tag{7.29}\] One can see that \[\exp(-\beta 4^{j})\leq\exp(-\beta)\times\exp(-\beta j),\text{ for all }\beta\geq 0,\ j\in\mathbb{N}.\] Hence, it follows from (7.26), (7.28) and (7.29) that, for all \(\alpha\in(\beta_{n},1)\) and for \(n\) large enough, \[\sum_{j=1}^{\infty}\exp\left(2\frac{2^{j}\alpha}{\lambda_{n}}(L_{n}+1)\log\left(\frac{16K_{n}(L_{n}+1)(N_{n}+1)B_{n}}{2^{j}\alpha}\right)-\frac{(n2^{j}\alpha)^{2}/16}{nC+\log(\frac{n}{n^{1/4}C})(n2^{j}\alpha/8)^{\nu}}\right)\leq\sum_{j=1}^{\infty}\exp\big{(}-4^{j}\varphi_{n}\big{)}\leq\sum_{j=1}^{\infty}\exp(-\varphi_{n})\exp(-\varphi_{n}j)\leq\frac{\exp(-2\varphi_{n})}{1-\exp(-\varphi_{n})}\leq 2\exp(-2\varphi_{n}).\] Therefore, for sufficiently large \(n\), \[\int_{0}^{1}P(A_{2,n}>\alpha)d\alpha\leq\beta_{n}+\int_{\beta_{n}}^{1}P(A_{2,n}>\alpha)d\alpha\leq\beta_{n}+2\exp(-2\varphi_{n})\leq 2\beta_{n},\] where \(\beta_{n}\) is defined above and satisfies (7.25). Thus, for \(\nu_{5}>0,\ \nu_{6}<1/2\), we have \[\mathbb{E}[A_{2,n}]=\int_{0}^{\infty}P(A_{2,n}>\alpha)d\alpha\leq\int_{0}^{\beta_{n}}P(A_{2,n}>\alpha)d\alpha+\int_{\beta_{n}}^{1}P(A_{2,n}>\alpha)d\alpha+\int_{1}^{\alpha_{n}}P(A_{2,n}>\alpha)d\alpha+\int_{\alpha_{n}}^{\infty}P(A_{2,n}>\alpha)d\alpha\leq 2\beta_{n}+\frac{64CC_{3}\log n}{n}\exp\left(-\frac{n}{64C}\right)-\frac{64CC_{3}\log n}{n}\exp\left(-\frac{n}{64C}\times\frac{8}{n}\left(\frac{nC}{\log(\frac{n}{n^{1/4}C})}\right)^{1/\nu}\right)+32C_{3}\log n\frac{\log(\frac{n}{n^{1/4}C})}{n^{2-\nu}}\exp\left(-\frac{n^{2-\nu}}{32\log(\frac{n}{n^{1/4}C})/8^{\nu}}\times\frac{8}{n}\left(\frac{nC}{\log\frac{n}{n^{1/4}C}}\right)^{1/\nu}\right)\lesssim 2\beta_{n}. \tag{7.30}\]
For \(A_{4,n}\), we choose a neural network function \(h_{n}^{0}\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F)\) such that \[\|h_{n}^{0}-h^{*}\|_{2,P_{X_{0}}}^{2}+J_{\lambda_{n},\tau_{n}}(h_{n}^{0})\leq\inf_{h\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F)}\Big{[}\|h-h^{*}\|_{2,P_{X_{0}}}^{2}+J_{\lambda_{n},\tau_{n}}(h)\Big{]}+n^{-1}.\] Then, by the basic inequality \[\frac{1}{n}\sum_{i=1}^{n}\big{(}Y_{i}-\widehat{h}_{n}(X_{i})\big{)}^{2}+J_{\lambda_{n},\tau_{n}}(\widehat{h}_{n})\leq\frac{1}{n}\sum_{i=1}^{n}\left(Y_{i}-h(X_{i})\right)^{2}+J_{\lambda_{n},\tau_{n}}(h)\] for any \(h\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F)\), we have \[A_{4,n}=2\Big{[}\frac{1}{n}\sum_{i=1}^{n}(Y_{i}-\widehat{h}_{n}(X_{i}))^{2}-\frac{1}{n}\sum_{i=1}^{n}(Y_{i}-h_{n}^{0}(X_{i}))^{2}\Big{]}+2J_{\lambda_{n},\tau_{n}}(\widehat{h}_{n})+2\Big{[}\frac{1}{n}\sum_{i=1}^{n}(Y_{i}-h_{n}^{0}(X_{i}))^{2}-\frac{1}{n}\sum_{i=1}^{n}(Y_{i}-h^{*}(X_{i}))^{2}\Big{]}\leq 2J_{\lambda_{n},\tau_{n}}(h_{n}^{0})+2\Big{[}\frac{1}{n}\sum_{i=1}^{n}(Y_{i}-h_{n}^{0}(X_{i}))^{2}-\frac{1}{n}\sum_{i=1}^{n}(Y_{i}-h^{*}(X_{i}))^{2}\Big{]}, \tag{7.31}\] and hence \[\mathbb{E}[A_{4,n}]\leq 2J_{\lambda_{n},\tau_{n}}(h_{n}^{0})+\frac{2}{n}\sum_{i=1}^{n}\Big{[}\mathbb{E}(Y_{i}-h_{n}^{0}(X_{i}))^{2}-\mathbb{E}(Y_{i}-h^{*}(X_{i}))^{2}\Big{]}.\] One can see that, for \(i=1,\ldots,n\), \[\mathbb{E}[(Y_{i}-h_{n}^{0}(X_{i}))^{2}]-\mathbb{E}[(Y_{i}-h^{*}(X_{i}))^{2}]\leq\|h_{n}^{0}-h^{*}\|_{2,P_{X_{0}}}^{2}.\] Thus, \[\mathbb{E}[A_{4,n}]\leq 2J_{\lambda_{n},\tau_{n}}(h_{n}^{0})+2\|h_{n}^{0}-h^{*}\|_{2,P_{X_{0}}}^{2}\leq 2\inf_{h\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F)}\Bigl{[}\|h-h^{*}\|_{2,P_{X_{0}}}^{2}+J_{\lambda_{n},\tau_{n}}(h)\Bigr{]}+2n^{-1}.\] Thus, \[\mathbb{E}\Bigl{[}\|\widehat{h}_{n}-h^{*}\|_{2,P_{X_{0}}}^{2}\Bigr{]}=\mathbb{E}\left[\sum_{i=1}^{4}A_{i,n}\right]\lesssim 3\frac{\log n}{n}+2\frac{(\log n)^{\nu_{5}}}{n^{\nu_{6}}}+2\inf_{h\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F)}\Bigl{[}\|h-h^{*}\|_{2,P_{X_{0}}}^{2}+J_{\lambda_{n},\tau_{n}}(h)\Bigr{]}+2n^{-1}\lesssim 2\Biggl{(}\inf_{h\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F)}\{\|h-h^{*}\|_{2,P_{X_{0}}}^{2}+\lambda_{n}\|\theta(h)\|_{\mathrm{clip},\tau_{n}}\}\vee\frac{(\log n)^{\nu_{5}}}{n^{\nu_{6}}}\Biggr{)}. \tag{7.32}\] This completes the proof of the theorem.

### Proof of Theorem 4.2

Let \(\epsilon_{n}=n^{-\nu_{4}/(\kappa+2)}\).
From Theorem 4.1, the assumption (4.6), and the fact that \(\|\theta(h)\|_{\mathrm{clip},\tau}\leq\|\theta(h)\|_{0}\) for any \(\tau>0\), we have, for any \(h^{*}\in\mathcal{H}^{*}\), \[\sup_{h\in\mathcal{H}_{\rho,H^{*}}:h^{*}\in\mathcal{H}^{*}}\mathbb{E}\Bigl{[}\|\widehat{h}_{n}-h^{*}\|_{2,P_{X_{0}}}^{2}\Bigr{]}\lesssim\sup_{h\in\mathcal{H}_{\rho,H^{*}}:h^{*}\in\mathcal{H}^{*}}\inf_{h\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F,C\epsilon_{n}^{-\kappa}(\log n)^{\tau})}\Bigl{\{}\|h-h^{*}\|_{2,P_{X_{0}}}^{2}+\lambda_{n}\|\theta(h)\|_{\mathrm{clip},\tau_{n}}\Bigr{\}}\vee\frac{(\log n)^{\nu_{5}}}{n^{\nu_{6}}}\lesssim\sup_{h\in\mathcal{H}_{\rho,H^{*}}:h^{*}\in\mathcal{H}^{*}}\Bigl{[}\inf_{h\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F,C\epsilon_{n}^{-\kappa}(\log n)^{\tau})}\|h-h^{*}\|_{2,P_{X_{0}}}^{2}+\lambda_{n}\epsilon_{n}^{-\kappa}(\log n)^{\tau}\Bigr{]}\vee\frac{(\log n)^{\nu_{5}}}{n^{\nu_{6}}}\lesssim\sup_{h\in\mathcal{H}_{\rho,H^{*}}:h^{*}\in\mathcal{H}^{*}}\Bigl{[}\epsilon_{n}^{2}+\frac{(\log n)^{\nu_{3}}}{n^{\nu_{4}}}\epsilon_{n}^{-\kappa}(\log n)^{\tau}\Bigr{]}\vee\frac{(\log n)^{\nu_{5}}}{n^{\nu_{6}}}\lesssim\frac{(\log n)^{\tau+\nu_{3}}}{n^{2\nu_{4}/(\kappa+2)}}\vee\frac{(\log n)^{\nu_{5}}}{n^{\nu_{6}}}. \tag{7.33}\] Thus, the theorem follows.

### Proof of Theorem 5.1

Let us decompose \(\mathcal{E}_{Z_{0}}(\widehat{h}_{n})\) as follows: \[\mathcal{E}_{Z_{0}}(\widehat{h}_{n})=\mathbb{E}[\ell(Y_{0}\widehat{h}_{n}(X_{0}))]-\mathbb{E}[\ell(Y_{0}h_{\ell}^{*}(X_{0}))]\coloneqq B_{1,n}+B_{2,n}, \tag{7.34}\] where \[B_{1,n}=\Bigl{[}\mathbb{E}[\ell(Y_{0}\widehat{h}_{n}(X_{0}))]-\mathbb{E}[\ell(Y_{0}h_{\ell}^{*}(X_{0}))]\Bigr{]}-\frac{2}{n}\sum_{i=1}^{n}\Big{[}\ell(Y_{i}\widehat{h}_{n}(X_{i}))-\ell(Y_{i}h_{\ell}^{*}(X_{i}))\Bigr{]}-2J_{\lambda_{n},\tau_{n}}(\widehat{h}_{n});\quad B_{2,n}=\frac{2}{n}\sum_{i=1}^{n}\Big{[}\ell(Y_{i}\widehat{h}_{n}(X_{i}))-\ell(Y_{i}h_{\ell}^{*}(X_{i}))\Bigr{]}+2J_{\lambda_{n},\tau_{n}}(\widehat{h}_{n}).\] One can bound \(B_{1,n}\) in the same way as \(A_{2,n}\) in the proof of Theorem 4.1. Let \(\Delta(h)(Z_{0})\coloneqq\ell(Y_{0}h(X_{0}))-\ell(Y_{0}h_{\ell}^{*}(X_{0}))\), with \(Z_{0}\coloneqq(X_{0},Y_{0})\). Set \[\mathcal{H}_{n,j,\alpha}\coloneqq\{h\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F):2^{j-1}\mathds{1}(j\neq 0)\alpha\leq J_{\lambda_{n},\tau_{n}}(h)\leq 2^{j}\alpha\},\] and \[\mathcal{G}_{n,j,\alpha}\coloneqq\Big{\{}\Delta(h):\mathbb{R}^{d}\times\{-1,1\}\mapsto\mathbb{R}:h\in\mathcal{H}_{n,j,\alpha}\Big{\}}. \tag{7.35}\] It holds from the definition of \(h_{\ell}^{*}\) that \(\mathbb{E}\Delta(h)(Z_{0})\geq 0\).
We have, for all \(\alpha>0\), \[P(B_{1,n}>\alpha)\leq P\Big{(}\sup_{h\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F)}\frac{\mathbb{E}\Delta(h)(Z_{0})-\frac{1}{n}\sum_{i=1}^{n}\Delta(h)(Z_{i})}{\alpha+2J_{\lambda_{n},\tau_{n}}(h)+\mathbb{E}\Delta(h)(Z_{0})}\geq\frac{1}{2}\Big{)}\leq\sum_{j=0}^{\infty}P\Big{(}\sup_{h\in\mathcal{H}_{n,j,\alpha}}\frac{\mathbb{E}\Delta(h)(Z_{0})-\frac{1}{n}\sum_{i=1}^{n}\Delta(h)(Z_{i})}{2^{j}\alpha+\mathbb{E}\Delta(h)(Z_{0})}\geq\frac{1}{2}\Big{)}\leq\sum_{j=0}^{\infty}P\Big{(}\sup_{g\in\mathcal{G}_{n,j,\alpha}}\frac{\mathbb{E}g(Z_{0})-\frac{1}{n}\sum_{i=1}^{n}g(Z_{i})}{2^{j}\alpha}\geq\frac{1}{2}\Big{)}\leq\sum_{j=0}^{\infty}P\Big{(}\sup_{g\in\mathcal{G}_{n,j,\alpha}}\mathbb{E}g(Z_{0})-\frac{1}{n}\sum_{i=1}^{n}g(Z_{i})\geq\frac{2^{j}\alpha}{2}\Big{)}\leq\sum_{j=0}^{\infty}P\Big{(}\sup_{g\in\mathcal{G}_{n,j,\alpha}}\sum_{i=1}^{n}\Big{(}\mathbb{E}g(Z_{0})-g(Z_{i})\Big{)}\geq\frac{2^{j}\alpha n}{2}\Big{)}.\] Let \(\Delta(h_{1}),\ \Delta(h_{2})\in\mathcal{G}_{n,j,\alpha}\) and \((x,y)\in\mathbb{R}^{d}\times\{-1,1\}\). We have \[\|\Delta(h_{1})(x,y)-\Delta(h_{2})(x,y)\|_{\infty}\leq\|\ell(yh_{1}(x))-\ell(yh_{2}(x))\|_{\infty}\leq\mathcal{K}_{\ell}|h_{1}(x)-h_{2}(x)|.\] Thus, \[\mathcal{N}\Big{(}\varepsilon,\mathcal{G}_{n,j,\alpha},\|\cdot\|_{\infty}\Big{)}\leq\mathcal{N}\Big{(}\frac{\varepsilon}{\mathcal{K}_{\ell}},\mathcal{H}_{n,j,\alpha},\|\cdot\|_{\infty}\Big{)}.\] As stressed in the proof of Theorem 4.1, the process \(\Big{(}g(Z_{t})\coloneqq\ell(Y_{t}h(X_{t}))\Big{)}_{t\in\mathbb{Z}}\) is also \(\psi\)-weakly dependent. From (7.9) and (7.10), we have \[\mathcal{N}(\varepsilon,\mathcal{G}_{n,j,\alpha},\|\cdot\|_{\infty})\leq\mathcal{N}\Big{(}\frac{\varepsilon}{\mathcal{K}_{\ell}},\mathcal{H}_{n,j,\alpha},\|\cdot\|_{\infty}\Big{)}\leq\mathcal{N}\Big{(}\frac{\varepsilon}{\mathcal{K}_{\ell}},\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F,\frac{2^{j}\alpha}{\lambda_{n}}),\|\cdot\|_{\infty}\Big{)}\leq\exp\Big{(}2\frac{2^{j}\alpha}{\lambda_{n}}(L_{n}+1)\log\Big{(}\frac{(L_{n}+1)(N_{n}+1)B_{n}}{\varepsilon/\mathcal{K}_{\ell}-\tau_{n}(L_{n}+1)((N_{n}+1)B_{n})^{L_{n}+1}}\Big{)}\Big{)}. \tag{7.36}\] Let \(\varepsilon\coloneqq\frac{2^{j}\alpha}{2}\). By using similar arguments as in (7.14), we have \[\sum_{j=1}^{\infty}P\Big{\{}\sup_{g\in\mathcal{G}_{n,j,\alpha}}\left[\mathbb{E}[g(Z_{0})]-\frac{1}{n}\sum_{i=1}^{n}g(Z_{i})\right]>\frac{2^{j}\alpha}{2}\Big{\}}\leq C_{3}\exp(\log\log n)\sum_{j=1}^{\infty}\exp\left(2\frac{2^{j}\alpha}{\lambda_{n}}(L_{n}+1)\log\left(\frac{(L_{n}+1)(N_{n}+1)B_{n}}{\frac{2^{j}\alpha}{2\mathcal{K}_{\ell}}-\tau_{n}(L_{n}+1)((N_{n}+1)B_{n})^{L_{n}+1}}\right)-\frac{(n2^{j}\alpha)^{2}/16}{nC+\log(\frac{n}{n^{1/4}C})(n2^{j}\alpha/8)^{\nu}}\right).\] With the assumptions \(L_{n}\lesssim\log n,\ N_{n}\lesssim n^{\nu_{1}},\ B_{n}\lesssim n^{\nu_{2}},\ \tau_{n}\leq\frac{\beta_{n}}{4\mathcal{K}_{\ell}(L_{n}+1)((N_{n}+1)B_{n})^{L_{n}+1}}\), and under the conditions on \(\nu_{1}>0,\ \nu_{2}>0,\ \beta_{n},\ \nu_{5},\ \nu_{6}\), by proceeding as for \(A_{2,n}\) (see the proof of Theorem 4.1), we get \[\mathbb{E}[B_{1,n}]\lesssim 2\frac{(\log n)^{\nu_{5}}}{n^{\nu_{6}}}.\] Let us now deal with \(B_{2,n}\).
We choose a neural network function \(h_{n}^{\circ}\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F)\) such that \[\mathcal{E}_{Z_{0}}(h_{n}^{\circ})+J_{\lambda_{n},\tau_{n}}(h_{n}^{\circ})\leq\inf_{h\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F)}\Bigl{[}\mathcal{E}_{Z_{0}}(h)+J_{\lambda_{n},\tau_{n}}(h)\Bigr{]}+\frac{1}{n}. \tag{7.37}\] Then, from the basic inequality \[\frac{1}{n}\sum_{i=1}^{n}\ell(Y_{i}\widehat{h}_{n}(X_{i}))+J_{\lambda_{n},\tau_{n}}(\widehat{h}_{n})\leq\frac{1}{n}\sum_{i=1}^{n}\ell(Y_{i}h(X_{i}))+J_{\lambda_{n},\tau_{n}}(h),\] we have \[B_{2,n}=\frac{2}{n}\sum_{i=1}^{n}\left[\ell(Y_{i}\widehat{h}_{n}(X_{i}))-\ell(Y_{i}h_{n}^{\circ}(X_{i}))\right]+2J_{\lambda_{n},\tau_{n}}(\widehat{h}_{n})+\frac{2}{n}\sum_{i=1}^{n}\Bigl{[}\ell(Y_{i}h_{n}^{\circ}(X_{i}))-\ell(Y_{i}h_{\ell}^{*}(X_{i}))\Bigr{]}\leq 2J_{\lambda_{n},\tau_{n}}(h_{n}^{\circ})+\frac{2}{n}\sum_{i=1}^{n}\Bigl{[}\ell(Y_{i}h_{n}^{\circ}(X_{i}))-\ell(Y_{i}h_{\ell}^{*}(X_{i}))\Bigr{]},\] and hence, taking expectations and using (7.37), \[\mathbb{E}[B_{2,n}]\leq 2J_{\lambda_{n},\tau_{n}}(h_{n}^{\circ})+2\mathcal{E}_{Z_{0}}(h_{n}^{\circ})\leq 2\inf_{h\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F)}\Bigl{[}\mathcal{E}_{Z_{0}}(h)+J_{\lambda_{n},\tau_{n}}(h)\Bigr{]}+\frac{2}{n}.\] Hence \[\mathbb{E}\Bigl{[}\mathcal{E}_{Z_{0}}(\widehat{h}_{n})\Bigr{]}\lesssim 2\inf_{h\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F)}\Bigl{\{}\mathcal{E}_{Z_{0}}(h)+\lambda_{n}\|\theta(h)\|_{\mathrm{clip},\tau_{n}}\Bigr{\}}\vee\frac{(\log n)^{\nu_{5}}}{n^{\nu_{6}}}.\] This establishes the theorem.

### Proof of Theorem 5.2

Under the assumption that \(\ell\) is \(\mathcal{K}_{\ell}\)-Lipschitz, we have \[|\mathcal{E}_{Z_{0}}(h)|=|\mathbb{E}[\ell(Y_{0}h(X_{0}))]-\mathbb{E}[\ell(Y_{0}h_{\ell}^{*}(X_{0}))]|\leq\mathbb{E}[|\ell(Y_{0}h(X_{0}))-\ell(Y_{0}h_{\ell}^{*}(X_{0}))|]\leq\mathcal{K}_{\ell}\mathbb{E}[|Y_{0}|\,|h(X_{0})-h_{\ell}^{*}(X_{0})|]=\mathcal{K}_{\ell}\mathbb{E}[|h(X_{0})-h_{\ell}^{*}(X_{0})|],\] since \(|Y_{0}|=1\). Thus, \[\mathcal{E}_{Z_{0}}(h)\leq\mathcal{K}_{\ell}\|h-h_{\ell}^{*}\|_{1,P_{X_{0}}}.\] Let \(\epsilon_{n}=n^{-\nu_{4}/(\kappa+1)}\). By Theorem 5.1, the condition (5.6), and the fact that \(\|\theta(h)\|_{\mathrm{clip},\tau}\leq\|\theta(h)\|_{0}\) for any \(\tau>0\), we have that, for any \(h_{\ell}^{*}\in\mathcal{H}^{*}\), \[\mathbb{E}\Bigl{[}\mathcal{E}_{Z_{0}}(\widehat{h}_{n})\Bigr{]}\lesssim\inf_{h\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F,C\epsilon_{n}^{-\kappa}(\log n)^{r})}\Bigl{\{}\mathcal{E}_{Z_{0}}(h)+\lambda_{n}\|\theta(h)\|_{\mathrm{clip},\tau_{n}}\Bigr{\}}\vee\frac{(\log n)^{\nu_{5}}}{n^{\nu_{6}}}\lesssim\Bigl{[}\inf_{h\in\mathcal{H}_{\sigma}(L_{n},N_{n},B_{n},F,C\epsilon_{n}^{-\kappa}(\log n)^{r})}\|h-h_{\ell}^{*}\|_{1,P_{X_{0}}}+\lambda_{n}\epsilon_{n}^{-\kappa}(\log n)^{r}\Bigr{]}\vee\frac{(\log n)^{\nu_{5}}}{n^{\nu_{6}}}\lesssim\frac{(\log n)^{r+\nu_{3}}}{n^{\nu_{4}/(\kappa+1)}}\vee\frac{(\log n)^{\nu_{5}}}{n^{\nu_{6}}}.\] Thus, \[\sup_{h_{\ell}^{*}\in\mathcal{H}^{*}}\mathbb{E}\Bigl{[}\mathcal{E}_{Z_{0}}(\widehat{h}_{n})\Bigr{]}\lesssim\frac{(\log n)^{r+\nu_{3}}}{n^{\nu_{4}/(\kappa+1)}}\vee\frac{(\log n)^{\nu_{5}}}{n^{\nu_{6}}}. \tag{7.38}\]
2303.15301
PeakNet: An Autonomous Bragg Peak Finder with Deep Neural Networks
Serial crystallography at X-ray free electron laser (XFEL) and synchrotron facilities has experienced tremendous progress in recent times, enabling novel scientific investigations into macromolecular structures and molecular processes. However, these experiments generate a significant amount of data, posing computational challenges in data reduction and real-time feedback. A Bragg peak finding algorithm is used to identify useful images and also to provide real-time feedback about the hit rate and resolution. Shot-to-shot intensity fluctuations and strong background scattering from the buffer solution, injection nozzle, and other shielding materials make this a time-consuming optimization problem. Here, we present PeakNet, an autonomous Bragg peak finder that utilizes deep neural networks. The development of this system 1) eliminates the need for manual algorithm parameter tuning, 2) reduces false-positive peaks by adjusting to shot-to-shot variations in strong background scattering in real time, and 3) eliminates the laborious task of manually creating bad pixel masks and the need to store these masks per event, since these can be regenerated on demand. PeakNet also exhibits exceptional runtime efficiency, processing a 1920-by-1920 pixel image in around 90 ms on an NVIDIA 1080 Ti GPU, with the potential for further enhancements through parallelized analysis or GPU stream processing. PeakNet is well-suited for expert-level real-time serial crystallography data analysis at high data rates.
Cong Wang, Po-Nan Li, Jana Thayer, Chun Hong Yoon
2023-03-24T07:34:09Z
http://arxiv.org/abs/2303.15301v3
# PeakNet: Bragg peak finding in X-ray crystallography experiments with U-Net

###### Abstract

Serial crystallography at X-ray free electron laser (XFEL) sources has experienced tremendous progress in achieving high data rates in recent times. While this development offers the potential to enable novel scientific investigations, such as imaging molecular events at logarithmic timescales, it also poses challenges with regard to real-time data analysis, which involves some degree of data reduction to save only those features or images pertaining to the science on disk. If data reduction is not effective, it could directly result in a substantial increase in facility budgetary requirements, or even hinder the utilization of ultra-high repetition imaging techniques by making data analysis unwieldy. Furthermore, an additional challenge involves providing real-time feedback to users derived from real-time data analysis. In the context of serial crystallography, the initial and critical step in real-time data analysis is finding X-ray Bragg peaks in diffraction images. To tackle this challenge, we present PeakNet, a Bragg peak finder that utilizes neural networks and runs about four times faster than the Psocake peak finder, while delivering significantly better indexing rates and a comparable number of indexed events. We formulated the task of peak finding as a semantic segmentation problem, which is implemented as a classical U-Net architecture. A key advantage of PeakNet is its ability to scale linearly with respect to data volume, making it well-suited for real-time serial crystallography data analysis at high data rates.

## I Introduction

Serial femtosecond crystallography (SFX) with X-ray free electron lasers (XFEL) has enabled structural determination of macromolecules by outrunning radiation damage at room temperature, an approach commonly known as _diffraction-before-destruction_ [1, 15, 14]. Time-resolved serial femtosecond crystallography (TR-SFX) at XFEL facilities, like the Linac Coherent Light Source (LCLS), allows the investigation of biochemical reactions with picosecond time resolution, which has led to many significant structural studies [11, 13, 12, 14, 15, 16, 17, 18] since the first published work at LCLS [1]. In the meantime, the data rate at XFEL facilities has also gradually increased from hundreds of Hz to the MHz range, with the European X-ray Free Electron Laser (EuXFEL) being the first MHz facility available to users and LCLS-II set to launch in 2023. At the MHz data rate, novel real-time data analysis approaches are needed to tackle two major challenges that arise in SFX experiments, namely data reduction and real-time feedback. Firstly, data reduction aims to veto, extract and compress scientific data by identifying important events, commonly referred to as "hits", and vetoing irrelevant ones from terabyte-per-second data streams. One typical approach for SFX hit finding utilizes existing Bragg peak finders, such as the Cheetah peak finder [1], the robust peak finder [1], the DIALS peak finder [15] or the Psocake peak finder [14]. Additionally, a neural network-based SFX hit finder was reported to achieve a processing rate of 1.3 kHz on a single GPU [17]. An ORB feature extractor combined with a multilayer perceptron (MLP) hit classifier also showed promising results for SFX data [10]. Secondly, real-time feedback in SFX experiments provides users with information on the experimental conditions and allows them to make real-time decisions that can optimize the data acquisition process.
For example, it is critical to fine-tune the hit rate to maximize the usage of collected data; it is also important to know the current diffraction limit, crystalline mosaicity, reciprocal space coverage, and other relevant factors over time. Therefore, hit finding alone is insufficient to provide the necessary insights, making real-time Bragg peak finding crucial for delivering fast and comprehensive feedback. The previously mentioned peak finder in Psocake runs at roughly 5 Hz on a data center CPU, which may result in substantial cost when attempting to scale up the process. In this work, we present PeakNet, which addresses the problem of high-data-rate peak finding by converting it into the task of peak segmentation using neural networks. To accomplish this goal, we propose to train a neural network model to classify each individual pixel as either belonging to a peak or not belonging to a peak. This is followed by treating a connected area of pixels as a single peak, a task that can be readily accomplished through connected component analysis [22]. Subsequently, the peak coordinates are determined by computing the center of mass of all the pixels that belong to the identified peak (a minimal sketch of this post-processing step is given at the end of this section). It is worth noting that the entire peak finding process takes place on GPUs, offering a fully parallelizable approach for achieving scalability. Concretely, our contributions can be summarized as follows:

* We introduce a deep neural network-based Bragg peak finder, named PeakNet, which can effectively and efficiently segment hundreds or even thousands of Bragg peaks with various peak profiles in detector images of any dimension. Despite being trained on peaks labeled by an existing peak finder, we highlight that PeakNet's capability is not limited by the peak finder used in labeling.
* We demonstrate the efficacy of PeakNet by evaluating its performance on large multi-panel (\(1739\times 1748\)) and single-panel (\(1920\times 1920\)) detectors. It significantly outperformed existing methods in terms of indexing rate, while delivering comparable numbers of indexed events. We also fine-tuned the underlying U-Net architecture of PeakNet to strike an optimal balance between model efficacy and runtime performance, leading to a four-times out-of-the-box processing speed improvement over the Psocake peak finder.
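The post-processing just described admits a compact illustration. The following is a minimal sketch under our own naming conventions (it is not taken from the PeakNet codebase): it binarizes a segmentation map, groups peak pixels by connected component analysis, and reports each peak as the center of mass of its pixels.

```python
import numpy as np
from scipy import ndimage

def peaks_from_segmap(segmap, prob_threshold=0.5):
    """Convert a pixel-wise peak probability map into peak coordinates.

    segmap : 2D array of per-pixel peak probabilities (network output).
    Returns an (N, 2) array of (row, col) peak centers.
    """
    # Binarize the probability map.
    mask = segmap > prob_threshold
    # Connected component analysis: each connected blob is one peak.
    labels, num_peaks = ndimage.label(mask)
    # Center of mass of each labeled blob, weighted by the probabilities.
    centers = ndimage.center_of_mass(segmap, labels, range(1, num_peaks + 1))
    return np.asarray(centers)

# Example on a synthetic 1920-by-1920 "detector image".
rng = np.random.default_rng(0)
fake_segmap = rng.random((1920, 1920)) * 0.1   # low-probability background
fake_segmap[100:103, 200:203] = 0.9            # one synthetic peak blob
print(peaks_from_segmap(fake_segmap))          # approximately [[101., 201.]]
```

As discussed later in the runtime analysis, it is this connected-component step, rather than the network forward pass, that dominates the per-image processing time.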
## II Related work

Reliable and automatic peak finding algorithms have undergone multiple iterations of development. An early approach involved utilizing template matching [23] based on libraries of peak shapes, but this method was found to be slow and inefficient during runtime, especially when dealing with peaks with low signal-to-noise ratios (SNR). The main obstacle lies in the localization of peaks and the tuning of user parameters for pattern matching, compounded by the difficult task of developing a comprehensive library of peak shapes. Another influential approach employed region-growing techniques [1] to detect pixel areas with high skewness. In the context of serial crystallography with XFEL sources, the efficacy of this method was diminished due to the lack of sufficient data to provide reliable statistics, rendering it susceptible to outliers and less adept at analyzing weak peaks at higher scattering angles, which are essential for achieving atomic resolution in structure determination. While template matching and region-growing techniques require users to provide prior information like threshold values, the Robust Peak Finder (RPF) [10] has demonstrated proficient peak finding capabilities with minimal user input. This technique utilizes the Modified Selective Statistical Estimator (MSSE) method to segment between background pixels and actual peaks. Furthermore, the RPF method can also parallelize the processing of diffraction patterns, making it suitable for real-time data analysis.

In the U-Net backbone of PeakNet (Fig. 2), the number of feature channels doubles with respect to its previous contracting layer. Likewise, a single expanding path contains a stride-two \(2\times 2\) transposed convolutional layer concatenated with the corresponding feature map from the contracting path. This is followed by two stride-one \(3\times 3\) convolutional layers connected to a ReLU activation function. It is worth noting that feature map cropping may be necessary during the concatenation between feature maps from the two paths. In the last layer, all 8 channels are mapped to a single channel through a \(1\times 1\) convolution.

### _The loss function: focal loss_

A main technical issue in training U-Net for Bragg peak detection is the extreme peak-background class imbalance (e.g., \(>10^{3}:1\)), resulting in reduced prediction accuracy. This may not pose a problem in previous peak finders like BraggNet, which operates on a much smaller area of \(32\times 32\), whereas PeakNet is trained on larger multi-panel detectors, such as the Cornell-SLAC pixel array detector (CSPAD) [1] with a dimension of \(194\times 185\) for one of its panels. Focal loss was introduced to address the extreme class imbalance problem for object detectors [10]. It is defined as: \[\text{FL}(p)=-\alpha\cdot(1-p)^{\gamma}\cdot\log p. \tag{1}\] Here, \(p\) is the probability of a pixel \(x\) being a peak, \(\gamma\) controls the modulating factor \((1-p)^{\gamma}\), and \(\alpha\) is an additional weighting factor. We used \(\gamma=2.0\) and \(\alpha=1.2\) in PeakNet. In practice, numerical instability will often emerge in the calculation of \(p=e^{-\text{logit}}\), where \(\text{logit}=\log{(1+e^{-x})}\). To bypass this issue, logit needs to be rewritten as \(\text{logit}=a+\log{(e^{-a}+e^{-x-a})}\), where \(a=\max(0,-x)\). As a result, if we denote the label of a pixel \(x\) as \(y\), the focal loss becomes \[\text{FL}(p)=\alpha\cdot y\cdot(1-p)^{\gamma}\cdot\text{logit}+(1-y)\cdot p^{\gamma}\cdot x+(1-y)\cdot p^{\gamma}\cdot\text{logit}. \tag{2}\]
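The numerically stable form in Eq. (2) can be written out directly. The snippet below is a minimal illustrative sketch (our own code, not the authors' implementation), computing the focal loss from raw network outputs \(x\) and binary labels \(y\) exactly as in Eq. (2):

```python
import numpy as np

def focal_loss(x, y, alpha=1.2, gamma=2.0):
    """Numerically stable per-pixel focal loss, following Eq. (2).

    x : raw network outputs (pre-sigmoid logits), any shape.
    y : binary labels of the same shape (1 = peak pixel, 0 = background).
    """
    # Stable computation of logit = log(1 + exp(-x)); then p = exp(-logit)
    # is the sigmoid of x, evaluated without overflow for large |x|.
    a = np.maximum(0.0, -x)
    logit = a + np.log(np.exp(-a) + np.exp(-x - a))
    p = np.exp(-logit)

    # Eq. (2): modulated cross-entropy terms for peak and background pixels,
    # where -log(p) = logit and -log(1 - p) = x + logit.
    loss = (alpha * y * (1.0 - p) ** gamma * logit
            + (1.0 - y) * p ** gamma * x
            + (1.0 - y) * p ** gamma * logit)
    return loss.mean()

# Tiny usage example: one confidently correct peak pixel and one
# confidently correct background pixel give a near-zero loss.
print(focal_loss(np.array([8.0, -6.0]), np.array([1.0, 0.0])))
```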
## IV Experiments

### _PeakNet on multi-panel detectors: Trained on panels but predicting on assembled images._

We demonstrated the possibility of PeakNet making predictions on assembled images from a multi-panel detector despite being trained on individual panels. To facilitate this investigation, we utilized the multi-panel CSPAD detector, which is widely used in serial crystallography experiments at LCLS. Specifically, we used crystallographic data of streptavidin crystals from LCLS experiment _cxic0415_ run _101_ at the Coherent X-ray Imaging (CXI) instrument, where the CSPAD deployed consists of 32 panels. An individual diffraction image recorded on the CSPAD is conventionally saved in a stack of 64 mini-panels (half of a physical CSPAD panel). For data labeling, we randomly selected 268 mini-panels, which were labeled using the Psocake peak finder with the default settings. We then split the labeled data into a 60% training set and a 20% validation set. The remaining 20% of the data were initially set aside for testing purposes, but were not used due to the lack of an effective metric, which will be discussed later in the evaluation of our model's efficacy. Our test set included the first 5000 events of the run, which facilitated a faster feedback loop compared with going through all events in the run. Notably, we did not apply any image preprocessing, and thus PeakNet directly takes in a diffraction image of its original size and outputs a segmentation map.

Fig. 1: Peak finding process in PeakNet.

Fig. 3 (a) displays labeled samples, with the input mini-panel image on the left and pixel-wise labeled peaks on the right. The overall peak label is a binary mask that shares the same size as the input, with a value of one indicating a peak pixel and zero otherwise. Consequently, each input consists of a \(185\times 194\) mini-panel from the CSPAD and a binary mask of the same size. Additionally, Fig. 3 (b) shows a sample model prediction on an entire CSPAD image with a size of \(1739\times 1748\).

### _Examining PeakNet's generalizability beyond the peak finder used in labeling_

The current strategy for data labeling involves using an existing peak finder, and it is important to evaluate whether any limitations of the peak finder used in data labeling will affect the peak finding capability of PeakNet. One specific limitation to investigate is the size dependency of the peak finder. Notably, the peak size range is a user-defined input requested by the Psocake peak finder. This works well when diffraction spots have consistent sizes. However, detectors such as the Rayonix have a large point spread function for intense peaks, which may result in large peaks being ignored by the Psocake peak finder. Consequently, there is a concern about whether PeakNet will inherit the same limitations, since it is trained on Psocake peak labels. Fig. 4 presents a side-by-side comparison of peak detection on a Rayonix image, with the PeakNet results displayed on the left and the Psocake results on the right. The diffraction data were collected from crystals of SARS-CoV-2 proteases using the Macromolecular Femtosecond Crystallography (MFX) instrument [15] at LCLS. Upon visual inspection of Fig. 4 (c) and (d), both PeakNet and Psocake were observed to detect a considerable number of peaks, suggesting a potentially comparable performance between the two methods. However, upon closer inspection of the view near the area slightly below the beamstop position, one obvious, reasonably sized Bragg peak was observed but not detected by the Psocake peak finder, whereas PeakNet successfully located this peak. This finding provides direct evidence that PeakNet's peak finding capability is not strictly limited by the peak finder used in the data labeling process. In fact, PeakNet appears to possess its own understanding of Bragg peaks, as evidenced by the exclusion of a weak single Bragg peak-like spot near the upper edge of the image, as shown in Fig. 4 (a) and (b). Therefore, it is important to note that PeakNet is not merely a neural network-based replica of the Psocake peak finder, despite being trained on labels supplied by it.

### _Model efficacy and runtime performance_

We consider the model efficacy of a peak finder in terms of two metrics: (1) the indexing rate and (2) the number of indexed events. Table I provides a comparative analysis of two distinct datasets, CXI and MFX, showcasing the outcomes obtained when processed using two peak finders, Psocake and PeakNet.
The detectors employed for the respective datasets are the CSPAD and the Rayonix. The analyzed events for the CXI dataset span from 0 to 5000, whereas the MFX dataset encompasses a range of 0 to 8998. Psocake identified 2107 events in the CXI dataset and 465 in the MFX dataset, yielding indexing rates of 71.2% and 84.5%, respectively. In contrast, PeakNet discovered 1991 events in the CXI dataset and 430 in the MFX dataset, exhibiting superior indexing rates of 78.6% and 90.2%, respectively.

Fig. 2: The U-Net architecture in PeakNet.

Although both peak finders exhibit varying levels of effectiveness, our evaluation has revealed that PeakNet achieves superior indexing rates across both datasets, while also yielding comparable numbers of indexed events. These results suggest that PeakNet may represent a more reliable and efficient solution for the analysis of such data. To provide further context and justification for this efficacy measure, it is important to note that conventional metrics like accuracy, precision and F-1 scores are not always well-defined in the context of peak finding, due to the lack of actual "ground truth". This is because the peak labels used for training and evaluation are derived from another peak finder, rather than being uniquely determined in some fashion. As a result, it is necessary to consider alternative measures of model performance and reliability, such as the indexing rate and the number of indexed events. However, conventional metrics may remain a viable option when applied to synthetic data generated by a simulator. For clarification purposes, we used the _indexamajig_ tool within the _CrystFEL_ data analysis software [17] to perform indexing. We employed multiple indexing methods, including _mosflm_, _xds_ and _xgandalf_, with the additional setting _--int-radius=4,5,6_.

Fig. 3: (a) PeakNet input images and peak labels. (b) PeakNet prediction on an entire CSPAD image. (c) A magnified view of a small region of the detector.

Furthermore, while the runtime performance of PeakNet is excellent compared with other crystallography data analysis methods in Table II, there is a caveat to consider. Although PeakNet is designed as a nearly end-to-end solution, with its neural networks returning a segmentation map that represents the pixel-wise probability of being a peak from input diffraction images, we must employ connected component analysis to convert the segmentation maps into peak coordinates. While the neural network runtime is exceptionally fast, capable of processing 11 Rayonix images (\(1920\times 1920\)) in parallel on a 1080 Ti GPU in just 4 ms (equivalent to 0.36 ms per image), the additional time required for connected component analysis is considerably longer and is thus a significant runtime bottleneck in the current version of PeakNet. In fact, this bottleneck processing time scales nearly linearly with respect to the number of images, and it adds an additional 30 ms of processing time for each image despite running on the same GPU.

Fig. 4: Peak finding on a Rayonix image using (c) PeakNet and (d) Psocake. (a), (b), (c) and (d) are magnified views of some regions.

Practically speaking, the current version of PeakNet runs at 49 ms per Rayonix event on a 1080 Ti, which is actually 10\(\times\) faster than some diffraction image classification algorithms.

## V Conclusions

In this paper, we introduced PeakNet, a deep learning-based peak finder that efficiently tackles the Bragg peak finding problem as a semantic segmentation task.
The U-Net architecture of PeakNet has been optimized to achieve an ideal balance between efficacy and runtime performance, resulting in a processing speed four times faster than the Psocake peak finder while maintaining superior indexing rates and a comparable number of indexed events. A notable bottleneck in the current version of PeakNet is the connected component analysis, which scales linearly with input data volume, whereas the neural network portion scales beyond linear capability on GPUs, limited largely by GPU memory. It is worth mentioning that the current data reduction concept primarily focuses on processing single events as quickly as possible, centering the design principle on CPUs. However, with neural networks, we propose a new direction towards batch image processing on GPUs, exemplified by PeakNet in the context of serial crystallography.

## Acknowledgment

Earlier versions of the U-Net models were built and tested by P.L. under the supervision of C.H.Y. The experiments were designed by C.W. with inputs from J.B.T. and C.H.Y. C.W. prepared the datasets, labeled experimental data, and trained and evaluated the refined neural network model. The manuscript was written by C.W. and C.H.Y. with input from all authors. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number FWP-100643. Use of the Linac Coherent Light Source (LCLS), SLAC National Accelerator Laboratory, is supported by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Contract No. DE-AC02-76SF00515.
2305.08960
One Forward is Enough for Neural Network Training via Likelihood Ratio Method
While backpropagation (BP) is the mainstream approach for gradient computation in neural network training, its heavy reliance on the chain rule of differentiation constrains the design flexibility of network architectures and training pipelines. We avoid the recursive computation in BP and develop a unified likelihood ratio (ULR) method for gradient estimation with just one forward propagation. Not only can ULR be extended to train a wide variety of neural network architectures, but the computation flow in BP can also be rearranged by ULR for better device adaptation. Moreover, we propose several variance reduction techniques to further accelerate the training process. Our experiments offer numerical results across diverse aspects, including various neural network training scenarios, computation flow rearrangement, and fine-tuning of pre-trained models. All findings demonstrate that ULR effectively enhances the flexibility of neural network training by permitting localized module training without compromising the global objective and significantly boosts the network robustness.
Jinyang Jiang, Zeliang Zhang, Chenliang Xu, Zhaofei Yu, Yijie Peng
2023-05-15T19:02:46Z
http://arxiv.org/abs/2305.08960v2
# Training Neural Networks without Backpropagation:

###### Abstract

Backpropagation (BP) is the most important gradient estimation method for training neural networks in deep learning. However, the literature shows that neural networks trained by BP are vulnerable to adversarial attacks. We develop the likelihood ratio (LR) method, a new gradient estimation method, for training a broad range of neural network architectures, including convolutional neural networks, recurrent neural networks, graph neural networks, and spiking neural networks, without recursive gradient computation. We propose three methods to efficiently reduce the variance of the gradient estimation in the neural network training process. Our experiments yield numerical results for training different neural networks on several datasets. All results demonstrate that the LR method is effective for training various neural networks and significantly improves the robustness of the neural networks under adversarial attacks relative to the BP method.

## 1 Introduction

Neural networks have increasingly more successful applications, such as face recognition [1; 2; 3; 4], voice verification [5; 6; 7; 8] and autonomous vehicles [9; 10; 11]. Different model architectures are suitable for processing different tasks. For example, the multi-layer perceptron (MLP) [12] has good performance on structural data, the convolutional neural network (CNN) [13] is typically adopted for image tasks, the recurrent neural network (RNN) [14] is usually employed to process time series data, and the graph neural network (GNN) [15] is often used to extract potential relationships between entities. Brain-like neural network structures include spiking neural networks (SNNs) [16], where neurons communicate via discrete spike sequences; they are ideal models for event-driven information processing. Backpropagation (BP) [12] is the most important gradient estimation method used for training neural networks [17; 18]. It relies on the chain rule of differentiation to backpropagate the residual error from the loss function to the previous layers for gradient computation. Despite its widespread use, recent research has highlighted the vulnerability of neural networks trained by BP, particularly under adversarial attacks [19; 20]. Adversarial attacks refer to subtle manipulations of input data that are imperceptible to humans but can easily deceive neural networks [21]. Meanwhile, BP requires the activation and loss functions of the neural networks to be continuous, which limits the freedom of network architecture for modeling complex environments [22]. In contrast to BP, some existing methods entirely abandon the backward process and only require the forward pass for gradient estimation in optimizing the neural networks [23; 24]. One family of training methods without BP performs optimization following the gradient estimated from the correlation between the parameter perturbation and network outcomes. Unfortunately, they usually suffer from the curse of dimensionality and require significant manual tuning to identify appropriate hyper-parameters [25; 26], so their applicability to deep neural network training is limited. The likelihood ratio (LR) method has emerged as a promising alternative to BP for training neural networks [27]. It provides an unbiased gradient estimation without relying on recursive gradient calculation. A previous study developed an LR method to train MLPs on the MNIST dataset.
However, there is still significant work to be done in extending the LR method to more widely used neural network architectures. In our paper, we develop a new LR method to train a broader range of neural network architectures. We further propose several methods to reduce the variance of the gradient estimation, thereby enhancing the stability of the neural network training process. We conduct extensive experiments on various neural network architectures, i.e., the CNN, RNN, GNN, and SNN, for corresponding classical tasks to verify the effectiveness of the proposed method, including classification on the Cifar-10 [28], Ag-News [29], Cora [30], MNIST [31], and Fashion-MNIST [32] datasets. Our experimental results demonstrate the efficacy of our proposed method in achieving comparable or even better performance than BP while also improving the robustness of the trained models.

## 2 Related Work

### Optimizations of neural networks

Optimization methods for neural network training can be roughly categorized into two types, with and without BP. BP-based optimization methods have been studied for a long time and have been effective in training various neural networks on different tasks. Among these methods, the vanilla BP method is utilized for computing the gradients of the neural network parameters with respect to the global loss function. BP relies on perfect knowledge of the computation details between the parameters and the loss evaluation, which limits the flexibility of developing neural network architectures and training. Moreover, the neural networks trained by BP are vulnerable to adversarial attacks. Some studies have explored alternative methods without BP for training neural networks. [23] proposes a simultaneous perturbation stochastic approximation (SPSA) method, which approximates the gradient by evaluating the objective function twice with perturbations of opposite signs on the parameters. SPSA is typically sensitive to the choice of hyperparameters. Another approach is the evolution strategy (ES) [24], which injects a population of random noises into the model and optimizes the parameters along the optimal direction of the noises. Both SPSA and ES suffer from heavy performance drops as the number of parameters increases, which hinders their application to sophisticated models in deep learning. [27] proposes a novel LR method for training neural networks, which is based on an unbiased gradient estimation with respect to the true gradient computed by BP. However, previous research has primarily focused on the training of MLPs. In our work, we extend LR to a wider range of neural network architectures.

### Robustness of neural networks

Neural networks are vulnerable to various types of input corruption, including natural noise, out-of-distribution inputs, and adversarial attacks. Natural noise can be caused by various factors such as sensor noise, compression, or transmission errors. The benchmark proposed by [33] evaluates the robustness of neural networks against various types of natural noise, including Gaussian noise, uniform noise, Poisson noise, etc. Out-of-distribution (OOD) robustness is another challenge for real-world applications of neural networks due to the distribution shift between the training and test data. Adversarial attacks are created by adding imperceptible perturbations to benign samples, specifically designed to fool deep neural networks.
Although adversarial attacks were first identified in the image domain, subsequent research has revealed their existence across various modalities such as texts, structural graphs, etc. For images, the most widely used methods are the fast gradient sign method (FGSM), the iterative fast gradient sign method (I-FGSM), and the momentum-based iterative fast gradient sign method (MI-FGSM), which employ the sign of the gradient to craft the adversarial perturbation efficiently. For texts, [34] proposes the genetic algorithm (GA) to craft adversarial text examples with words substituted. [35] proposes the probability-weighted word saliency (PWWS) algorithm, which generates adversarial text examples with semantic consistency by synonym substitution. For graphs, [36] proposes to fool GNNs by randomly injecting fake edges into the graph (R.A.). [37] proposes to craft adversarial graph examples by manipulating the graph to disconnect internally and connect externally (DICE). In this work, we evaluate the performance of different optimization methods on datasets corrupted by different types of noise as a benchmark. Our proposed LR method shows improved robustness while achieving comparable results to BP on clean data.

## 3 Likelihood Ratio Method for Training Neural Networks

### Gradient estimation using push-out LR

To avoid the chain rule of differentiation, [27] introduces additional noises into the forward feeding of each neuron and employs the push-out LR method to estimate the gradient of the MLP. Given an \(L\)-layer MLP, the input to the \(l\)-th layer is denoted as \(x^{l}\in\mathbb{R}^{d_{l}}\), and the output is given by \[x^{l+1}=\varphi(v^{l}),\quad v^{l}=\theta^{l}x^{l}+b^{l}+z^{l},\quad z^{l}=(\Sigma^{l})^{\frac{1}{2}}\varepsilon^{l},\quad\varepsilon^{l}\sim\mathcal{N}(0,I),\] where \(\theta^{l}\in\mathbb{R}^{d_{l+1}\times d_{l}}\) and \(b^{l}\in\mathbb{R}^{d_{l+1}}\) are the weight and bias terms, \(\varphi\) is the activation function, and \(z^{l}\in\mathbb{R}^{d_{l+1}}\) is a zero-mean Gaussian noise with covariance matrix \(\Sigma^{l}\). Training an MLP amounts to solving the optimization problem \(\min_{\theta,b,\Sigma}\frac{1}{|\mathcal{X}|}\sum_{x\in\mathcal{X}}\mathbb{E}\left[L(x^{L})\right]\), where \(L\) is the loss function, and a basic approach is to perform gradient descent following the gradient estimation of \(\mathbb{E}\left[L(x^{L})\right]\). By noticing that, conditional on \(x^{l}\), \(v^{l}\) follows \(\mathcal{N}\left(\tilde{v}^{l},\Sigma^{l}\right)\) with density \(f^{l}(\xi)\), where \(\tilde{v}^{l}=\theta^{l}x^{l}+b^{l}\), we can push the weight \(\theta^{l}\) out of the performance function \(L(x^{L})\) and into the conditional density as below: \[\mathbb{E}\left[L(x^{L})\right]=\mathbb{E}\left[\int_{\mathbb{R}^{d_{l+1}}}\mathbb{E}\left[L(x^{L})|\xi,x^{l}\right]f^{l}(\xi)d\xi\right].\] Under mild integrability conditions, the expectation and gradient operators can be exchanged. Thus, the gradient estimation only requires the density function \(f^{l}\) to be differentiable.
By applying the LR technique, it is straightforward to show that \[\nabla_{\theta^{l}}\mathbb{E}\left[L(x^{L})\right]=\mathbb{E}\left[\int_{\mathbb{R}^{d_{l+1}}}\mathbb{E}\left[L(x^{L})|\xi,x^{l}\right]\nabla_{\theta^{l}}\ln f^{l}(\xi)f^{l}(\xi)d\xi\right].\] A direct calculation then yields \[\nabla_{\theta^{l}}\mathbb{E}\left[L(x^{L})\right]=\mathbb{E}\left[L(x^{L})(\Sigma^{l})^{-\frac{1}{2}}\varepsilon^{l}(x^{l})^{\top}\right], \tag{1}\] which implies that the gradient estimation in the whole model can be parallelized neuron-wise, using only the final loss evaluation \(L(x^{L})\) and the local information \(x^{l}\) and \(\varepsilon^{l}\). The gradient estimation for \(b^{l}\) is a corollary of Eq. (1), and the LR method also allows us to optimize the noise magnitude as below: \[\nabla_{\Sigma^{l}}\mathbb{E}\left[L(x^{L})\right]=\frac{1}{2}\mathbb{E}\left[L(x^{L})\left((\Sigma^{l})^{-\frac{1}{2}}\varepsilon^{l}\left(\varepsilon^{l}\right)^{\top}(\Sigma^{l})^{-\frac{1}{2}}-(\Sigma^{l})^{-1}\right)\right],\] which can benefit the representational capacity and robustness of the model [38]. ### Extension to more architectures **Convolutional neural networks**: Consider the convolutional module in a CNN. Denote the input as \(x=(x^{i}_{j,k})\in\mathbb{R}^{c_{\text{in}}\times h_{\text{in}}\times w_{\text{in}}}\), where the superscript represents the channel index. The output \(v=(v^{o}_{j,k})\in\mathbb{R}^{c_{\text{out}}\times h_{\text{out}}\times w_{\text{out}}}\) is given by \[v^{o}_{j,k}=(x*\theta)^{o}_{j,k}+b^{o}+z^{o}_{j,k}=\sum_{i=0}^{c_{\text{in}}-1}\sum_{s=0}^{h_{\theta}-1}\sum_{t=0}^{w_{\theta}-1}x^{i}_{j+s,k+t}\theta^{o,i}_{s,t}+b^{o}+z^{o}_{j,k}, \tag{2}\] where \(\theta=(\theta^{o,i}_{j,k})\in\mathbb{R}^{c_{\text{out}}\times c_{\text{in}}\times h_{\theta}\times w_{\theta}}\) and \(b=(b^{o})\in\mathbb{R}^{c_{\text{out}}}\) are the weight and bias terms, \(z^{o}_{j,k}=\sigma^{o}\varepsilon^{o}_{j,k}\) is the noise injected to perform the LR method, \(\sigma^{o}\) denotes the noise magnitude, and \(\varepsilon^{o}_{j,k}\) is a standard Gaussian random variable. Other modules in the CNN are abbreviated as a loss function \(L\) parameterized by \(\omega\). Next we study the gradient estimation of \(\mathbb{E}\left[L(v;\omega)\right]\) with respect to \(\theta\). Due to weight sharing, each entry \(\theta^{o,i}_{j,k}\) is involved in the computation of the entire \(o\)-th output channel. Therefore, we push \(\theta^{o,i}_{j,k}\) into the conditional density of \(v^{o}\) given \(x^{i}\), i.e., \[\frac{\partial\mathbb{E}\left[L(v;\omega)\right]}{\partial\theta^{o,i}_{j,k}}=\frac{\partial}{\partial\theta^{o,i}_{j,k}}\mathbb{E}\left[\int_{\mathbb{R}^{h_{\text{out}}\times w_{\text{out}}}}\mathbb{E}\left[L(v;\omega)|\xi,x^{i}\right]\ \prod_{s=0}^{h_{\text{out}}-1}\prod_{t=0}^{w_{\text{out}}-1}\frac{1}{\sigma^{o}}\phi\left(\frac{\xi_{s,t}-\tilde{v}^{o}_{s,t}}{\sigma^{o}}\right)d\xi\right],\] where \(\tilde{v}^{o}_{j,k}=(x*\theta)^{o}_{j,k}+b^{o}\). With the LR technique, we can finally obtain the gradient estimation for each convolutional kernel channel as below: \[\nabla_{\theta^{o,i}}\mathbb{E}\left[L(v;\omega)\right]=\mathbb{E}\left[\frac{1}{\sigma^{o}}L(v;\omega)x^{i}*\varepsilon^{o}\right]. \tag{3}\] The formulation (3) is quite computationally friendly and can be implemented with the convolution operator in any standard toolkit. When the kernel tensor is small, generating the output noise in (2) may be costly relative to the number of parameters. 
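As an illustration, here is a minimal PyTorch sketch of the estimator in Eq. (3) for a single (output, input) channel pair; `rest_loss` stands for the remaining modules abbreviated as \(L(v;\omega)\), and all names are illustrative assumptions rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def lr_grad_conv_channel(x_i, v_bar_o, sigma_o, rest_loss, n_copies=500):
    """Monte Carlo estimate of Eq. (3): grad = E[(1/sigma^o) L * (x^i * eps^o)].

    x_i:       (h_in, w_in) input channel
    v_bar_o:   (h_out, w_out) noise-free pre-activation of output channel o
    rest_loss: maps the noisy output channel to the scalar loss L(v; omega)
    """
    grad = 0.0
    for _ in range(n_copies):
        eps = torch.randn_like(v_bar_o)          # eps^o ~ N(0, I)
        L = rest_loss(v_bar_o + sigma_o * eps)   # forward pass with injected noise
        # cross-correlation x^i * eps^o, realized with the standard conv operator;
        # its output has exactly the kernel shape (h_theta, w_theta)
        corr = F.conv2d(x_i[None, None], eps[None, None])[0, 0]
        grad = grad + (L / sigma_o) * corr
    return grad / n_copies
```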
An alternative way to perturb the neuron is to inject noise into the parameters: \[v=x*\hat{\theta}+\hat{b},\quad\hat{\theta}^{o,i}_{j,k}=\theta^{o,i}_{j,k}+\sigma^{o}\varepsilon^{o,i}_{j,k},\quad\hat{b}^{o}=b^{o}+\sigma^{o}\delta^{o},\] where \(\varepsilon^{o,i}_{j,k}\) and \(\delta^{o}\) are independent standard Gaussian random variables. Then, we can derive the gradient estimation by pushing \(\theta^{o,i}_{j,k}\) into the conditional density of \(\hat{\theta}^{o,i}_{j,k}\) given \(\theta^{o,i}_{j,k}\) itself, i.e., \[\frac{\partial\mathbb{E}\left[L(v;\omega)\right]}{\partial\theta^{o,i}_{j,k}}=\frac{\partial}{\partial\theta^{o,i}_{j,k}}\mathbb{E}\left[\int_{\mathbb{R}}\mathbb{E}\left[L(v;\omega)|\xi,\theta^{o,i}_{j,k}\right]\ \frac{1}{\sigma^{o}}\phi\left(\frac{\xi-\theta^{o,i}_{j,k}}{\sigma^{o}}\right)d\xi\right]=\mathbb{E}\left[L(v;\omega)\frac{\varepsilon^{o,i}_{j,k}}{\sigma^{o}}\right]. \tag{4}\] Since the variances of Eqs. (3) and (4) both depend on the noise dimension, the selection between the two can be determined by the total dimensions of the output and the convolutional kernels. **Recurrent neural networks**: The most widely used RNN cells share a common structure, in which the hidden state \(h_{t-1}\) and the input \(x_{t}\) are delivered to a nonlinear map \(\varphi\) after linear transformations. Some RNN variants also have an extra data flow going through \(\varphi\) without any parameter [39]. Denote the \(t\)-th input and hidden state in a generic RNN as \(x_{t}\in\mathbb{R}^{d_{x}}\) and \(h_{t}\in\mathbb{R}^{d_{h}}\), respectively. The \(t\)-th cell state and hidden state are given by \[(h_{t},c_{t})=\varphi(u_{t},v_{t}),\quad u_{t}=\theta^{hh}h_{t-1}+b^{hh}+z_{t}^{hh},\quad v_{t}=\theta^{xh}x_{t}+b^{xh}+z_{t}^{xh},\] where \(\theta^{hh}\in\mathbb{R}^{d_{h}\times d_{h}}\) and \(\theta^{xh}\in\mathbb{R}^{d_{h}\times d_{x}}\) are weight matrices, \(b^{hh},b^{xh}\in\mathbb{R}^{d_{h}}\) are bias terms, \(z_{t}^{hh}=(\Sigma^{hh})^{\frac{1}{2}}\varepsilon_{t}^{hh}\), \(z_{t}^{xh}=(\Sigma^{xh})^{\frac{1}{2}}\varepsilon_{t}^{xh}\), and \(\varepsilon_{t}^{hh}\) and \(\varepsilon_{t}^{xh}\) are independent standard Gaussian random vectors. Other modules in the RNN are abbreviated as a loss function \(L\) parameterized by \(\omega\). The training objective is \(\frac{1}{|\mathcal{X}|}\sum_{x\in\mathcal{X}}\mathbb{E}\left[L(h,c;\omega)\right]\), where \(h=(h_{1},\cdots,h_{T})\) and \(c=(c_{1},\cdots,c_{T})\). Though \(\theta^{hh}\) participates in the computation of every \(u_{t}\), it is hard to make use of their joint conditional density. Thus, we first treat \(\theta^{hh}\) as distinct at each time step, and then sum up the gradient estimates. Denote the conditional density of \(u_{t}\) given \(h_{t-1}\) as \(f_{t}(\xi)\). The gradient estimation can be unrolled as \[\nabla_{\theta^{hh}}\mathbb{E}\left[L(h,c;\omega)\right]=\sum_{t=1}^{T}\nabla_{\theta^{hh}}\mathbb{E}\left[\int_{\mathbb{R}^{d_{h}}}\mathbb{E}\left[L(h,c;\omega)|\xi,h_{t-1}\right]f_{t}(\xi)d\xi\right].\] In the same manner as deriving Eq. (1), we can obtain \[\nabla_{\theta^{hh}}\mathbb{E}\left[L(h,c;\omega)\right]=\mathbb{E}\left[L(h,c;\omega)(\Sigma^{hh})^{-\frac{1}{2}}\sum_{t=1}^{T}\varepsilon_{t}^{hh}h_{t-1}^{\top}\right]. \tag{5}\] The gradient estimates for the other parameters are derived similarly. 
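A minimal NumPy sketch of the estimator in Eq. (5), assuming \(\Sigma^{hh}=\sigma^{2}I\); `cell` stands for one noisy step of the generic structure above and `loss_fn` for the modules abbreviated as \(L(h,c;\omega)\), with all names being illustrative assumptions.

```python
import numpy as np

def lr_grad_rnn_hh(x_seq, h0, cell, sigma, loss_fn, n_copies=500):
    """Estimate grad_{theta^hh} E[L] per Eq. (5), with Sigma^hh = sigma^2 I:
    E[ L(h, c; omega) * (1/sigma) * sum_t eps_t h_{t-1}^T ]."""
    d_h = h0.shape[0]
    grad = np.zeros((d_h, d_h))
    for _ in range(n_copies):
        h, score, states = h0, np.zeros((d_h, d_h)), []
        for x_t in x_seq:
            eps_t = np.random.randn(d_h)          # eps_t^hh ~ N(0, I)
            score += np.outer(eps_t / sigma, h)   # accumulate eps_t h_{t-1}^T
            h = cell(h, x_t, sigma * eps_t)       # one noisy recurrent step
            states.append(h)
        grad += loss_fn(states) * score           # weight by the final loss
    return grad / n_copies
```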
The formulation (5) enables the gradients of the RNN family models to be computed in parallel over input sequences without relying on backpropagation through time, which is also one of the main advantages of Transformers [40] over RNNs. Eq. (5) is derived under a generic structure, and its specific form can be simplified for different cells. Please refer to the appendix for details. **Graph neural networks**: The computation in different GNNs utilizes the matrix representation of graph structures and can be transformed into a combination of matrix multiplications and non-linear mappings, so it is straightforward to train GNNs by LR. Consider the graph structure \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) and a graph convolutional network (GCN) [41] layer, where the input is the matrix of node features \(h\in\mathbb{R}^{|\mathcal{V}|\times d_{\text{in}}}\). GCNs first aggregate local information with degree normalization and then extract output features \(h^{\prime}\) similarly to MLPs. To derive an LR estimator, we can inject noise as follows: \[h^{\prime}=\varphi(v),\quad v=\tilde{D}^{-\frac{1}{2}}\tilde{A}\tilde{D}^{-\frac{1}{2}}h\theta+z,\] where \(\theta\in\mathbb{R}^{d_{\text{in}}\times d_{\text{out}}}\) is the weight matrix, \(\tilde{D}\) and \(\tilde{A}\) are the graph degree and adjacency matrices with additional self-loops, \(\varphi\) is the activation function, and \(z\in\mathbb{R}^{|\mathcal{V}|\times d_{\text{out}}}\) is the noise matrix. Graph attention networks (GATs) [42] introduce the attention mechanism into GNNs and use trainable edge weights instead of the adjacency matrix in information aggregation. Given a GAT layer with single-head attention, the extracted features of the \(i\)-th node and the attention coefficient of the \((i,j)\)-th edge are given by \[v_{i}=h_{i}\omega+\xi_{i},\quad u_{i,j}=\theta(v_{i},v_{j})+\zeta_{i,j}, \tag{6}\] where \(\omega\in\mathbb{R}^{d_{\text{in}}\times d_{\text{out}}}\) and \(\theta\in\mathbb{R}^{2d_{\text{out}}}\) are trainable weights, and \(\xi_{i}\in\mathbb{R}^{d_{\text{out}}}\) and \(\zeta_{i,j}\in\mathbb{R}\) are injected noises. The normalized attention coefficients and output features are computed as follows: \[\alpha_{i,j}=\frac{\exp(\varphi(u_{i,j}))}{\sum_{k\in\mathcal{N}_{i}}\exp(\varphi(u_{i,k}))},\quad h^{\prime}_{i}=\varphi\big{(}\sum_{j\in\mathcal{N}_{i}}\alpha_{i,j}v_{j}\big{)}, \tag{7}\] where \(\mathcal{N}_{i}\) is the neighborhood of the \(i\)-th node. Eq. (7) can be regarded as a non-parameterized operator, so we only need to focus on Eq. (6) and use the push-out LR technique to estimate the gradients. **Spiking neural networks**: Consider an \(L\)-layer SNN with leaky integrate-and-fire neurons. The time series input of the \(l\)-th layer at time step \(t\) is denoted as \(x^{t,l}\in\mathbb{R}^{d_{l}}\). The next membrane potential and spike output are given by \[u^{t+1,l+1}=ku^{t,l+1}(\mathbf{1}-x^{t,l+1})+\theta^{l}x^{t+1,l}+z^{t+1,l},\quad x^{t+1,l+1}=I(u^{t+1,l+1},V_{\text{th}}),\] where \(\theta^{l}\in\mathbb{R}^{d_{l+1}\times d_{l}}\) is the synaptic weight, \(k\) is the decay factor decided by the membrane time constant, \(I\) is the Heaviside neuron activation function with a threshold \(V_{\text{th}}\), and \(z^{t,l}=(\Sigma^{l})^{\frac{1}{2}}\varepsilon^{t,l}\) with \(\varepsilon^{t,l}\) being a standard Gaussian random vector. At every time step, the incoming potentials are integrated in each neuron. 
When the current potential exceeds \(V_{\text{th}}\), the neuron releases a spike signal to the next layer and resets the membrane potential to zero. The spike signal of the last layer at the last time step, namely \(x^{T,L}\), is the final output of the SNN and is passed to the loss function \(L\). Similar to handling the weight sharing over time in RNNs, we can obtain \(\nabla_{\theta^{l}}\mathbb{E}[L(x^{T,L})]=\mathbb{E}[L(x^{T,L})\sum_{t=1}^{T}(\Sigma^{l})^{-\frac{1}{2}}\varepsilon^{t,l}(x^{t,l})^{\top}]\). ### Variance reduction **Layer-wise noise injection**: The LR method relies on the correlation between the noise and the network output to estimate the gradient. When the dimension of the noise is huge, the correlation is too weak to be identified. Since LR does not require BP, we are free to perturb each layer or even each neuron independently and estimate the corresponding gradient, which ensures that the correlation is sufficient to produce meaningful estimates. Intuitively, layer-wise noise injection and gradient estimation can be interpreted as a form of randomized block coordinate descent [43]. **Antithetic variable**: Note that Eq. (1) and the other LR estimators compute the gradient estimate through a Monte Carlo (MC) integration. A simple but effective variance reduction method we can use is the antithetic variates method [44]. Assume we repeatedly evaluate the integrand \(2n\) times to estimate the gradient. We can generate injected noises for the first \(n\) evaluations, and then multiply the noises by \(-1\) for the subsequent \(n\) evaluations. **Quasi-Monte Carlo**: Quasi-Monte Carlo (QMC) [45] generates noise and evaluates the integrand using low-discrepancy sequences, which can fill the integration region better than the pseudo-random sequences used in standard scientific computing toolkits. [46] points out that QMC performs better than MC when the integrand is smooth and the dimension is relatively small. Given the smoothness of neural networks and the layer-wise perturbation mentioned earlier, it is desirable to try QMC in LR estimation. ### Relationships between LR and other methods **LR and other perturbation-based methods**: Consider a neural module with noise injected into its parameter, i.e., \(v=\varphi(x;\theta+z)\), where \(x\) and \(v\) are the input and output, \(\varphi\) is a non-linear operator, and the noise \(z\) follows a density \(f\) defined on \(\Omega\). Denote the rest of the neural network as a loss function \(L\) with parameter \(\omega\). We can obtain the LR estimator by pushing \(\theta\) into the conditional density of \(z\) as below: \[\nabla_{\theta}\mathbb{E}\left[L(v;\omega)\right]=\nabla_{\theta}\mathbb{E}\left[\int_{\Omega}\mathbb{E}\left[L(v;\omega)|\xi,\theta\right]f(\xi-\theta)d\xi\right]=\mathbb{E}\left[-L(v,\omega)\frac{\nabla_{z}f(z)}{f(z)}\right],\] which incorporates several previous works [47; 48; 49; 24] into the framework of the push-out LR method. **LR and reinforcement learning**: We first show that the key theorem in reinforcement learning (RL) can be derived by the push-out LR method. The RL agent is modeled as a parameterized probability distribution \(\pi_{\theta}\), and \(p\) represents the transition kernel of the environment. Denote the action and state at the \(t\)-th time step as \(a_{t}\) and \(s_{t}\), where \(a_{t}\sim\pi_{\theta}(\cdot|s_{t})\) and \(s_{t+1}\sim p(\cdot|s_{t},a_{t})\). 
The reward function is denoted as \(r\), and the cumulative reward is given by \(R(s,a)=\sum_{t=0}^{T-1}r(s_{t+1},a_{t},s_{t})\), which is a function of the state and action sequences. Policy-based RL aims at solving the optimization problem \(\max_{\theta}\mathbb{E}\left[R(s,a)\right]\). In the same manner as deriving Eq. (5), we can treat \(\theta\) as distinct at each step and push it into the conditional density of \(a_{t}\) given \(s_{t}\) to perform the LR technique, i.e., \[\nabla_{\theta}\mathbb{E}\left[R(s,a)\right]=\sum_{t=0}^{T-1}\nabla_{\theta}\mathbb{E}\left[\int_{\mathcal{A}}\mathbb{E}\left[R(s,a)|\xi,s_{t}\right]\pi_{\theta}(\xi|s_{t})d\xi\right]=\mathbb{E}\left[R(s,a)\sum_{t=0}^{T-1}\nabla_{\theta}\ln\pi_{\theta}(a_{t}|s_{t})\right],\] which results in the well-known policy gradient theorem [50]. Not only can we derive the RL gradient using the push-out LR method, but the reverse is also possible. Consider the LR-based RNN training discussed earlier, where the input \(x_{t}\) and hidden state \(h_{t-1}\) can be viewed as the RL state, and the results of the linear transformations \(u_{t}\) and \(v_{t}\) are viewed as the actions. The parameterized part and the rest of the RNN cell correspond to the RL agent and the simulation environment, respectively, and Eq. (5) can be derived directly from the policy gradient theorem. Therefore, RL and deep learning can be unified from the perspective of the push-out LR framework. RL implicitly leverages the advantages of LR to cope with a black-box simulation environment that prevents gradients from being propagated backward. ## 4 Evaluations ### Experiment settings Our study focuses on classification tasks with different modalities using various neural network architectures. For image classification, we experiment with ResNet-5 and VGG-8 on the Cifar-10 dataset [28] and with an SNN on the MNIST [31] and Fashion-MNIST [32] datasets. We use RNNs (RNN, GRU [51], and LSTM [39]) to classify the articles in the Ag-News dataset [29] and use GNNs (GCN and GAT) to classify the graph nodes in the Cora dataset [30]. We compare two variants of our method, LR with noise injected on the logits (LR-L) and LR with noise injected on weights and logits in a hybrid manner (LR-WL), against two baselines (BP and ES). The supplementary materials provide the network structures and training details. ### Numerical results on convolutional neural networks **Evaluation criteria**: We evaluate all the methods for CNNs on the following performance criteria: 1) Task performance: the classification accuracy on the benign samples (Ori.); 2) Natural noise robustness: the classification accuracy on the naturally corrupted dataset, where we adopt Gaussian, uniform, and Poisson noises; 3) Adversarial robustness: the classification accuracy on the adversarial examples, which are crafted by FGSM, I-FGSM, and MI-FGSM; 4) Distribution-shift robustness: the classification accuracy on corrupted datasets whose distribution differs from the original one, where we transform the colored images into grey images and apply random masks to the images, respectively, to construct the OOD datasets. For the adversarial attacks, we set the maximum perturbation for each pixel to \(8/255\) and the maximum number of iterations to \(5\) for I-FGSM and MI-FGSM. For all the evaluations, we consider the corruption of the whole test set. **Results**: As shown in Tab. 1, LR-WL achieves performance comparable to BP, with an improvement of \(0.4\%\) for ResNet-5 and a minor drop of \(2.6\%\) for VGG-8. 
Meanwhile, LR-WL brings a significant improvement in adversarial robustness of \(8.1\%\) on average. In ResNet-5, the classification accuracy relies more on the training of the later layers, where adding noise to the logits incurs less cost, so LR-L outperforms ES and achieves performance similar to LR-WL. In VGG-8, the situation is exactly the opposite. The success of LR-WL in CNNs comes from combining the strengths of both noise injection modes. ### Numerical results on recurrent neural networks **Evaluation criteria**: We evaluate all the methods for RNNs on task performance and robustness. For the robustness evaluation, we adopt three types of perturbations: 1) RanMask, in which we randomly mask a fixed ratio of \(90\%\) of the words in the whole article; 2) Shuffle, which randomly shuffles the words in sentences; 3) adversarial attacks, including PWWS and GA. We set the maximum ratio of word substitutions to \(25\%\). Due to the low efficiency of adversarial text attacks, we consider \(200\) examples for the evaluation on adversarial attacks and \(1,000\) for the other perturbations. **Results**: The evaluation results of the RNN family are shown in Tab. 2. Due to the vanishing gradient problem of RNNs on long sentences, the conventional BP method cannot train the neural network well, and it only achieves an accuracy of \(64.7\%\). Without relying on the chain-rule computation, the LR method can mitigate the vanishing gradient problem and improve the performance of the RNN significantly, by \(5.5\%\). For training GRU and LSTM, the LR method achieves comparable performance on the clean dataset. On the other hand, neural networks trained by the LR method achieve better robustness compared with BP, namely \(2.3\%\) under the RanMask corruption, \(2.63\%\) under the Shuffle corruption, and \(4.0\%\) under the PWWS attack. \begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline \multirow{2}{*}{Models} & \multirow{2}{*}{Alg.} & \multirow{2}{*}{Ori.} & \multicolumn{3}{c}{Natural Noise} & \multicolumn{3}{c}{Adversarial Noise} & \multicolumn{2}{c}{Distribution shift} \\ \cline{4-11} & & & Gaussian & Uniform & Poisson & FGSM & I-FGSM & MI-FGSM & Grey & RanMask \\ \hline \multirow{4}{*}{ResNet-5} & BP & 64.4 & 59.4 & 60.7 & 44.2 & 12.1 & 1.1 & 1.7 & 53.5 & 54.1 \\ & ES & 35.1 & 35.1 & 35.0 & 29.5 & 20.8 & 4.1 & 4.9 & 25.3 & 34.3 \\ & LR-L & 62.3 & 61.4 & 61.9 & 50.9 & 26.0 & 6.4 & 9.4 & 55.9 & 55.5 \\ & LR-WL & 64.8 & 64.2 & 64.6 & 55.3 & 26.2 & 6.7 & 9.4 & 58.2 & 59.1 \\ \hline \multirow{4}{*}{VGG-8} & BP & 79.8 & 69.4 & 76.4 & 37.8 & 32.4 & 1.2 & 0.9 & 67.8 & 61.9 \\ & ES & 65.8 & 63.7 & 65.4 & 35.6 & 25.7 & 2.7 & 2.7 & 61.9 & 60.5 \\ & LR-L & 58.7 & 50.6 & 56.2 & 30.7 & 27.9 & 11.5 & 10.5 & 48.9 & 47.6 \\ & LR-WL & 77.3 & 72.7 & 76.7 & 42.9 & 34.6 & 2.8 & 3.5 & 71.4 & 64.9 \\ \hline \hline \end{tabular} \end{table} Table 1: Results of ResNet-5 and VGG-8. \begin{table} \begin{tabular}{c c c c c c c c c c} \hline \hline \multirow{2}{*}{Attack} & \multicolumn{3}{c}{RNN} & \multicolumn{3}{c}{GRU} & \multicolumn{3}{c}{LSTM} \\ \cline{2-10} & BP & ES & LR & BP & ES & LR & BP & ES & LR \\ \hline 
Ori. & 64.4 & 65.9 & 70.9 & 88.2 & 84.6 & 88.9 & 89.4 & 85.7 & 89.4 \\ RanMask & 53.4 & 53.4 & 56.7 & 63.9 & 59.2 & 65.5 & 60.1 & 56.2 & 62.1 \\ Shuffle & 62.7 & 63.2 & 69.0 & 87.1 & 80.2 & 88.3 & 88.9 & 82.6 & 89.3 \\ PWWS & 50.0 & 36.5 & 54.0 & 58.5 & 32.0 & 63.0 & 59.0 & 31.5 & 62.5 \\ GA & 37.0 & 23.5 & 40.5 & 56.5 & 20.0 & 60.5 & 59.5 & 21.0 & 62.0 \\ \hline \hline \end{tabular} \end{table} Table 2: Results of the RNN family. ### Numerical results on graph neural networks **Evaluation criteria**: We evaluate all the methods for GNNs on task performance and robustness. For the robustness evaluation, we adopt two adversarial attacks, R.A. and DICE. For both attacks, we perturb from \(10\%\) to \(100\%\) of the original number of edges in \(10\%\) increments and report the average results. For all the evaluations, we consider the corruption of the whole test set. **Results**: As shown in Tab. 3, unlike ES, LR achieves performance comparable to BP, with a gap of \(4.8\%\) for GCN and \(0.4\%\) for GAT on the clean dataset. LR further brings an average improvement of \(4.3\%\) for GCN and of \(5.1\%\) for GAT in model robustness. Moreover, training GAT with LR does not require any additional copies of the data, so its computational cost is similar to that of BP. \begin{table} \begin{tabular}{c c c c c c c} \hline \hline \multirow{2}{*}{Attack} & \multicolumn{3}{c}{GCN} & \multicolumn{3}{c}{GAT} \\ \cline{2-7} & BP & ES & LR & BP & ES & LR \\ \hline Ori. & 80.4 & 57.3 & 75.6 & 82.2 & 34.4 & 81.8 \\ R.A. & 62.5 & 48.2 & 68.9 & 70.4 & 30.1 & 77.3 \\ DICE & 63.2 & 45.7 & 65.4 & 71.3 & 28.9 & 74.6 \\ \hline \hline \end{tabular} \end{table} Table 3: Results of the GNN family. ### Numerical results on spiking neural networks **Evaluation criteria**: We evaluate the performance of the different optimization methods by task performance and adversarial robustness. We use FGSM, I-FGSM, and MI-FGSM to craft the adversarial examples on the MNIST and Fashion-MNIST datasets, where the maximum perturbation for each pixel is \(0.1\) for all of the attack methods, the maximum number of iterations is \(15\), and the step size is \(0.01\) for I-FGSM and MI-FGSM. For all the evaluations, we consider the corruption of the whole test set. **Results**: As shown in Tab. 4, ES fails in training the SNN, and the LR method achieves a higher classification accuracy on both clean and adversarial datasets. Specifically, LR has an average improvement of \(1.55\%\) on the two clean datasets and of \(44.3\%\) on adversarial examples. The activation function in the SNN is discontinuous, which hinders the application of the chain-rule-based BP method, so the BP baseline adopted here relies on an approximate computation. The unbiasedness of the LR gradient estimation leads to superior results. Table 4: Results of SNN. ### Ablation study We study the effect of the perturbation on different parts of the neurons, the initialization of the noise magnitudes \(\sigma\), and the variance reduction techniques. In our experiments, we use MC sampling to generate the noise unless stated otherwise. We conduct experiments using ResNet-5 on the Cifar-10 dataset for the ablation study. We use our LR method to estimate the gradient and compare it with that calculated by BP. We compute the cosine similarity to evaluate the performance. A cosine similarity closer to 1 means a more accurate gradient estimation. **Impact of the perturbation on weights and logits**: We present the gradient estimation performance when perturbing the logits only in Fig. 1 (a), the weights only in Fig. 1 (b), and weights and logits in a hybrid manner in Fig. 1 (c). 
Compared to Fig. 1 (b), Fig. 1 (a) shows that, with the same number of copies, the estimated gradients of the last several layers, including the last two convolutional layers and the linear layer, are more accurate when perturbing the logits rather than the weights. On the other hand, when perturbing the weights, though the estimated gradient for the linear layer is less accurate, it has a higher cosine similarity with that calculated by BP for the first several layers, including the first and second convolutional layers. Motivated by this phenomenon, we combine the two perturbation strategies in a hybrid scheme, injecting the noise on the weights for the first two layers and on the logits for the others. As shown in Fig. 1 (c), the gradient estimation performance for all five layers improves significantly compared to perturbing only the weights or only the logits. Figure 1: Ablation study on the effect of the perturbation on different parts of the neurons, the initialization of the noise magnitude \(\sigma\), and the variance reduction techniques. **Impact of the initialization for \(\sigma\)**: Following the design of the aforementioned hybrid noise injection mode, we further study the effect of the initialization of \(\sigma\). From left to right in Fig. 1 (d)-(f), we perform LR with \(\sigma\) set to \(1\times 10^{-1}\), \(1\times 10^{-2}\), and \(1\times 10^{-3}\), respectively. The optimal choice of \(\sigma\) varies for different noise injection modes. In this network architecture, a larger \(\sigma\) is more suitable for gradient estimation using noise injection on the logits, whereas a smaller one is better when injecting noise on the weights. Improper initialization of \(\sigma\) can result in low cosine similarities between the gradient estimated by LR and that calculated by BP, which would lead to reduced training efficiency. **Impact of the variance-reduction techniques**: In Fig. 1 (g)-(i), we study the effect of the three proposed variance-reduction techniques. Without the layer-wise noise injection, it is difficult to identify the correlation between the noise in each layer and the loss evaluation, which also makes it challenging to select an appropriate initial \(\sigma\) for each layer. As shown in Fig. 1 (g), it requires a much larger number of copies to achieve comparable performance on the first two and last layers, and fails to efficiently estimate the gradients of the remaining layers. From Fig. 1 (h), it can be seen that without the antithetic variable, there are low similarities between the gradients estimated by LR and those computed by BP for all convolutional layers, even with a large number of copies. In Fig. 1 (i), we apply QMC to generate the noise for gradient estimation. Although the results are close to those of MC, LR allows neuron-level parallelism, which lowers the dimension of the MC integration and implies that QMC may bring a more significant improvement. 
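For reference, a minimal NumPy sketch of the antithetic sampling and the cosine-similarity check used throughout this ablation; `single_copy_estimate` stands for any of the LR estimators above evaluated with one injected noise, and all names are illustrative.

```python
import numpy as np

def antithetic_lr_grad(single_copy_estimate, noise_shape, n_pairs=250):
    """Average an LR estimator over noise pairs (eps, -eps) to reduce variance."""
    grads = []
    for _ in range(n_pairs):
        eps = np.random.randn(*noise_shape)
        grads.append(single_copy_estimate(eps))    # evaluation with +eps
        grads.append(single_copy_estimate(-eps))   # antithetic counterpart
    return np.mean(grads, axis=0)

def cosine_similarity(g_lr, g_bp):
    """Agreement between the LR estimate and the BP gradient (1 = same direction)."""
    g_lr, g_bp = g_lr.ravel(), g_bp.ravel()
    return float(g_lr @ g_bp / (np.linalg.norm(g_lr) * np.linalg.norm(g_bp)))
```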
## 5 Conclusion In our work, we extend the LR method for gradient estimation to a broader range of neural network architectures, including CNNs, RNNs, GNNs, and SNNs. LR completely removes the need for BP during training, thus eliminating the dependency on recursive gradient computations. With variance reduction techniques, our method achieves comparable performance to the BP method on clean data and exhibits improved robustness, particularly in the presence of adversarial attacks. Moreover, we also discuss the relationship between LR and other perturbation-based methods and unify RL and deep learning from a bidirectional perspective of the push-out LR framework. We believe that our method has the potential to be a valuable addition to the arsenal of methods available for training neural networks and look forward to further exploring its applications in future work.
2303.09728
The Cascaded Forward Algorithm for Neural Network Training
The backpropagation algorithm has been widely used as a mainstream learning procedure for neural networks in the past decade, and has played a significant role in the development of deep learning. However, there exist some limitations associated with this algorithm, such as getting stuck in local minima and experiencing vanishing/exploding gradients, which have led to questions about its biological plausibility. To address these limitations, alternative algorithms to backpropagation have been preliminarily explored, with the Forward-Forward (FF) algorithm being one of the most well-known. In this paper we propose a new learning framework for neural networks, namely the Cascaded Forward (CaFo) algorithm, which, like FF, does not rely on BP optimization. Unlike FF, our framework directly outputs label distributions at each cascaded block, which does not require the generation of additional negative samples and thus leads to a more efficient process at both training and testing. Moreover, in our framework each block can be trained independently, so it can be easily deployed into parallel acceleration systems. The proposed method is evaluated on four public image classification benchmarks, and the experimental results illustrate a significant improvement in prediction accuracy in comparison with the baseline.
Gongpei Zhao, Tao Wang, Yidong Li, Yi Jin, Congyan Lang, Haibin Ling
2023-03-17T02:01:11Z
http://arxiv.org/abs/2303.09728v3
# The Cascaded Forward Algorithm for Neural Network Training ###### Abstract The backpropagation algorithm has been widely used as a mainstream learning procedure for neural networks in the past decade, and has played a significant role in the development of deep learning. However, there exist some limitations associated with this algorithm, such as getting stuck in local minima and experiencing vanishing/exploding gradients, which have led to questions about its biological plausibility. To address these limitations, alternative algorithms to backpropagation have been preliminarily explored, with the Forward-Forward (FF) algorithm being one of the most well-known. In this paper we propose a new learning framework for neural networks, namely the **C**ascaded **F**orward (**CaFo**) algorithm, which, like FF, does not rely on BP optimization. Unlike FF, our framework directly outputs label distributions at each cascaded block, which does not require the generation of additional negative samples and thus leads to a more efficient process at both training and testing. Moreover, in our framework each block can be trained independently, so it can be easily deployed into parallel acceleration systems. The proposed method is evaluated on four public image classification benchmarks, and the experimental results illustrate a significant improvement in prediction accuracy in comparison with the baseline. Our code is available at: [https://github.com/Graph-ZKY/CaFo](https://github.com/Graph-ZKY/CaFo). ## 1 Introduction The backpropagation (BP) algorithm [1] is a powerful technique that has proven to be effective for training deep neural networks on a wide range of tasks, including image classification [2], semantic segmentation [3] and object detection [4]. The algorithm's ability to adjust the weights based on the error between the prediction and the ground truth allows the network to learn and improve over time, making it a cornerstone of deep learning and artificial intelligence. Nevertheless, despite its effectiveness, BP also suffers from several limitations in practical applications. These limitations include the problems of local minima [5], vanishing/exploding gradients [6], overfitting [7], slow convergence and non-convex optimization [8], which may negatively impact the training process. Additionally, BP relies on a complete understanding of the computations performed during the forward pass in order to correctly calculate the derivatives, making it difficult to generalize to black-box systems whose internal workings are not transparent. Therefore it is important to be aware of these limitations and to consider alternative algorithms for training deep neural networks. Due to the aforementioned limitations and the apparent difference in mechanisms between deep neural networks and real cortical neurons, some researchers [9] have raised concerns about the biological plausibility of backpropagation, questioning whether the brain implements BP and whether it has some other way of getting the gradients needed to adjust the weights on connections. This has prompted researchers to search for alternative algorithms to train deep neural networks. One such alternative is the Forward-Forward (FF) algorithm, which replaces the forward and backward passes of BP with two forward passes that work on positive and negative data, respectively, with opposite optimization objectives. FF has been shown to be a potential alternative to BP, owing to its simplicity and flexibility in network architecture design. 
It makes some preliminary investigations in this area, and there is still a lot of room for further research and in-depth exploration. In this paper we present a flexible and effective learning framework for neural networks, namely the **C**ascaded **F**orward (**CaFo**) algorithm, which offers a promising alternative to the traditional BP algorithm. Our CaFo framework consists of multiple cascaded neural blocks, with a layer-wise predictor attached to each block. Each neural block is composed of one or more convolutional layers, pooling layers, normalization layers, activation layers, etc., whose aim is to project the input feature map into a new feature space. Note that the parameters of all neural blocks are randomly initialized and fixed across both training and testing, while only the attached layer-wise predictors are trainable. Each layer-wise predictor takes the feature map computed by the corresponding neural block as input and outputs the prediction for the task, and it is trained independently, without backpropagation, according to the errors between the output and the ground truth. During the test period, the final prediction is a combination of the predictions from all predictors. Roughly speaking, instead of performing backpropagation using the chain rule, the CaFo network just performs forward passes to directly estimate the prediction errors and updates the parameters in situ. Overall, our algorithm is a step forward on the basis of the FF algorithm and offers several advantages. Firstly, it eliminates the need for negative sampling, thereby reducing the impact of sampling quality on model performance and increasing stability. Secondly, our method can directly output the prediction of a multi-class distribution rather than a simple goodness estimation, and it is thus more suitable for multi-class prediction tasks. Finally, although the neural blocks are cascaded, the predictor attached to each block is trained independently without the need for the training results from prior blocks, so the CaFo algorithm can be easily deployed into parallel acceleration systems. For evaluation we test our algorithm on four public image classification benchmarks, in comparison with FF, the state-of-the-art non-backpropagation algorithm. The experimental results show that our CaFo algorithm exhibits effectiveness and efficiency on the image classification task, and outperforms FF by a significant margin. ## 2 Related Work ### Backpropagation Algorithm The backpropagation algorithm is a widely used training algorithm for deep neural networks, the goal of which is to optimize the parameters of the network by minimizing the prediction error between the network's output and the ground truth. BP uses the gradient descent algorithm for optimization. The gradient of the loss function is calculated using the chain rule [10] of differentiation, where the error is propagated backwards through the network and the parameters are updated in the direction of the negative gradient. BP is favored for its ease of implementation, computational efficiency, and effectiveness on a variety of problems. However, it also suffers from certain limitations that have been deeply studied [11; 12; 13; 14; 15; 16; 17; 18; 19]. Despite these efforts, problems with BP still arise unpredictably in practice, highlighting the need for a deep understanding of deep learning and experience in model tuning. Due to the limitations mentioned above, some recent studies [9; 20; 21] have even raised doubts about the biological plausibility of BP. 
### Forward-Forward Algorithm The recent doubt raised by Hinton [9] about the biological plausibility of BP has led to the introduction of the Forward-Forward algorithm as an alternative. FF replaces the forward and backward passes of BP with two forward passes that work on positive and negative data with opposite optimization objectives. FF has been validated as an effective alternative to BP in some tasks, due to its simplicity and flexibility with respect to the network architecture and its ability to handle non-differentiable components. Nevertheless, some potential limitations of FF make it hard to integrate into existing learning systems, including the high computational demand of the two forward passes, the strong dependence on positive and negative sampling strategies, and the rough approximation of the optimization objective. More recently, an ingenious learning process, the predictive forward-forward (PFF) process [22], proposed as a generalization of FF, combines the forward-forward algorithm and predictive coding into a robust neural system. It simultaneously learns a representation and a generative model in a biologically-plausible manner, providing a promising brain-inspired form of forward-only learning. Different from the FF algorithm, our CaFo network requires only one forward pass for each training step, eliminates the need for negative sampling, and updates the parameters directly according to the errors between the predicted output and the ground truth. It simplifies both the training and the testing process, and gains a significant improvement in prediction accuracy. ## 3 The Proposed Method In this section, we describe the details of the proposed CaFo framework. Following the work of FF, we apply this method to the task of image classification, a well-established challenge in computer vision. To ensure a clear and consistent presentation throughout the paper, we first provide an overview of the notations and mathematical symbols used in our analysis. This is followed by an elaboration of the overall architecture of the CaFo framework. Finally we describe the training and inference processes of the proposed approach. ### Notations To ensure clarity and consistency in our presentation, we provide a brief overview of the notations used in this paper. The scalars used in our analysis are represented by italic lowercase letters (e.g., \(m\), \(n\)), while the vectors used in our method are represented by bold lowercase letters (e.g., \(\mathbf{x}\), \(\mathbf{y}\)). All vectors in this paper are assumed to be column vectors. Bold uppercase letters (e.g., \(\mathbf{W}\), \(\mathbf{Z}\)) are used to represent matrices. Furthermore, the subscripts and superscripts in our notations represent indices and exponents, respectively. In the rest of the paper, we provide more detailed definitions and explanations of the notations as needed. ### Pipeline Overview The pipeline of the proposed CaFo framework is depicted in Figure 1 and consists of multiple repeated stacks of two key components: **Neural Block:** The neural block, abbreviated as _block_ for convenience in the rest of the paper, is comprised of multiple fundamental units, such as convolutional layers, pooling layers, normalization layers, activation layers, etc. Its architecture depends on the specific task; for image classification in our experiments it consists of a convolutional layer followed by a ReLU activation function, a max-pooling layer and a batch normalization layer. 
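As a concrete illustration, a minimal PyTorch sketch of one such block is given below; the \(3\times 3\) kernels and the channel widths (32, 128, 512) are borrowed from the experimental settings in Section 5, and freezing the randomly initialized parameters follows the training procedure described next.

```python
import torch.nn as nn

def make_block(in_channels, out_channels):
    """One CaFo neural block: convolution -> ReLU -> max-pooling -> batch norm."""
    block = nn.Sequential(
        nn.Conv2d(in_channels, out_channels, kernel_size=3, padding=1, stride=1),
        nn.ReLU(),
        nn.MaxPool2d(kernel_size=2, stride=2),
        nn.BatchNorm2d(out_channels),
    )
    for p in block.parameters():
        p.requires_grad_(False)   # block parameters stay fixed in training and testing
    return block

# three cascaded blocks as used in the experiments (r = 3, RGB input assumed)
blocks = [make_block(3, 32), make_block(32, 128), make_block(128, 512)]
```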
Multiple blocks with the same or different architectures are cascaded to extract intermediate representations of the images at different scales. It is worth noting that a block can be either parametric or non-parametric, which provides flexibility in its construction. **Predictor:** As illustrated in Figure 1, each block is equipped with a layer-wise predictor that consists of a fully connected layer followed by an activation function (e.g., softmax) if necessary. The predictor takes the intermediate representation extracted by the block as input and outputs a prediction result for the task. Since there is only one fully connected layer, the predictor can be trained _in situ_ without backpropagation. Figure 1: Overall architecture of the CaFo network. Two main components are cascaded in the network: (1) a neural block that extracts a representation from the source data (e.g., images) at a specific scale, and (2) a predictor that takes the intermediate representation extracted by the block as input and outputs the prediction results. The parameters of the predictors can be optimized either in parallel or in sequence by minimizing the loss between the predicted output and the ground truth. ### Training Process In the CaFo network, we randomly initialize the parameters of each block with Kaiming uniform initialization [23] and keep these parameters fixed across both the training and testing processes. The blocks act as filters with fixed template coefficients. The experimental results in Tables 1 and 2 demonstrate that, despite their simplicity, these untrainable blocks can still produce discriminative representations for prediction and achieve better performance in comparison with FF. Only the parameters of each predictor are updated during the training phase. It is worth noting that our method enables both parallel and sequential updates of the predictors, as the intermediate features, extracted by blocks with fixed parameters, can be obtained with only one forward pass. During the training phase, each predictor independently makes its own greedy decision about the class to which the input belongs, eliminating the need for backpropagation to learn a discriminative model. For each predictor, we optimize by minimizing the errors between the prediction and the ground truth. Different measure functions can be adopted in our framework, which lead to different optimization strategies and algorithms. In the experiments we compare three different loss functions: the mean square error loss (MSE), the cross-entropy loss (CE) and the sparsemax loss (SL) [24]. Here we describe the corresponding formulations and optimization algorithms respectively. MSE loss: The optimization objective of the predictor can be expressed as follows: \[\arg\min_{\textbf{W}}\frac{1}{2m}\|\textbf{H}\textbf{W}-\textbf{Y}\|_{F}^{2}, \tag{1}\] where \(m\) denotes the number of training samples, \(\textbf{H}\in\mathbb{R}^{m\times d}\) the \(d\)-dimensional intermediate representation output by the block, \(\textbf{Y}\in\mathbb{R}^{m\times c}\) the one-hot labels of the training samples, \(\textbf{W}\in\mathbb{R}^{d\times c}\) the parameters of the predictor to be optimized, \(\|\cdot\|_{F}\) the Frobenius norm, and \(c\) indicates the number of classes. The closed-form solution for Eq. 1 can be obtained by solving the following equation for **W**: \[\frac{\partial}{\partial\textbf{W}}(\frac{1}{2m}\|\textbf{H}\textbf{W}-\textbf{Y}\|_{F}^{2})=\frac{1}{m}\textbf{H}^{T}(\textbf{H}\textbf{W}-\textbf{Y})=0, \tag{2}\] and the solution of Eq. 1 is:
\[\textbf{W}=(\textbf{H}^{T}\textbf{H})^{-1}\textbf{H}^{T}\textbf{Y}. \tag{3}\] Cross-entropy loss: The optimization objective of the predictor is written as: \[\arg\min_{\textbf{W}}-\frac{1}{m}\sum_{i=1}^{m}\sum_{j=1}^{c}\textbf{Y}_{i,j}\cdot\ln\textbf{P}_{i,j}, \tag{4}\] \[\textbf{P}_{i,j}=\frac{\exp(\textbf{H}_{i,:}\textbf{W}_{:,j})}{\sum_{k=1}^{c}\exp(\textbf{H}_{i,:}\textbf{W}_{:,k})}, \tag{5}\] where \(\textbf{Y}_{i,j}\) is the value at the \(i\)-th row and \(j\)-th column of **Y**, and \(\textbf{P}_{i,j}\) the output of the softmax layer that denotes the confidence level that the \(i\)-th training sample belongs to the \(j\)-th class. \(\textbf{H}_{i,:}\) and \(\textbf{W}_{:,j}\) respectively denote the \(i\)-th row of **H** and the \(j\)-th column of **W**. As the closed-form solution for Eq. 4 is unavailable, we use the gradient descent algorithm to optimize it. To do this, we need to calculate the gradient for **W** according to the Jacobian matrix, and set a fixed step size to optimize Eq. 4 iteratively. The Jacobian matrix is calculated by: \[\textbf{J}=\textbf{P}-\textbf{Y}, \tag{6}\] and the gradient for **W** is computed as: \[\frac{1}{m}\sum_{i=1}^{m}[\textbf{H}_{i,:}]^{T}\otimes\textbf{J}_{i,:}, \tag{7}\] where **J** denotes the Jacobian matrix, and \(\otimes\) the Kronecker product. Sparsemax loss: Different from softmax, the sparsemax function [24] produces sparse output probabilities by enforcing a constraint that the output confidence vector has at most a certain number of non-zero elements. It encourages the model to only assign high probabilities to the most relevant classes, while setting all other probabilities to zero. The sparsemax function achieves this by projecting the input vector onto a simplex, which is a convex polytope whose vertices lie on the coordinate axes. The formulaic expression for sparsemax is as follows: \[\mathrm{sparsemax}(\mathbf{z}):=\arg\min_{\mathbf{p}\in\triangle^{c-1}}\|\mathbf{p}-\mathbf{z}\|^{2}, \tag{8}\] where \(\triangle^{c-1}:=\{\mathbf{p}\in\mathbb{R}^{c}|\mathbf{1}^{T}\mathbf{p}=1,\mathbf{p}\geq\mathbf{0}\}\) is the \((c-1)\)-dimensional simplex. The resulting projection is unique and can be computed efficiently using a sorting algorithm [24]. We also use the gradient descent algorithm to optimize the sparsemax loss objective: \[\arg\min_{\mathbf{W}}\frac{1}{m}\sum_{\mathbf{z}\in\mathbf{Z}}\sum_{k=1}^{c}L_{\mathrm{sparsemax}}(\mathbf{z};k), \tag{9}\] \[L_{\mathrm{sparsemax}}(\mathbf{z};k)=-\mathbf{z}_{k}+\frac{1}{2}\sum_{j\in S(\mathbf{z})}(\mathbf{z}_{j}^{2}-\tau^{2}(\mathbf{z}))+\frac{1}{2}, \tag{10}\] where \(\mathbf{z}\in\mathbb{R}^{c}\) is the output logits, \(\mathbf{Z}=[\mathbf{z}_{1};\mathbf{z}_{2};...;\mathbf{z}_{m}]^{T}\in\mathbb{R}^{m\times c}\) the set of output logits of the \(m\) training samples, \(S(\mathbf{z})\) the support of \(\mathrm{sparsemax}(\mathbf{z})\), and \(\tau(\mathbf{z})=\frac{(\sum_{j\in S(\mathbf{z})}\mathbf{z}_{j})-1}{|S(\mathbf{z})|}\). The Jacobian matrix is calculated by: \[\mathbf{J}=\mathrm{sparsemax}(\mathbf{Z})-\mathbf{Y}, \tag{11}\] and the gradient for \(\mathbf{W}\) is computed using the same formula as Eq. 7. ### Inference Process In the inference phase each predictor outputs a prediction, and the final prediction is determined by combining all the individual predictions. In image classification tasks this is typically done by summing the predictions of all the predictors and choosing the index of the maximum value as the predicted class. 
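To summarize the procedure, here is a minimal NumPy sketch of the MSE variant: each predictor is fitted with the closed-form solution of Eq. 3 (computed via a least-squares solver for numerical stability), and inference sums the per-block predictions. `blocks` is assumed to be a list of fixed feature extractors as above, and all names are illustrative.

```python
import numpy as np

def train_predictors(blocks, X, Y):
    """Fit one least-squares predictor per fixed block: W = (H^T H)^{-1} H^T Y."""
    predictors, H = [], X
    for block in blocks:
        H = block(H)                                  # fixed, untrained forward pass
        Hf = H.reshape(H.shape[0], -1)                # flatten features per sample
        W, *_ = np.linalg.lstsq(Hf, Y, rcond=None)    # closed-form MSE solution
        predictors.append(W)
    return predictors

def predict(blocks, predictors, X):
    """Sum the per-block predictions and return the arg-max class per sample."""
    scores, H = 0.0, X
    for block, W in zip(blocks, predictors):
        H = block(H)
        scores = scores + H.reshape(H.shape[0], -1) @ W
    return scores.argmax(axis=1)
```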
The inference process of FF involves the generation of samples for every label and the computation of their goodness, whereas our method directly provides multi-class label distributions. Our method thus gains higher inference efficiency and is more suitable for multi-class prediction tasks. The detailed optimization algorithm is shown in Algorithm 1. During the training stage, the model generates the prediction block by block at step 3 and step 4, where \(\mathbf{h}_{j}^{i}\in\mathbb{R}^{d}\) and \(\mathbf{\tilde{y}}_{j}^{i}\in\mathbb{R}^{c}\) are the intermediate representation and prediction of the \(i\)-th block for the \(j\)-th training sample. At step 5, the predictors are updated based on a specific loss function (e.g., Eqs. 1, 4, and 9), using the corresponding optimization strategy as described in Section 3.3. The predictors consist of a fully connected layer for MSE, and an additional activation layer is appended for CE and SL. During the inference stage, each predictor outputs a prediction for the test set at step 3 and step 4, where \(\mathbf{\hat{y}}_{j}^{i}\in\mathbb{R}^{c}\) is the prediction of the \(i\)-th block for the \(j\)-th test sample. Then the predictions of the blocks (e.g., \(\mathbf{\hat{Y}}^{i}\in\mathbb{R}^{n\times c}\)) are summed to derive the final prediction \(\mathbf{\hat{Y}}\in\mathbb{R}^{n\times c}\). The predicted class for the \(j\)-th test sample is then calculated at step 7 as: \(\hat{y}_{j}=\arg\max_{k}\mathbf{\hat{Y}}_{j,k}\). ## 4 Discussion ### Relationship to the Forward-Forward Algorithm Both FF and our method are alternative approaches to backpropagation, as highlighted in [9]. They both have advantages over the traditional backpropagation method, such as the ability to handle non-differentiable components and increased flexibility in network architecture. However, the differences between the two methods are still significant. The graphical depiction and comparison of BP, FF, and the proposed CaFo are shown in Figure 2. The diagram style used in the figure is adapted from [22], and we follow the same approach to depict CaFo and highlight the differences between our model and the other two methods. For BP, the model conducts a forward pass to derive the global loss, and then the gradients of the loss function with respect to the weights of the network are computed using the chain rule of calculus. The weights are updated using the obtained gradients and a learning rate hyper-parameter, which controls the size of the step taken in the opposite direction of the gradient. For FF, positive and negative samples are selected and used to calculate a local loss for each layer. Once the local loss of the \(i\)-th layer (i.e., \(\mathbf{L}_{i}\)) is calculated, the weights of the \(i\)-th layer (i.e., \(\mathbf{W}_{i}\)) are updated according to \(\mathbf{L}_{i}\), and after that \(\mathbf{W}_{i}\) is fixed during the subsequent training stage for the \((i+1)\)-th layer. The loss for each layer is calculated in sequence, and the layers of FF are sequentially updated by greedy decisions. Compared with FF, our CaFo network only has one input stream, avoiding the negative sampling process. Although the sequential update of each block in our framework is similar to FF, the local loss in CaFo directly measures the error between the predicted multi-class distribution and the ground truth, rather than a simple goodness estimation. 
In addition, as the weights of each block (i.e., \(\mathbf{G}_{N}\)) are fixed, the predictor (i.e., \(\mathbf{W}_{N}\)) attached to each block is trained independently without the need for the training results from prior blocks, which allows the framework to train the predictors in parallel. ### Relationship to Bagging Bagging (Bootstrap Aggregating) [25] is an ensemble learning technique that combines multiple models to enhance the overall performance of a machine learning algorithm. The basic idea of bagging is to train several individual models on different subsets of the training data, and then combine their outputs by averaging or majority vote. Bagging reduces the variance of the individual models and mitigates overfitting by reducing the impact of noisy data points or outliers. This is achieved by generating multiple random samples of the training data, each of which is used to train a different model. The results are then combined to make a final prediction. Our method can be viewed as a special case of bagging, where the predictors are trained on intermediate features extracted at various scales, rather than on the original images. In other words, our approach does not train individual models on different subsets of the training data, but assigns each individual model a specific intermediate representation of the complete training data, which has the advantage of utilizing the information contained in intermediate features extracted at different scales, thus helping to capture more complex patterns and increase the accuracy of the model. ### Relationship to Co-training Co-training [26] is a popular semi-supervised learning method that trains multiple classifiers on the same dataset with different data views to improve classification accuracy by leveraging unlabeled data. The process involves initializing several classifiers with different feature sets and training them on labeled data subsets. The classifiers then exchange predictions on unlabeled data and retrain on new labeled data subsets using these predictions. This iterative process continues until convergence, with the classifiers refining their feature sets and predictions. As a semi-supervised learning technique, co-training is usually applied in scenarios where labeled data is scarce but a large amount of unlabeled data is available. In contrast, our method is a fully-supervised approach that trains the predictors of all blocks in parallel, allowing for more efficient and stable optimization of the model in supervised learning scenarios. As our method and co-training share the same idea of training multiple modules to improve the overall performance, our approach can be seen as the first stage of a co-training process, and can easily be transformed into a co-training method if an augmentation strategy for the training set is introduced. ## 5 Experiments The aim of this paper is to introduce the CaFo algorithm and to show that it works in relatively small neural networks containing a few million connections. We evaluate the proposed algorithm on four benchmarks in comparison with backpropagation and FF. Future research will focus on investigating the scalability of the proposed approach to large neural networks with a much greater number of connections. ### Datasets and Experimental Settings The experiments are conducted on the MNIST [27], CIFAR-10 [28], CIFAR-100 [28] and Mini-ImageNet [29] datasets, all of which are widely used in evaluating image classification algorithms. 
The standard splits of MNIST, CIFAR-10 and CIFAR-100 are used directly. In the case of Mini-ImageNet, which is often used for evaluating few-shot learning algorithms, all the classes are mixed, and for each class, 500 images are randomly sampled for training and 100 images for testing. Figure 2: Comparison of the BackPropagation (BP) algorithm, the Forward-Forward (FF) algorithm and the proposed Cascaded Forward (CaFo) algorithm. We compare the proposed CaFo algorithm with two baseline algorithms: backpropagation and the FF algorithm. Specifically, for MNIST and CIFAR-10, we report the results of both the original version of FF [9] and our reproduced version. For CIFAR-100 and Mini-ImageNet, we only report the results of our reproduced version, as the source code of the original version is not publicly available. For the proposed CaFo algorithm, we set the number of blocks \(r\)=3. Each block consists of a convolutional layer followed by a ReLU activation function, a max-pooling layer and a batch normalization layer. The convolutional layers of the three blocks have the same kernel size, padding, and stride, which are set to \(3\times 3\), 1, and 1, and their output channels are set to 32, 128, and 512, respectively. As described in Section 3.3, different error measure functions can be adopted in our framework to guide the training. To analyze the effects of the different measure functions, in this section we report the results of three versions of our CaFo algorithm that adopt the MSE (CaFo+MSE), cross-entropy (CaFo+CE) and sparsemax loss (CaFo+SL) as the measure function, respectively. The predictor is a fully connected layer without bias, followed by a softmax layer for CE and SL. For CaFo+CE and CaFo+SL, each predictor is trained for 5000 epochs on MNIST and CIFAR-10, and for 1000 epochs on CIFAR-100 and Mini-ImageNet. All experiments are run on an AMD EPYC 7542 32-Core Processor with an Nvidia GeForce RTX 3090 GPU. ### Performance Comparison The comparison of these algorithms on the four datasets is summarized in Tables 1 and 2. It is obvious that our method achieves overall improvements in test error rate compared with FF. Even the simplest version (CaFo+MSE) outperforms the reproduced FF on all datasets. For FF, we observe a significant discrepancy in its results between the two tables. FF has a low training error when the dataset is easy to discriminate (e.g., CIFAR-10, MNIST), but a high training error when tackling much more complex datasets (e.g., CIFAR-100, Mini-ImageNet). The reason for the underfitting of FF probably lies in the fact that the contrastive loss function based on positive and negative samples may not provide effective guidance. If the training samples for each class are scarce (e.g., CIFAR-10 vs. CIFAR-100) or the samples contain more irrelevant information (e.g., CIFAR-10 vs. Mini-ImageNet), 
Mini-ImageNet), the goodness estimation of positive and negative samples may not effectively help the FF model learn the underlying distribution of each class, because of the uncertain quality of negative samples and the lack of supervision information directly related to the category information. This may lead to poor fitting of the training set.

\begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{**MNIST**} & \multicolumn{2}{c}{**CIFAR-10**} \\ \hline **Error rate (\%)** & Training & Test & Training & Test \\ \hline **BP (from [9])** & - & 1.40 & 2.00 & 39.00 \\ **FF (from [9])** & - & 1.37 & 24.00 & 41.00 \\ **FF (reprodu.)** & 0.57 & 2.02 & 10.36 & 46.03 \\ \hline **CaFo+MSE** & 0.71 & 1.55 & 12.56 & 34.79 \\ **CaFo+CE** & 0.04 & 1.30 & 9.22 & 32.57 \\ **CaFo+SL** & 0.06 & 1.20 & 9.58 & 33.74 \\ \hline \hline \end{tabular} \end{table} Table 1: Error rate on MNIST and CIFAR-10

\begin{table} \begin{tabular}{l c c c c} \hline \hline & \multicolumn{2}{c}{**CIFAR-100**} & \multicolumn{2}{c}{**Mini-ImageNet**} \\ \hline **Error rate (\%)** & Training & Test & Training & Test \\ \hline **FF (reprodu.)** & 29.32 & 78.67 & 27.84 & 91.71 \\ \hline **CaFo+MSE** & 6.06 & 63.94 & 6.66 & 85.50 \\ **CaFo+CE** & 0.06 & 59.24 & 0.01 & 78.20 \\ **CaFo+SL** & 0.02 & 61.96 & 0.02 & 81.47 \\ \hline \hline \end{tabular} \end{table} Table 2: Error rate on CIFAR-100 and Mini-ImageNet

In contrast, our method exhibits a smaller discrepancy between the two kinds of datasets. The training error rate of CaFo is less than 10% in most trials, demonstrating better fitting ability than FF. For the test error rate, the performance gap between the two is still significant, especially on more complex datasets. For fair comparison, the results of CaFo are reported without any regularization trick deployed in the model, and thus the performance of CaFo could still be improved by means of regularization strategies. Comparing the three variants of our method, we find that they obtain overall similar performance. CaFo+MSE has the highest test error and training error on all the datasets, which indicates that MSE may not be good enough for error estimation in comparison with the other two variants that adopt classification-oriented losses. In contrast, both CaFo+CE and CaFo+SL exhibit low and very similar training and test errors on the four datasets, indicating the effectiveness of the gradient descent optimization strategy and the superiority of CE and SL compared with MSE. ### Time Comparison Furthermore, we report the time required for training and testing our method in comparison with FF, as shown in Table 3. For each trial, we report the time required for the entire training process and the test time, which includes the time for the forward pass of the test samples and the time for calculating the predicted category. The experimental settings are the same as those in Section 5.1. Comparing our three variants with FF, we observe that our methods exhibit significant improvements in training efficiency, especially CaFo+MSE, which has a much shorter training time than the other three approaches due to its direct calculation of the closed-form solution without iterative training.
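The closed-form fit behind CaFo+MSE's speed can be sketched as an ordinary least-squares problem on one-hot targets. This is our own illustration, and the small ridge term is an assumption added for numerical stability, not a detail taken from the paper.

```python
import torch

def fit_predictor_mse(feats, labels, n_classes, ridge=1e-3):
    """Closed-form least-squares fit of a bias-free linear predictor.

    feats: (N, D) block features; labels: (N,) integer class labels.
    The ridge term is our own addition for numerical stability.
    """
    targets = torch.nn.functional.one_hot(labels, n_classes).float()  # (N, C)
    gram = feats.T @ feats + ridge * torch.eye(feats.shape[1])        # (D, D)
    weight = torch.linalg.solve(gram, feats.T @ targets)              # (D, C)
    return weight

# usage: logits = test_feats @ fit_predictor_mse(train_feats, train_labels, 10)
```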
CaFo+CE and CaFo+SL also show significant improvements in training efficiency on MNIST, CIFAR-10 and CIFAR-100, while taking more time on Mini-ImageNet. This is because, due to memory limitations, we divide the training sets of CIFAR-100 and Mini-ImageNet into 20 and 200 training batches, respectively, for these two variants, which leads to extra time consumption for data movement. However, in the trials on MNIST and CIFAR-10, where the batch number equals one, we find that the two CaFo variants are trained significantly faster than FF. In terms of test time, our methods demonstrate comparable efficiency on MNIST and CIFAR-10, but significantly shorter test time on CIFAR-100 and Mini-ImageNet. The reason is that our methods directly output the prediction of the multi-class distribution, rather than computing the goodness estimation for each candidate label as in FF. This tremendously improves the test efficiency when the number of categories is relatively large, such as on CIFAR-100 and Mini-ImageNet. ### Investigation of the Number of Blocks In Tables 1 and 2 we fix the number of blocks \(r\)=3 for fair comparison with [9], in which the results of BP and FF with three layers are reported. However, the number of blocks may have diverse effects on different datasets. Here we investigate how the number of blocks influences the performance of our method. In Figure 3 the error rates of CaFo+MSE and CaFo+CE with the number of blocks ranging from 1 to 10 are reported. As CIFAR-10 allows for a maximum of five \(2\times 2\) max-pooling layers with \(stride\)=2, and four are available for MNIST, we remove the max-pooling layer from some blocks to keep the number of max-pooling layers within the upper bound if necessary. We observe that the training error rates (dashed lines) continuously decrease with an increasing number of blocks, indicating an enhancement of fitting ability as more learnable predictors are introduced. However, the test error curves (solid lines) show that the model is more or less affected by overfitting when the number of blocks is large.

\begin{table} \begin{tabular}{l r r r r r r r r} \hline \hline & \multicolumn{2}{c}{**MNIST**} & \multicolumn{2}{c}{**CIFAR-10**} & \multicolumn{2}{c}{**CIFAR-100**} & \multicolumn{2}{c}{**Mini-ImageNet**} \\ \hline **Time (s)** & Training & Test & Training & Test & Training & Test & Training & Test \\ \hline **FF (reprodu.)** & 1655.22 & 0.02 & 2374.30 & 0.04 & 2376.77 & 0.42 & 3668.35 & 2.03 \\ \hline **CaFo+MSE** & 0.62 & 0.06 & 1.21 & 0.07 & 1.40 & 0.08 & 22.55 & 0.32 \\ \hline **CaFo+CE** & 612.22 & 0.05 & 873.56 & 0.07 & 1178.42 & 0.08 & 8101.42 & 0.52 \\ **CaFo+SL** & 837.56 & 0.06 & 1082.45 & 0.09 & 2192.56 & 0.10 & 14128.06 & 0.49 \\ \hline \hline \end{tabular} \end{table} Table 3: Time for training and testing

### Performance of each Predictor To investigate the performance of each predictor, we report the test error rate of each predictor for CaFo+MSE and CaFo+CE on three datasets. As shown in Figure 4, the test error of the final prediction is lower than that of each individual predictor. In addition, we find, somewhat surprisingly, that a deep-layer predictor tends to obtain better performance than a shallow-layer one, demonstrating that the deep features play a more important role in enabling the predictor to make correct classifications. Overall, the results in Figure 4 validate that the combination of all the predictors effectively contributes to the final prediction.
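For completeness, here is a minimal sketch of the inference-time combination evaluated above, reusing the `blocks` and `predictors` from the earlier sketch; summing the per-block softmax outputs is the simple strategy the next paragraph refers to.

```python
import torch

def cafo_predict(x, blocks, predictors):
    """Final CaFo prediction: sum the per-block softmax outputs and
    take the argmax (the simple summing strategy discussed below)."""
    scores = 0.0
    with torch.no_grad():
        for block, predictor in zip(blocks, predictors):
            x = block(x)
            scores = scores + torch.softmax(predictor(x.flatten(1)), dim=1)
    return scores.argmax(dim=1)
```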
Although both FF and our method demonstrate that simply summing each block's prediction (goodness) is good enough for comparable performance, designing more ingenious heuristic combination strategies for these non-backpropagation approaches remains to be studied. We will concentrate on this problem in future work.

Figure 3: Error rates with different numbers of blocks.

Figure 4: Test error rate of each predictor.

## 6 Conclusion In this paper, we propose a new learning framework for deep neural networks that provides a viable alternative to the backpropagation algorithm. The proposed CaFo demonstrates the feasibility of using only a single forward pass, without the need for an additional backward pass, in the neural network training process, and offers significant improvements in prediction accuracy, training efficiency and simplicity compared with FF. Extensive experiments validate its superior performance over the competitive methods on several public benchmarks.
2303.02655
On Modifying a Neural Network's Perception
Artificial neural networks have proven to be extremely useful models that have allowed for multiple recent breakthroughs in the field of Artificial Intelligence and many others. However, they are typically regarded as black boxes, given how difficult it is for humans to interpret how these models reach their results. In this work, we propose a method which allows one to modify what an artificial neural network is perceiving regarding specific human-defined concepts, enabling the generation of hypothetical scenarios that could help understand and even debug the neural network model. Through empirical evaluation, in a synthetic dataset and in the ImageNet dataset, we test the proposed method on different models, assessing whether the performed manipulations are well interpreted by the models, and analyzing how they react to them.
Manuel de Sousa Ribeiro, João Leite
2023-03-05T12:09:37Z
http://arxiv.org/abs/2303.02655v1
# On Modifying a Neural Network's Perception ###### Abstract Artificial neural networks have proven to be extremely useful models that have allowed for multiple recent breakthroughs in the field of Artificial Intelligence and many others. However, they are typically regarded as _black boxes_, given how difficult it is for humans to interpret how these models reach their results. In this work, we propose a method which allows one to modify what an artificial neural network is perceiving regarding specific human-defined concepts, enabling the generation of hypothetical scenarios that could help understand and even debug the neural network model. Through empirical evaluation, on a synthetic dataset and on the ImageNet dataset, we test the proposed method on different models, assessing whether the performed manipulations are well interpreted by the models, and analyzing how they react to them. ## 1 Introduction In this paper, we investigate how to modify a neural network's perception regarding specific human-defined concepts not directly encoded in its input, with the goal of being able to better understand how such concepts affect a model's predictions. Our results suggest that this is possible to do with little labeled data, and without the need to change or retrain the existing neural network model. In the last few decades, artificial neural networks have enabled major advances in multiple fields, from text, image, video and speech processing [15], to medicine [16], economics and finance [1], and many others. Numerous successful neural network-based applications have recently been implemented [1], rendering these models ubiquitous. Despite their popularity, neural networks lack interpretability, as they supply no direct human-interpretable indication of why a given result was provided [13]. This led to the development of multiple methods focused on solving this shortcoming (cf. Section 6). Most popular methods typically focus on pointing out which inputs contributed most to the output, or on substituting the model for one that is inherently interpretable [15]. However, such methods leave to the users the burden of understanding why the provided explanation justifies the output, e.g., users must interpret why a particular set of features, e.g., pixels in an image, and their values, leads to the output. Various user studies [1, 12, 13] show that the explanations given by these methods are often disregarded or unhelpful to end users. One reason is that humans do not typically reason with features at a very low level, such as individual pixels in an image - they typically reason with higher-level human-defined concepts. For example, a useful explanation from the standpoint of a human as to why a network classified a particular picture of a train as being one of a passenger train will probably refer to the fact that the wagons had windows, rather than pointing out specific pixels in the image. Additionally, when humans attempt to determine the causes for some non-trivial phenomena, they often resort to counterfactual reasoning [14], trying to understand how different scenarios would lead to different outcomes. This approach seems also helpful for interpreting artificial neural networks, as it emphasizes the causes - what is changed - and the effects - what mutates as a result of the changes. For example, to better interpret how a neural network is classifying a particular picture of a train, the user could ask what would have been the output had the picture contained a passenger wagon.
We would like to be able to generate such counterfactual scenarios based on how different human-defined concepts impact a model's predictions. The use of human-defined concepts - the concept of passenger wagon in the previous example - is important as it provides semantics with which to describe the counterfactuals, while ensuring that these concepts are meaningful and understandable for the users of a particular model. There has been work on developing methods to generate counterfactuals for artificial neural networks [1], with some even allowing for generating counterfactual samples with respect to particular attributes [21]. However, to our knowledge, they focus on how particular changes to the input samples might affect a model's output, neglecting what the model is actually perceiving from those samples. Furthermore, current methods to produce counterfactuals are typically complex to implement, often requiring the training of an additional model to produce the counterfactuals [20], and the use of specific neural network architectures, e.g., invertible neural networks [11]. In this work, we address the issue of counterfactual generation at a different level of abstraction, focusing on what a model is perceiving from a given input regarding specific human-defined concepts, and how that affects the model's output. The idea would be to "_convince_" the model that it is perceiving a particular concept - for example a passenger wagon - without producing a specific image containing one, and checking its effect on the model's output. By abstracting away from generating particular counterfactual samples and instead focusing on generating counterfactuals with regard to what a model is perceiving about human-defined concepts, we allow for a better understanding of how the information encoded in a neural network impacts the model's predictions in a human-understandable manner. By manipulating what a model perceives regarding specific concepts of interest, we allow users to explore how the output of a neural network depends on what it is perceiving regarding such concepts. In order for it to be possible to generate counterfactuals based on what a model is perceiving instead of what the model is fed, we need to be able to understand what a neural network model is perceiving and be able to manipulate what the model is perceiving with respect to a specific concept. We find inspiration in the research conducted in the field of neuroscience, where highly selective neurons that seemed to represent the meaning of a specific stimulus can be identified [13]. These neurons, known as _concept cells_, seemed to "provide a sparse, explicit and invariant representation of concepts" [15, 16]. The discovery of these cells contributed to a better understanding of how the brain - a complex and sub-symbolic system - works and how it relates different concepts [1]. Additionally, through the technique of optogenetics, it is possible to manipulate the activity of specific neurons and learn their purpose [1]. Analogously, being able to assign meaning to specific neurons in a neural network could provide us with a better understanding of what information is encoded in a given model, and how it might be associating different concepts. Moreover, given that one typically has access to a neural network's internals, we could manipulate the outputs of the neurons to which meaning has been assigned, and examine how such changes affect the model under different circumstances, thus generating counterfactual scenarios.
Based on the existing evidence that neurons with similar properties to concept cells seem to emerge in artificial neural networks [1], we hypothesize that by identifying which neurons in a neural network act as concept cells for specific human-defined concepts, and by manipulating the activations of such neurons, we should be able to modify a neural network's perception regarding those concepts. In this paper, we propose and test a method to generate counterfactual scenarios for artificial neural networks by modifying what they are perceiving regarding specific human-defined concepts. In Section 2, we present the proposed method, with Section 2.1 discussing how to pinpoint which neurons identify a particular concept in a neural network, and Sections 2.2 to 2.4 addressing different aspects of the proposed method and providing experimental evidence to support our claims. Sections 3 and 4 illustrate possible applications of the proposed method. In Section 5, we apply and test the method in the setting of the ImageNet dataset. In Section 6, we discuss related work, concluding in Section 7. ## 2 A Method to Manipulate a Neural Network's Perception In order to generate counterfactuals regarding what a neural network model is perceiving about human-defined concepts, we propose a method composed of three main steps. For each concept of interest: 1. Estimate how sensitive each neuron is to that concept, i.e., how well its activations separate samples where that concept is present from those where it is absent; 2. Based on the neurons' sensitivity values, select which neurons are considered "concept cell-like", which we will refer to as _concept neurons_; 3. For each concept neuron, compute two activation values, representing, respectively, the output of that neuron for samples where that concept is present and absent. Consider a neural network model \(\mathcal{M}:\mathbb{I}\rightarrow\mathbb{O}\), which is the model being analyzed, where \(\mathbb{I}\) is the input data space and \(\mathbb{O}\) denotes the model output space. For each considered human-defined concept \(\mathsf{C}\), we assume a set of positive samples \(P_{\mathsf{C}}\subseteq\mathbb{I}\) and negative samples \(N_{\mathsf{C}}\subseteq\mathbb{I}\) where the concept is, respectively, present/absent. Let \(a_{i}^{\mathcal{M}}:\mathbb{I}\rightarrow\mathbb{R}\) represent the output of the \(i^{\text{th}}\) neuron of neural network model \(\mathcal{M}\), according to some arbitrary ordering of the neurons. The first step of our method consists in estimating how sensitive each neuron is to each considered concept, i.e., how well the activations of each neuron separate samples where a concept is present from those where it is not. We denote by \(r_{i}^{\mathcal{M}}:2^{\mathbb{I}}\times 2^{\mathbb{I}}\rightarrow[0,1]\) the function which, for a given neuron \(i\) of a model \(\mathcal{M}\), takes a set \(P_{\mathsf{C}}\) and \(N_{\mathsf{C}}\) and provides a value representing how sensitive \(i\) is to concept \(\mathsf{C}\). In Section 2.1, we consider different implementations of this function. The second step is to select, for each concept of interest \(\mathsf{C}\), which neurons of \(\mathcal{M}\) are to be considered as concept neurons for \(\mathsf{C}\). Let \(s_{\mathsf{C}}:\mathbb{R}\rightarrow\{0,1\}\) be a threshold function indicating, for a given concept \(\mathsf{C}\) and sensitivity value, whether that value is high enough for the neuron to be considered a concept cell.
Then, for each concept of interest \(\mathsf{C}\), we determine the set of selected neurons as \(S_{\mathsf{C}}=\{i:s_{\mathsf{C}}(r_{i}^{\mathcal{M}}(P_{\mathsf{C}},N_{\mathsf{C}}))=1,i\in[1,k]\}\), where \(k\) is the number of neurons in \(\mathcal{M}\). In all experiments, we set the threshold value by gradually decreasing it, while testing the model performance on a \(100\)-sample validation set for each concept. Once more than \(3\) neurons have been added by decreasing the threshold without any performance improvement being seen on the validation set, we set the threshold to the best value found. Lastly, for each neuron selected as a concept neuron for some concept \(\mathsf{C}\), we compute two activation values representing, respectively, when that concept is present and absent. We consider a function \(c_{i}^{\mathcal{M}}:2^{\mathbb{I}}\rightarrow\mathbb{R}\), which computes an activation value for the \(i^{\text{th}}\) neuron of \(\mathcal{M}\), and use it to compute an activation value representing the presence of concept \(\mathsf{C}\), \(c_{i}^{\mathcal{M}}(P_{\mathsf{C}})\), and one representing its absence, \(c_{i}^{\mathcal{M}}(N_{\mathsf{C}})\). In Section 2.2, we discuss different implementations of \(c_{i}^{\mathcal{M}}\) and compare how they impact the method's results. Subsequently, when some input \(x\in\mathbb{I}\) is fed to neural network \(\mathcal{M}\), to generate a counterfactual scenario where concept \(\mathsf{C}\) is present or absent, we only need to replace the activation value \(a_{i}^{\mathcal{M}}(x)\) of each neuron \(i\) in \(S_{\mathsf{C}}\) with \(c_{i}^{\mathcal{M}}(P_{\mathsf{C}})\) or \(c_{i}^{\mathcal{M}}(N_{\mathsf{C}})\), respectively. We refer to this step as _injecting_ a concept into a neural network model. To test our method, we consider the Explainable Abstract Trains Dataset (XTRAINS) [11], a synthetic dataset composed of representations of trains, such as those shown in Figure 2. This dataset contains labels regarding various visual concepts and a logic-based ontology describing how each of these concepts is related. Figure 1 shows a subset of the dataset's accompanying ontology, illustrating how different concepts are related to each other. This ontology specifies, for example, that \(\mathsf{TypeA}\) is either \(\mathsf{WarTrain}\) or \(\mathsf{EmptyTrain}\), and that \(\mathsf{WarTrain}\) encompasses those trains having a \(\mathsf{ReinforcedCar}\) and a \(\mathsf{PassengerCar}\). These concepts have a visual representation, e.g., a \(\mathsf{ReinforcedCar}\) is shown as a car having two lines on each wall, such as the first two cars of the leftmost image in Figure 2. In the experiments described below, we adopt a neural network developed in [13], trained to identify trains of \(\mathsf{TypeA}\) - referred to as \(\mathcal{M}_{\mathsf{A}}\). This neural network was trained to achieve an accuracy of about \(99\%\) on a balanced test set of \(10\,000\) images. We also consider the definition of relevancy given in [11] to establish which concepts are related to the task of a given neural network (w.r.t. the dataset's accompanying ontology).
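To make the selection and injection steps concrete, here is a minimal PyTorch sketch (our own illustration, not the authors' code). It assumes the monitored neurons are the outputs of one chosen dense layer, and it uses the median-based implementation of \(c_{i}^{\mathcal{M}}\) that Section 2.2 later identifies as the best-performing choice.

```python
import torch

def concept_values(layer_acts_pos, layer_acts_neg, neuron_idx):
    """Median activation of each selected neuron over positive/negative samples.

    layer_acts_pos/neg: (N, K) activations of the monitored layer on P_C / N_C;
    neuron_idx: indices of the selected concept neurons S_C.
    """
    present = layer_acts_pos[:, neuron_idx].median(dim=0).values
    absent = layer_acts_neg[:, neuron_idx].median(dim=0).values
    return present, absent

def inject(layer, neuron_idx, values):
    """Forward hook that overwrites the concept neurons' activations,
    'convincing' the downstream layers that the concept is present/absent."""
    def hook(module, inputs, output):
        output = output.clone()
        output[:, neuron_idx] = values
        return output
    return layer.register_forward_hook(hook)

# usage (hypothetical names): handle = inject(model.fc2, S_C, c_present)
# y = model(x); handle.remove()
```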
### Identifying _Concept Cell-like_ Neurons In order to be able to modify a neural network's perception regarding specific human-defined concepts, it is first necessary to determine which neurons in a model are identifying these concepts. Based on the evidence that neural networks seem to have information encoded in their internals regarding concepts which are related to their tasks [11], and that neurons with "concept cell-like" properties seem to emerge in artificial neural networks [12], we hypothesize that neural networks have neurons which act as concept cells for concepts related to the tasks they perform. In this section, we investigate how to determine such neurons in a neural network model. We consider a neuron to be "concept cell-like" for some concept \(\mathsf{C}\) if it is possible to separate samples where \(\mathsf{C}\) is present from those where it is absent based on its activations. Figure 3 illustrates the estimated probability density function of two neurons for some \(P_{\mathsf{C}}\) and \(N_{\mathsf{C}}\) sets. The neuron on the left side seems to act as a concept cell for \(\mathsf{C}\), showing well-separated distributions for both sample sets. On the other hand, the neuron on the right side does not act as a concept cell for this concept, given that its activations do not distinguish between both sets of samples. While there are methods, such as TCAV [12] and Mapping Networks [11], which allow for an understanding of whether a given model is sensitive to a certain concept, here we are interested in understanding whether individual neurons are sensitive to that concept. Our goal is not to check whether the model is sensitive to a concept, for which only some neurons might be sufficient, but rather to find all neurons that are sensitive to that concept, so that we can manipulate them. We consider three different implementations of \(r_{i}^{\mathcal{M}}\) to evaluate the adequacy of a neuron \(i\) of \(\mathcal{M}\) as a concept cell for \(\mathsf{C}\): * Spearman rank-order correlation between a neuron's activations and the dataset's labels, computed as \(|1-\frac{6\sum d_{j}^{2}}{n(n^{2}-1)}|\), where \(d_{j}=a_{i}^{\mathcal{M}}(P_{\mathsf{C}j})-a_{i}^{\mathcal{M}}(N_{\mathsf{C}j})\) and \(n=|P_{\mathsf{C}}|\); this assumes \(|P_{\mathsf{C}}|=|N_{\mathsf{C}}|\); * Accuracy of a linear binary classifier (\(b\)) predicting the dataset's labels from a neuron's activations, computed as \(\frac{|\{x:x\in N_{\mathsf{C}}\cap b(a_{i}^{\mathcal{M}}(x))=0\}|+|\{x:x\in P_{\mathsf{C}}\cap b(a_{i}^{\mathcal{M}}(x))=1\}|}{|N_{\mathsf{C}}|+|P_{\mathsf{C}}|}\); * Probability density function intersection of a neuron's activations for \(P_{\mathsf{C}}\) and \(N_{\mathsf{C}}\), computed as \(1-\int min(f_{i}^{P_{\mathsf{C}}}(x),f_{i}^{N_{\mathsf{C}}}(x))\,dx\), where \(f_{i}^{P_{\mathsf{C}}}\) is the estimated probability density function of the activation value of neuron \(i\) for the positive samples in \(P_{\mathsf{C}}\) and \(f_{i}^{N_{\mathsf{C}}}\) the one for the negative samples; this assumes the samples to be independent and identically distributed. A sketch of the third metric is given below.

Figure 1: Subset of the XTRAINS dataset ontology's axioms, describing how the trains' representations are classified.

Figure 2: Sample images of the XTRAINS dataset.

Figure 3: Probability density function of two neurons for samples where a given concept is present/absent.
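A minimal sketch of the third metric (the kernel density estimator and the integration grid are implementation choices of ours, not fixed by the paper):

```python
import numpy as np
from scipy.stats import gaussian_kde

def pdf_intersection_sensitivity(acts_pos, acts_neg, grid_size=512):
    """r_i = 1 - integral of min(f_pos, f_neg): 1 means perfectly separated
    activation distributions, 0 means identical ones."""
    f_pos = gaussian_kde(acts_pos)  # acts_pos: 1-D activations of neuron i on P_C
    f_neg = gaussian_kde(acts_neg)  # acts_neg: 1-D activations of neuron i on N_C
    lo = min(acts_pos.min(), acts_neg.min())
    hi = max(acts_pos.max(), acts_neg.max())
    xs = np.linspace(lo, hi, grid_size)
    overlap = np.trapz(np.minimum(f_pos(xs), f_neg(xs)), xs)
    return 1.0 - overlap
```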
To test our hypothesis that neural networks should typically have concept neurons for concepts which are relevant to their tasks, we compute the three described metrics for four random relevant concepts: EmptyTrain, \(\exists\)has.PassengerCar, \(\exists\)has.ReinforcedCar, and WarTrain; and for four non-relevant concepts: \(\exists\)has.LongWagon, \(\exists\)has.OpenRoofCar, \(\mathsf{LongFreightTrain}\), and MixedTrain. According to our hypothesis, we would expect that for relevant concepts we should be able to find neurons with high values in the described metrics, while for non-relevant concepts we should be unable to do so. For each considered metric, we define a threshold value above which a neuron is considered as a concept neuron for this particular experiment: \(0.85\) for the Spearman rank-order correlation, \(0.95\) for the linear classifier's accuracy, and \(0.9\) for the probability density function intersection. Figure 4 shows the amount of identified concept neurons according to each metric for all relevant concepts. For non-relevant concepts, none of the metrics found any concept neurons. For example, for the concept of EmptyTrain, we identified \(13\) concept neurons based on the Spearman rank-order correlation and \(20\) based on the linear classifier's accuracy. In Section 2.2, we explore the importance of the selected concept neurons for the overall performance of our method. Regarding the amount of selected concept neurons, while it is dependent on the specific threshold value, it should be noted that \(\mathcal{M}_{\mathsf{A}}\) contains about \(2\times 10^{6}\) neurons. Thus, as one might expect, the amount of selected concept neurons is minuscule (about \(0.001\%\)) in comparison with the total amount of neurons in a model. Interestingly, we found all selected neurons to be in the dense part of the model, with the more abstract and complex concepts being focused on the later dense layers of the model. These results seem to confirm our hypothesis, indicating that it is possible to identify neurons in a neural network model whose activations are highly correlated with human-defined concepts, as long as those concepts are relevant to the task being performed by the model. ### Manipulating a Neural Network's Perception We now proceed to test our main hypothesis: that it is possible to manipulate a neural network's perception regarding specific human-defined concepts by manipulating the activations of the concept cells for those concepts. We test this hypothesis by considering whether the output of a model for a given sample, which ought to change had a given concept been identified, indeed changes when that concept is injected. For instance, if we feed an image of a train having a passenger car and no reinforced cars to \(\mathcal{M}_{\mathsf{A}}\), which was trained to identify TypeA trains, we would expect it to output that this is not a TypeA train. However, if we identify the concept neurons for \(\exists\)has.ReinforcedCar - having a reinforced car - and modify their outputs to indicate the presence of a reinforced car, i.e., we _inject_ the concept \(\exists\)has.ReinforcedCar, we expect the model to change its output to identifying a TypeA train. To test our hypothesis in the setting of the XTRAINS dataset, we subsample the dataset into four different \(1000\)-sample sets (\(S1\)-\(S4\)) for each of the four relevant concepts considered in Section 2.1, according to the following criteria: * \(S1\) contains \(\neg\)TypeA samples where the presence of that concept should change \(\mathcal{M}_{\mathsf{A}}\)'s output.
* \(S2\) contains \(\neg\)TypeA samples where the presence of that concept should not change \(\mathcal{M}_{\mathsf{A}}\)'s output. * \(S3\) contains TypeA samples where the absence of that concept should change \(\mathcal{M}_{\mathsf{A}}\)'s output. * \(S4\) contains TypeA samples where the absence of that concept should not change \(\mathcal{M}_{\mathsf{A}}\)'s output. To determine whether the output of \(\mathcal{M}_{\mathsf{A}}\) should change, we reason with the ontology provided with the dataset. As the first step in our method is to compute a neuron's sensitivity, we compare the results obtained by the three metrics described in Section 2.1 when applying our method to each of the four described sets. Figure 5 shows the average number of samples where \(\mathcal{M}_{\mathsf{A}}\)'s output behaved as expected, when computing the concept neurons' activation values \(c_{i}^{\mathcal{M}}\) as the median value of the neurons' activations for both \(P_{\mathsf{C}}\) and \(N_{\mathsf{C}}\), with \(|P_{\mathsf{C}}|=|N_{\mathsf{C}}|=1000\). These results seem to indicate that it is possible to manipulate a neural network's perception regarding different concepts by modifying the activations of specific neurons which seem to identify those concepts. This is evidenced by the high percentage of samples where the model's output changed as expected. Furthermore, these results also indicate that the sensitivity metric based on the intersection of the probability density functions of a neuron for positive and negative samples seems to provide more consistent and better results on average. For this reason, all remaining experiments use this metric to compute the sensitivity of a neuron with regard to a concept (\(s_{\mathsf{C}}\)). Regarding the method to compute the neurons' activations, \(c_{i}^{\mathcal{M}}\), besides using the median value of the neurons' activations as in the previous experiments, we considered two alternatives: computing the mode of the neurons' activations, and computing the values through the method described in [14]. Figure 6 shows the average results obtained by each method. Using the median value achieved the best results, having the highest number of correctly classified samples among the three considered methods for all concepts. We attribute the inferior results obtained by the method in [14] to the fact that the information of each concept neuron is quite redundant, and thus the probe used in the method learns to perform its task from just a small subset of those neurons, which might lead to most neurons having their activations unchanged.

Figure 4: Amount of concept neurons in \(\mathcal{M}_{\mathsf{A}}\) for each concept.

Figure 5: Correctly classified samples by sensitivity metric.
The high percentage of samples where the injection of a given sample leads to the expected change in the model's output constitutes strong evidence that it is possible to modify the perception of a neural network regarding specific human-defined concepts, by manipulating the activations of the neurons responsible for the identification of those concepts. ### Importance of Selected Neurons The first two steps in the proposed method have the goal of identifying which neurons in a model act as concept cells for a given concept of interest. These neurons are then used to manipulate a neural networks' perception regarding that concept. Thus, one might wonder about how important the specific set of selected concept neurons is to the overall performance of the method. This Section addresses the effects of varying the threshold value of function \(s_{\mathsf{C}}\). In order to understand how dependent the methods' performance is on the amount of concept neurons, we measure, for each concept, the ratio of samples where the output of the model is consistent with the manipulation performed. Figure 8, shows the average result obtained for all sets described in Section 2.2 for each amount of considered concept neurons. Our results indicate that if function \(s_{\mathsf{C}}\) is either too restrictive, excluding many neurons which act as concept cells, then the few considered ones might be unable to affect the model and consistently produce the desired effects in the model - this is shown by the significant increase in performance as more concept neurons are initially added. However, if function \(s_{\mathsf{C}}\) too tolerant and starts to include neurons that are not effectively concept cells for a given concept performance starts to degrade - as shown by the sharp decrease in performance for higher number of considered concept neurons. As expected, the performance of the proposed method is heavily dictated by the set of selected concept neurons. These results shows how important it is to adjust the threshold value of function \(s_{\mathsf{C}}\) for each concept of interest, which might be done using a simple grid search with cross-validation. ### Counterfactual's Cost The results so far suggest that it is possible to manipulate a neural network's perception of specific human-defined concepts by modifying the activation of neurons which seem to be identifying those concepts. However, they have assumed that there exists plenty of labeled data available, using a total of \(2000\) samples to compute neurons' sensitivity values and the activation values to be injected for each concept. In this section, we assess whether the proposed method is practical when taking into account the amount of necessary data. To assess the cost of applying this method, and how the amount of available labeled data impacts our capacity to properly modify a networks' perception, we compare the average results obtained in the \(4\) sample sets defined in Section 2.2, for each injected concept, when using different amounts of available data. We keep the amount of samples used to validate the threshold value of \(s_{\mathsf{C}}\) unchanged throughout this comparison, using \(100\) samples as validation. Figure 9, shows the average ratio of correctly classified samples after injection of a given concept, for each considered concept. 
It is observable that even with as few as \(40\) labeled samples - \(20\) samples where the concept is present and \(20\) where it is absent - it is possible to manipulate the neural network's perception for each considered concept with a high degree of success. The method's performance starts to drop significantly when the amount of labeled data is insufficient to properly identify the concept neurons for a given concept. This seems to indicate that as long as there is enough labeled data to identify which neurons in a model are sensitive to the concepts we want to inject, we should be able to manipulate a neural network's perception with regard to those concepts.

Figure 6: Correctly classified samples by method to compute neuron activation.

Figure 7: Percentage of correctly classified samples in each set.

Figure 8: Correct samples by amount of selected concept neurons.

Figure 9: Correct samples by amount of available labeled data.

## 3 Interpreting Neural Networks So far, we have described a method to generate counterfactuals for a neural network model by modifying the activations of neurons which act as concept cells for human-defined concepts. Through this method, it is possible to understand how different concepts influence the output of a model. But what if one wants to understand how a model relates different concepts which are not represented in the model's output? For example, one might be interested in understanding whether \(\mathcal{M}_{\text{A}}\) has learned that EmptyTrains do not have any passenger cars, and that if \(\exists\texttt{has.PassengerCar}\) holds then that train is not an EmptyTrain. Although both of these concepts are not represented in \(\mathcal{M}_{\text{A}}\)'s output, they are both relevant for its task, and one might be interested in assuring that the model is capable of understanding the relationship between both. In order to understand whether a given concept which is not represented in the output of a model is being identified by it, we make use of _mapping networks_ [12]. These are small neural networks trained to identify, from the activations of a neural network model, whether a given human-defined concept was identified. To verify whether \(\mathcal{M}_{\text{A}}\) has learned the described relations between both concepts, we train two mapping networks for the concepts of EmptyTrain and \(\exists\texttt{has.PassengerCar}\) - both achieving an accuracy of more than \(99\%\) on a balanced test set of \(1000\) samples. To test whether \(\mathcal{M}_{\text{A}}\) has learned that empty trains do not have any passenger cars, we inject the concept EmptyTrain on \(1000\) samples of trains with passenger cars. Through the mapping network for \(\exists\texttt{has.PassengerCar}\), we verify that in \(95.6\%\) of the samples, after injection, the concept \(\exists\texttt{has.PassengerCar}\) goes from being present to being absent. This is strong indication that this model has learned that empty trains do not have passenger cars. Similarly, we test whether \(\mathcal{M}_{\text{A}}\) has learned that if a train has a passenger car, then it is not an empty train. We inject the concept \(\exists\texttt{has.PassengerCar}\) on \(1000\) samples of empty trains, and observe that in only \(2\%\) of them the concept EmptyTrain turns absent. This indicates that \(\mathcal{M}_{\text{A}}\) has not learned that if a train has a passenger car, it is not an empty train. A minimal sketch of this relation-probing procedure is given below.
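This is our own illustration, not the authors' code: `inject` is the hook-based helper from the earlier sketch, and `mapping_net` is assumed to be a small binary classifier over the monitored layer's activations.

```python
import torch

def capture(model, layer, x):
    """Run the model and return the monitored layer's activations via a hook."""
    acts = {}
    h = layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    with torch.no_grad():
        model(x)
    h.remove()
    return acts["a"]

def relation_flip_rate(model, layer, samples, concept_idx, c_present, mapping_net):
    """Inject concept A (e.g., EmptyTrain) and measure how often the mapping
    network for concept B (e.g., has.PassengerCar) flips from present to absent."""
    flips, total = 0, 0
    for x in samples:
        before = mapping_net(capture(model, layer, x)).round()
        handle = inject(layer, concept_idx, c_present)  # helper from Section 2 sketch
        after = mapping_net(capture(model, layer, x)).round()
        handle.remove()
        total += int(before.item() == 1)
        flips += int(before.item() == 1 and after.item() == 0)
    return flips / max(total, 1)
```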
This example illustrates a possible application of our method, namely to investigate whether certain relations between user-defined concepts are encoded in the model. But more importantly, the model seems to naturally integrate the injected information, impacting the related concepts, as if the injected concept were truly being perceived by the model. ## 4 Correcting a Neural Network's Misunderstandings When a neural network model provides an incorrect result, it is difficult to interpret the cause of the error. It might often be because the model was unable to correctly perceive part of what is represented in its input, in which case the proposed method might be used to "correct" a model's wrong results. To test our hypothesis, we select all XTRAINS samples where \(\mathcal{M}_{\text{A}}\) provides an incorrect result. We train mapping networks for the concepts \(\exists\texttt{has.ReinforcedCar}\) and \(\exists\texttt{has.PassengerCar}\) - each with an accuracy of more than \(99\%\) on a balanced test set of \(1000\) samples - and use them to identify whether \(\mathcal{M}_{\text{A}}\) provided a wrong result because either of these concepts was not perceived by the model when it should have been. We select all samples where \(\mathcal{M}_{\text{A}}\) provides a false negative result, indicating that a sample input was not of TypeA when it was. From these samples, we observe that in those where the concept of \(\exists\texttt{has.ReinforcedCar}\) was not identified, but present in the input, injecting \(\exists\texttt{has.ReinforcedCar}\) led to the output being corrected in \(96.1\%\) of the samples. Similarly, when considering the samples where \(\exists\texttt{has.PassengerCar}\) was not identified, but present in the input, injecting it led to the output being corrected in \(98.7\%\) of the samples. These results provide evidence that our method is applicable even when a model provides incorrect results, allowing one to test whether different concepts might be responsible for the provided result. This enables users to further understand why their models achieved an incorrect output, and to identify potential flaws in the model or in its training. ## 5 Validation with Real World Data So far, we have only considered a synthetic dataset. We now consider whether we can replicate similar results with real data. To perform this validation, we consider the setting of the ImageNet dataset [16], and examine the pre-trained MobileNetV2 model from [1]. The proposed method is based on the assumption that by identifying which neurons in a model are responsible for identifying a given human-defined concept, we are able to modify the model's perception of that concept by modifying those neurons' activations. Thus, we focus our validation on whether we can identify the neurons responsible for a given human-defined concept, and whether we are able to modify a model's outputs by injecting that concept. In order to test our method in this setting, we selected a random output class: 'Rhodesian ridgeback', which is a specific dog breed, and define the concept of 'Rhodesian ridgeback dog face' by selecting a set of \(98\) images from ImageNet where a Rhodesian ridgeback dog is observable and its face is centered and completely visible, as illustrated by the samples shown in Figure 10. We select this concept based on the assumption that it is a useful concept for the model to classify images of Rhodesian ridgeback dogs.
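As an illustration of how the selection step transfers to this setting, the sketch below collects candidate activations from torchvision's pretrained MobileNetV2; monitoring the 1280-dimensional feature vector feeding the classifier is our own choice for illustration, not a detail specified by the paper.

```python
import torch
from torchvision import models

model = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
model.eval()

def penultimate_features(batch):
    """(N, 1280) activations feeding MobileNetV2's final linear layer."""
    with torch.no_grad():
        f = model.features(batch)
        return torch.nn.functional.adaptive_avg_pool2d(f, 1).flatten(1)

# pos_batch / neg_batch: preprocessed image tensors with / without the concept.
# acts_pos, acts_neg = penultimate_features(pos_batch), penultimate_features(neg_batch)
# scores = [pdf_intersection_sensitivity(acts_pos[:, i].numpy(),
#                                        acts_neg[:, i].numpy())
#           for i in range(acts_pos.shape[1])]  # metric from the Section 2.1 sketch
```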
To test whether by injecting this concept we are able to modify the model's perception, we used all \(329\) samples - from the training and validation sets - where the model outputs a false negative for the Rhodesian ridgeback class considering its top-1 result. The injection of the concept of 'Rhodesian ridgeback dog face' led \(48\%\) of these samples to output the class of Rhodesian ridgeback.

Figure 10: Images of 'Rhodesian ridgeback dog face'.

If we consider the \(38\) samples where the model outputs a false negative considering its top-5
We believe that counterfactual methods would benefit from abstracting away from only generating specific counterfactual samples, to describing in a human-understandable way why those counterfactuals lead to some other particular output based on how they affect the model. Recently, neuro-symbolic approaches have aimed at understanding whether models are sensitive to specific human-defined concepts [17], and how we can leverage the representations of those concepts in a model to provide explanations for its outputs [11], as well as how to produce human-understandable theories representing the classification process of a model [15]. In this paper, we link both approaches through a method that focuses on what a model is perceiving and enables the generation of counterfactuals via manipulation of what is being perceived wrt. human-defined concepts. ## 7 Conclusions In this paper, we proposed a method to generate counterfactuals regarding what a neural network model is perceiving about human-defined concepts, enabling users to inquire the model, and understand how its outputs are dependent on those concepts. To this end, we explored how to identify which neurons in a model are sensitive to a given concept, and how to leverage such neurons to manipulate a model into perceiving that concept. Through experimental evaluation, we show that it is possible manipulate a neural network's perception regarding different concepts, requiring few labeled data to do so, and without needing to change or retrain the original model. We provide a formalization of the proposed method, and illustrate how it can be applied to generate counterfactuals, but also how to use the it to better understand how different concepts are related within a model, and how to inspect and correct a model's misclassifications. We conclude that for concepts that are related to the task of a neural network, it is often the case that there exist a few specific neurons which encode that concept and are responsible for its identification within the model. Modifying the activations of these neurons, similarly to stimulating concept cells in the human brain, allows one to _trick_ the model that it is perceiving that concept. This simple method seems effective in allowing sophisticated manipulations of high-level abstract concepts, enabling users to explore, with little effort, how the model would respond in different scenarios. In the future, we are interested in exploring how to leverage this method to search for learned biases, and identify missing or undesired associations between concepts. ## Acknowledgments The authors would like to acknowledge the support provided by FCT through PhD grant (UI/BD/151266/2021) and strate Figure 11: Images of censored ‘Rhodesian ridgeback dog face’. Figure 12: Model accuracy regarding ‘Rhodesian ridgeback’ class. gic project NOVA LINCS (UIDB/04516/2020).
2302.05213
CEN-HDR: Computationally Efficient neural Network for real-time High Dynamic Range imaging
High dynamic range (HDR) imaging is still a challenging task in modern digital photography. Recent research proposes solutions that provide high-quality acquisition but at the cost of a very large number of operations and a slow inference time that prevent the implementation of these solutions on lightweight real-time systems. In this paper, we propose CEN-HDR, a new computationally efficient neural network by providing a novel architecture based on a light attention mechanism and sub-pixel convolution operations for real-time HDR imaging. We also provide an efficient training scheme by applying network compression using knowledge distillation. We performed extensive qualitative and quantitative comparisons to show that our approach produces competitive results in image quality while being faster than state-of-the-art solutions, allowing it to be practically deployed under real-time constraints. Experimental results show our method obtains a score of 43.04 mu-PSNR on the Kalantari2017 dataset with a framerate of 33 FPS using a Macbook M1 NPU.
Steven Tel, Barthélémy Heyrman, Dominique Ginhac
2023-02-10T12:32:18Z
http://arxiv.org/abs/2302.05213v1
# CEN-HDR: Computationally Efficient neural Network for real-time High Dynamic Range imaging ###### Abstract High dynamic range (HDR) imaging is still a challenging task in modern digital photography. Recent research proposes solutions that provide high-quality acquisition, but at the cost of a very large number of operations and a slow inference time that prevent the implementation of these solutions on lightweight real-time systems. In this paper, we propose CEN-HDR, a new computationally efficient neural network, by providing a novel architecture based on a light attention mechanism and sub-pixel convolution operations for real-time HDR imaging. We also provide an efficient training scheme by applying network compression using knowledge distillation. We performed extensive qualitative and quantitative comparisons to show that our approach produces competitive results in image quality while being faster than state-of-the-art solutions, allowing it to be practically deployed under real-time constraints. Experimental results show our method obtains a score of 43.04 \(\mu\)-PSNR on the _Kalantari2017_ dataset with a framerate of 33 FPS using a Macbook M1 NPU. The proposed network will be available at [https://github.com/steven-tel/CEN-HDR](https://github.com/steven-tel/CEN-HDR) Keywords: High Dynamic Range Imaging, Efficient computational photography ## 1 Introduction In the last decades, applications based on computer vision have become increasingly important in everyday life. Currently, much research is being conducted to propose more reliable algorithms in areas such as object detection, action recognition, or scene understanding. However, the accuracy of these algorithms depends largely on the quality of the acquired images. Most standard cameras are unable to faithfully reproduce the illumination range of a natural scene, as the limitations of their sensors generate a loss of structural or textural information in under-exposed and over-exposed regions of the acquired scene. To tackle this challenge, sensors with a higher dynamic range (HDR) have been proposed [17, 26] to capture more intensity levels of the scene illumination, but these solutions are expensive, preventing high dynamic range acquisition from being readily available. Towards making HDR imaging practical and accessible, software solutions were proposed, based on the emergence of deep learning in computer vision applications. They acquire one Low Dynamic Range (LDR) image and try to expand its dynamic range thanks to a generative adversarial network [12, 37, 8]. Although these methods produce images with a higher illumination range, they are limited in their ability to extend the dynamic range of the input image to that of the acquired scene. A more effective approach is to acquire multiple LDR images with different exposure times and merge them into one final HDR image. Traditional computer vision algorithms [3] allow the acquisition of good-quality static scenes when there is no camera or object motion between images with different exposure times. However, in many use cases, images are captured in a rapid sequence from a hand-held device, resulting in inevitable misalignments between low dynamic range shots. Therefore, scenes with motion introduce new challenges, such as ghost-like artifacts for large motion regions or loss of details in occluded regions.
Following recent advances in the field of deep learning, several methods based on Convolutional Neural Networks (CNNs) were proposed to spatially align input frames to a reference one before merging them into a final HDR image. State-of-the-art deep learning solutions for multi-frame HDR merging [34, 13] tend to be based on a previously proposed method [30] and add additional processing to increase the accuracy of the HDR merging system. As a consequence, the computational cost and execution time are significantly increased, preventing these solutions from being used in lightweight systems and/or in real-time applications. The primary goal of HDR imaging software solutions, which was to make HDR imaging more widely available compared to hardware solutions, is therefore not being achieved at all. Therefore, in this paper, we propose a Computationally Efficient neural Network for High Dynamic Range imaging (CEN-HDR). CEN-HDR is based on an encoder-decoder neural network architecture for generating ghost-free HDR images from scenes with large foreground and camera movements. Unlike previously published solutions, we developed our approach keeping in mind the constraints of inference time and computational cost. 1. We propose CEN-HDR, a novel efficient convolutional neural network based on a new attention mechanism and sub-pixel convolution that overcomes ghost-like artifacts and occluded regions while keeping a low computational cost, allowing our solution to be implemented in real-time on a lightweight system. 2. We demonstrate the efficiency of network compression for the realization of CEN-HDR by applying a knowledge distillation scheme. 3. We perform extensive experiments to determine the best trade-off between accuracy and inference cost, with the main objective of demonstrating the relevance of CEN-HDR. ## 2 Related Works We briefly summarize existing HDR merging approaches into two categories: deep learning-based architectures and efficient learning-based architectures. In the first category, the proposed methods aim to achieve better quality in HDR imaging without taking the inference cost into account. Approaches belonging to the second category seek to optimize the compromise between the quality of the generated images and the computation cost. This leads to the use of new operators in the proposed deep learning architectures. ### Deep learning based HDR merging Using multiple input images for HDR generation leads to the need to align the features of the LDR images to the reference image. The first common method for feature registration is to compute the motion between input features using an optical flow algorithm. Multiple studies [31, 7, 22] used the optical flow algorithm of Liu [11]. In _Kalantari et al._ [7], the input images are aligned by selecting the image with the better pixels as a reference and computing the optical flow between this reference and the other input LDR images. Then, the warped images are fed to a supervised convolutional neural network (CNN) to merge them into an HDR image. However, since the optical flow algorithm initially assumes that the input images have the same exposure time, trying to warp the different exposures with occluded regions can result in artifacts in the final HDR image. To address this issue, _DeepHDR_ [28] proposes an image translation network able to hallucinate information in the occluded regions without the need for optical flow. Moreover, many solutions have been developed to correct the ghost effect introduced by the misalignment of the input images.

Figure 1: Architecture of the proposed CEN-HDR solution. The spatial size of input features is divided by 2 at the encoding step. The attention module allows registering non-reference features to the reference ones. The full spatial size is recovered thanks to the pixel shuffle operation.
Moreover, many solutions have been developed to correct the ghost effect introduced by the misalignment of the input images.

Figure 1: Architecture of the proposed CEN-HDR solution. The spatial size of input features is divided by 2 at the encoding step. The attention module allows registering non-reference features to the reference ones. The full spatial size is recovered thanks to the pixel shuffle operation.

In _AHDRNet_ [30], an attention module is proposed to emphasize the alignment of the input images to the reference image. Input images are then merged using several dilated residual dense blocks. The high performance of the AHDRNet network led it to be used as a base network for other methods. For example, _ADNet_ [13] follows the same main architecture as AHDRNet but adds a pyramidal alignment module based on deformable convolution, allowing a better representation of the extracted features at the cost of a larger number of operations.

### Efficient learning-based HDR merging architectures

To the best of our knowledge, the first architecture that aims to be efficient was proposed by _Prabhakar et al._ [21], by processing low-resolution images and upscaling the result to the original full resolution thanks to a bilateral guided upsampling module. Recently, the HDR community has tended to focus more on efficient HDR image generation [20], no longer aiming only at improving image quality but also at significantly limiting the number of processing operations. This results in efficient solutions such as _GSANet_ [9], which proposes efficient ways to process gamma projections of input images with spatial and channel attention blocks to increase image quality while limiting the number of parameters. _Yu et al._ [36] introduce a multi-frequency lightweight encoding module to extract features and a progressive dilated u-shape block for feature merging. Moreover, the standard convolution operations are replaced by depth-wise separable convolutions, first proposed in [2]: a depth-wise convolution followed by a pointwise convolution, which allows a more efficient use of model parameters. Another efficient method, proposed by _Yan et al._ [33], is a lightweight network based on a U-Net-like [23] encoder-decoder architecture, allowing spatially reduced features to be processed. While these solutions focus on the number of performed operations, their inference time still remains too long for them to be considered real-time solutions.

## 3 Proposed Method

We consider three LDR images \(I_{i}\in\mathbb{R}^{3\times H\times W}\) with their respective exposure times \(t_{i}\) as inputs. The generated HDR image is spatially aligned with the central LDR frame \(I_{2}\), selected as the reference image. To make our solution more robust to exposure differences between inputs, the respective projection of each LDR input frame into the HDR domain is calculated using the gamma encoding function described in Eq. 1, following previous works [13, 18, 30]:

\[H_{i}=\frac{I_{i}^{\gamma}}{t_{i}},\quad\gamma=2.2 \tag{1}\]

where \(H_{i}\in\mathbb{R}^{3\times H\times W}\) is the gamma-projected input. Then, each input is concatenated with its respective gamma projection to obtain \(L_{i}\in\mathbb{R}^{6\times H\times W}\):

\[L_{i}=I_{i}\oplus H_{i} \tag{2}\]

where \(\oplus\) represents the concatenation operation. \(L_{i}\) is then fed to our proposed merging network, whose architecture is detailed in Fig. 1.
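As a minimal illustration of this input preparation, the following PyTorch sketch implements Eqs. 1 and 2; the function and variable names are our own, and we assume LDR values normalized to \([0,1]\).

```python
import torch

def prepare_inputs(ldr_images, exposure_times, gamma=2.2):
    """Project each LDR frame into the HDR domain (Eq. 1) and
    concatenate it with its gamma projection (Eq. 2).

    ldr_images: three tensors of shape (3, H, W), values in [0, 1]
    exposure_times: three scalars t_i
    returns: three tensors L_i of shape (6, H, W)
    """
    inputs = []
    for I, t in zip(ldr_images, exposure_times):
        H = I.pow(gamma) / t          # gamma encoding into the HDR domain
        L = torch.cat([I, H], dim=0)  # channel-wise concatenation
        inputs.append(L)
    return inputs
```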
### Feature encoding

Using high-resolution images as inputs presents an additional challenge in the design of a real-time HDR merging network. To solve such a problem, previous works [29, 33] propose to use a U-Net-like [23] architecture to reduce the spatial size of the features processed by the merging network. However, a too large reduction of the spatial dimensions causes the extraction of coarse features that degrade the final result. We therefore limit the spatial reduction to 2 by using an encoder block composed of 2 sequential convolutions, as described in Eq. 3:

\[F_{i}=conv_{E_{2}}(conv_{E_{1}}(L_{i})) \tag{3}\]

where \(F_{i}\in\mathbb{R}^{32\times\frac{H}{2}\times\frac{W}{2}}\) is the feature map extracted from the encoder for each LDR input. \(conv_{E_{1}}\) and \(conv_{E_{2}}\) are 3x3-convolution layers extracting 16 and 32 feature maps, respectively. The spatial size is divided by 2 by setting a stride of 2 for \(conv_{E_{2}}\).

### Attention module

The final generated HDR image must be aligned with the reference image. To address this requirement, [30] demonstrates the effectiveness of using a spatial attention module after the encoding step.

Figure 2: Illustration of the attention module composed of 2 branches, respectively responsible for spatial attention and channel attention. A sigmoid activation function is used to keep the values between 0 and 1.

Since then, many spatial and channel attention modules have been proposed in the literature that can be integrated into networks to improve their performance. According to the inference cost study done in Table 5, we propose the _Spatial-Channel Reference Attention Module_ (SCRAM), a slightly modified version of the Bottleneck Attention Module (BAM) proposed in [19]. Indeed, while BAM aims to generate a mask of its input feature maps, in our case we want to generate attention maps from the concatenation of reference and non-reference features; these attention maps are then applied to the non-reference features only, resulting in a reduction of the number of feature maps in the proposed attention module. Moreover, batch normalization is not applied in SCRAM. The detailed structure of SCRAM is illustrated in Fig. 2. The non-reference features \(F_{i\neq 2}\) are concatenated with the features of the reference image:

\[X_{i}=F_{i}\oplus F_{2=ref},\quad i\neq 2 \tag{4}\]

where \(X_{i}\in\mathbb{R}^{64\times\frac{H}{2}\times\frac{W}{2}}\) is the input of SCRAM and \(\oplus\) is the concatenation operator. Following [19], SCRAM is composed of two branches respectively responsible for the spatial and channel feature alignment of the non-reference images to the reference ones:

\[A_{i}=\sigma(s(X_{i})+c(X_{i})),\quad i\neq 2 \tag{5}\]

where \(s\) is the spatial attention branch and \(c\) is the channel attention branch. The sum of the features produced by each branch passes through \(\sigma\), a sigmoid activation function, to keep the output values between 0 and 1.

_Spatial attention:_ The objective of the spatial branch is to produce an attention map that keeps the most relevant information for the spatial alignment of the non-reference images to the reference one. To limit the computation, we first reduce the number of feature maps by 3 using a pointwise convolution. With the objective of extracting more global features while keeping the same computation, we then enlarge the receptive field by employing 3 dilated convolution [35] layers with a dilation factor set to 2. The final attention map of size \((1,\frac{H}{2},\frac{W}{2})\) is produced by a pointwise convolution and then expanded across the channel dimension to obtain \(A_{S}\in\mathbb{R}^{32\times\frac{H}{2}\times\frac{W}{2}}\).

_Channel attention:_ This branch aims to perform a channel-wise feature recalibration. We first squeeze the spatial dimensions by applying a global average pooling, which sums out the spatial information to obtain a feature vector of size \((64,1,1)\). A multilayer perceptron with three hidden layers is then used to estimate cross-channel attention. The last activation size is set to 32 to fit the number of channels of the non-reference features \(F_{i}\). Finally, the resulting vector is spatially expanded to obtain the final feature map \(A_{C}\in\mathbb{R}^{32\times\frac{H}{2}\times\frac{W}{2}}\).

The attention features \(A_{i}\) are then used to weight the non-reference features \(F_{i}\):

\[F_{i}^{\prime}=F_{i}\otimes A_{i},\quad i\neq 2 \tag{6}\]

where \(\otimes\) is the element-wise product and \(F_{i}^{\prime}\) are the aligned non-reference features. For the reference features we set \(F_{2}^{\prime}=F_{2}\).
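A possible PyTorch implementation of SCRAM consistent with the description above is sketched below. The hidden widths of the MLP, the use of ReLU activations, and the module/variable names are our assumptions, since the text only fixes the channel reduction factor, the three dilated convolutions, and the output sizes.

```python
import torch
import torch.nn as nn

class SCRAM(nn.Module):
    """Spatial-Channel Reference Attention Module (sketch).
    Input: concatenated reference/non-reference features (64, H/2, W/2).
    Output: attention map (32, H/2, W/2) applied to non-reference features."""
    def __init__(self, in_ch=64, out_ch=32, reduction=3, dilation=2):
        super().__init__()
        mid = in_ch // reduction  # channel reduction by 3 in the spatial branch
        self.spatial = nn.Sequential(
            nn.Conv2d(in_ch, mid, 1),  # pointwise reduction
            nn.Conv2d(mid, mid, 3, padding=dilation, dilation=dilation),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, 3, padding=dilation, dilation=dilation),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, mid, 3, padding=dilation, dilation=dilation),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, 1, 1),      # one-channel spatial attention map
        )
        self.channel = nn.Sequential(  # MLP with three hidden layers
            nn.Linear(in_ch, in_ch), nn.ReLU(inplace=True),
            nn.Linear(in_ch, in_ch), nn.ReLU(inplace=True),
            nn.Linear(in_ch, in_ch), nn.ReLU(inplace=True),
            nn.Linear(in_ch, out_ch),
        )
        self.out_ch = out_ch

    def forward(self, x):                                   # x: (B, 64, H/2, W/2)
        b, _, h, w = x.shape
        a_s = self.spatial(x).expand(b, self.out_ch, h, w)  # expand across channels
        a_c = self.channel(x.mean(dim=(2, 3)))              # global average pooling
        a_c = a_c.view(b, self.out_ch, 1, 1).expand_as(a_s) # expand spatially
        return torch.sigmoid(a_s + a_c)                     # Eq. 5
```

Following Eqs. 4 and 6, the non-reference streams would then be re-weighted as `F_i_aligned = F_i * scram(torch.cat([F_i, F_ref], dim=1))`.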
### Features merging

While most of the computation is usually done in the merging block [13, 18, 30], we propose a novel efficient feature merging block that first focuses on merging the non-reference features. Each feature map \(F_{i}^{\prime}\) produced by the encoder goes through a convolution layer:

\[M_{i}=conv_{M_{1}}(F_{i}^{\prime}) \tag{7}\]

where \(conv_{M_{1}}\) is a 3x3-convolution producing \(M_{i}\in\mathbb{R}^{64\times\frac{H}{2}\times\frac{W}{2}}\). Then we merge the non-reference feature maps by concatenating them and feeding the resulting features to a convolution layer:

\[M_{\text{non-ref}}=conv_{M_{2}}(M_{1}\oplus M_{3}) \tag{8}\]

where \(conv_{M_{2}}\) is a 3x3-convolution and \(M_{\text{non-ref}}\in\mathbb{R}^{64\times\frac{H}{2}\times\frac{W}{2}}\) contains the merged non-reference features. As we emphasize the reference features throughout our network, we merge the reference features with the non-reference features \(M_{\text{non-ref}}\) simply by adding them together, to limit the number of feature maps processed later:

\[M=conv_{M_{4}}(conv_{M_{3}}(M_{2}+M_{\text{non-ref}})) \tag{9}\]

where \(conv_{M_{3}}\) and \(conv_{M_{4}}\) are 3x3-convolutions each producing 64 feature maps, and \(M\in\mathbb{R}^{64\times\frac{H}{2}\times\frac{W}{2}}\) contains features from all LDR input images.

### Features decoding

The role of the decoder is to produce the final HDR image from the features produced by the merging block. At the encoding stage, we divided the spatial dimensions by 2. While the original spatial size is usually recovered using a bilinear upsampling or transposed convolution [14] operation, we propose to use the pixel shuffle operation first proposed in [25]; it can be seen as an efficient sub-pixel convolution with a stride of \(1/r\), where \(r\) is the upscale factor. In our case, we set \(r=2\). As illustrated in Fig. 3, the pixel shuffle layer rearranges elements of a tensor of shape \((C\times r^{2},H,W)\) into a tensor of shape \((C,H\times r,W\times r)\):

\[D=PixelShuffle(M) \tag{10}\]

where \(D\in\mathbb{R}^{16\times H\times W}\) are the resulting upscaled features. The final HDR image is obtained following Eq. 11:

\[HDR=\sigma(conv_{D}(D+S_{2})) \tag{11}\]

where \(S_{2}\) are the reference features extracted by the first convolution layer of our network, \(conv_{E_{1}}\), added to stabilize the training of our network. Finally, the final HDR image is generated by this 3x3-convolution layer \(conv_{D}\), followed by a sigmoid activation function.
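The merging and decoding path can be condensed into the following sketch, where `nn.PixelShuffle` provides the sub-pixel rearrangement of Eq. 10; the layer names, padding choices, and skip-connection argument are our own reading of Eqs. 7-11.

```python
import torch
import torch.nn as nn

class MergeDecode(nn.Module):
    """Merging block (Eqs. 7-9) followed by pixel-shuffle decoding (Eqs. 10-11)."""
    def __init__(self, ch=64):
        super().__init__()
        self.conv_m1 = nn.Conv2d(32, ch, 3, padding=1)      # per-stream projection
        self.conv_m2 = nn.Conv2d(2 * ch, ch, 3, padding=1)  # merge non-ref streams
        self.conv_m3 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv_m4 = nn.Conv2d(ch, ch, 3, padding=1)
        self.shuffle = nn.PixelShuffle(2)                   # (64,H/2,W/2) -> (16,H,W)
        self.conv_d = nn.Conv2d(16, 3, 3, padding=1)        # final RGB projection

    def forward(self, f1, f2, f3, skip_ref):
        # f1, f3: aligned non-reference features; f2: reference features
        m1, m2, m3 = (self.conv_m1(f) for f in (f1, f2, f3))
        m_nonref = self.conv_m2(torch.cat([m1, m3], dim=1)) # Eq. 8
        m = self.conv_m4(self.conv_m3(m2 + m_nonref))       # Eq. 9
        d = self.shuffle(m)                                 # Eq. 10, sub-pixel upscale
        return torch.sigmoid(self.conv_d(d + skip_ref))     # Eq. 11, skip from conv_E1
```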
## 4 Experimental Settings

### Datasets

The CEN-HDR network has been trained using the dataset provided by [7], composed of 74 training samples and 15 test samples. Each sample represents the acquisition of a dynamic scene caused by large foreground or camera motions and is composed of three input LDR images (with EVs of -2.00, 0.00, +2.00 or -3.00, 0.00, +3.00) and a reference HDR image aligned with the medium-exposure image. The network has also been separately trained and tested using the NTIRE [20] dataset, where the 3 LDR images are synthetically generated from the HDR images provided by [4]. This dataset is composed of 1500 training samples, 60 validation samples, and 201 testing samples. The ground-truth images for the testing samples are not provided.

### Loss function

Following previous works [7, 30], the images have been mapped from the linear HDR domain to the LDR domain before evaluating the loss function. In order to train the network, the tone-mapping function has to be differentiable around zero, so the \(\mu\)-law function is defined as follows:

\[T(H)=\frac{\log(1+\mu H)}{\log(1+\mu)},\quad\mu=5000 \tag{12}\]

where \(H\) is the linear HDR image and \(\mu\) the amount of compression.

Figure 3: Illustration of the pixel rearrangement by the pixel shuffle layer for an upscale factor set to \(r=2\) and an input shape of \((4,4,4)\). In the proposed solution, the input shape is \((64,\frac{H}{2},\frac{W}{2})\) and the produced output size is \((16,H,W)\).

To make an efficient network, a network compression method known as knowledge distillation, proposed in [5], has been used. By using knowledge distillation, we assume that the capacity of a large network is not fully exploited, so the objective is to transfer the knowledge of this large teacher network to our lighter network, as described in Eq. 13:

\[\mathcal{L}=\alpha\times\mathcal{L}(T,T_{GT})+(1-\alpha)\times\mathcal{L}(T,T_{Teacher}) \tag{13}\]

where \(\mathcal{L}\) is the \(L_{1}\) loss function, \(T\) is the tone-mapped prediction of our network, \(T_{GT}\) the tone-mapped ground truth provided in the dataset, and \(T_{Teacher}\) the tone-mapped prediction of the large teacher network. We use the HDR-GAN [18] model as the teacher. \(\alpha\) is a trade-off parameter set to 0.2. Moreover, the method proposed by [7] to produce the training dataset focuses mainly on foreground motion. It does not allow the generation of reliable ground-truth images for chaotic motions in the background, such as the movement of tree leaves due to wind, which results in ground-truth images with blurred features that do not reflect reality. This has the effect of producing a greater error when the predicted image contains sharper features than the ground-truth image. Also using a predicted image from a teacher model helps to deal with this data misalignment. The performance obtained by training with and without knowledge distillation is compared in Table 1.

Figure 4: Qualitative comparison of the proposed CEN-HDR solution with other HDR merging methods. The cropped patches demonstrate that the proposed efficient network has capabilities equivalent to the state-of-the-art methods for correcting the ghost effect due to large movement in the scene.
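A small sketch of this training criterion is given below, assuming images normalized to \([0,1]\); `mu_law` implements Eq. 12 and `distillation_loss` implements Eq. 13 with the teacher prediction precomputed. The function names are ours.

```python
import math
import torch
import torch.nn.functional as F

def mu_law(h, mu=5000.0):
    """Differentiable tone mapping from the linear HDR domain (Eq. 12)."""
    return torch.log(1.0 + mu * h) / math.log(1.0 + mu)

def distillation_loss(pred, gt, teacher_pred, alpha=0.2):
    """L1 criterion on tone-mapped images, blending ground-truth and
    teacher supervision as in Eq. 13."""
    t = mu_law(pred)
    return alpha * F.l1_loss(t, mu_law(gt)) \
        + (1.0 - alpha) * F.l1_loss(t, mu_law(teacher_pred))
```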
### Implementation details

The CEN-HDR network has been trained using cropped patches of size \(256\times 256\) pixels with a stride of 128 and evaluated on the full-resolution test images. During training, random augmentations are applied to the cropped patches, such as horizontal symmetry and rotations of 90, 180, or 270 degrees. Training has been done using the Adam optimizer with a batch size of 8. The learning rate is initially set to \(10^{-4}\), kept fixed for 80 epochs, and then decreased by a factor of 0.8 every 20 epochs. The training lasts for 500 epochs.

\begin{table} \begin{tabular}{l|l l l l l} \hline \hline Training method & \(\mu\)-PSNR & PSNR & \(\mu\)-SSIM & SSIM & HDR-VDP2 \\ \hline w/o knowledge distillation & 40.8983 & 40.0298 & 0.9772 & 0.9926 & 62.17 \\ with knowledge distillation & 43.0470 & 40.5335 & 0.9908 & 0.9956 & 64.34 \\ \hline \hline \end{tabular} \end{table} Table 1: Comparison of the proposed CEN-HDR architecture performance with and without knowledge distillation. In the first case, the network is trained with the HDR ground-truth proposed in [7]. In the second case, we also use the prediction of HDR-GAN [18] as a label (Eq. 13). In both cases, the network is trained for 500 epochs using the \(l_{1}\) criterion.

Figure 5: Comparison of the proposed CEN-HDR solution with other HDR merging methods. The X-axis represents the mean runtime using the M1 NPU with an input size of 1280x720 pixels. The Y-axis is the fidelity score on the test images from the [7] dataset. The best solutions tend to be in the upper-left corner. The radius of the circles represents the number of operations: the smaller, the better.

## 5 Experimental Results

### Fidelity performance

In Table 2 and Fig. 4, the proposed CEN-HDR solution is compared against seven lightweight state-of-the-art methods: [24] and [6] are based on patch-registration methods. [7] is based on a sequential CNN; the inputs first need to be aligned using an optical flow algorithm. For [28], the background of each LDR input is aligned by homography before being fed to an encoder-decoder-based CNN. [32] proposes an encoder-decoder architecture with a non-local attention module. [30] is a CNN based on an attention block for feature registration and on multiple dilated residual dense blocks for merging. [18] is the first GAN-based approach for HDR merging with a deep supervised HDR method. Quantitative evaluation is done using objective metrics. The standard peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are computed both directly in the linear domain and after tone mapping by applying the \(\mu\)-law function (Eq. 12). The HDR-VDP2 [15] metric predicts the quality degradation with respect to the reference image. We set the diagonal display size to 24 inches and the viewing distance to 0.5 meters. In addition, PU-PSNR and PU-SSIM are calculated by applying the encoding function proposed in [16] with a peak luminance set to 4000. In Fig. 6, we present the results for 3 test scenes of the NTIRE [20] dataset. While the network can produce images with a high dynamic range, we notice that the high sensor noise present in the input images is not fully corrected in the dark areas of the output HDR images.
Moreover, the motion blur introduced in the dataset produces less sharp characteristics in the output images.

\begin{table} \begin{tabular}{l|r r r r r r r} \hline \hline Method & \(\mu\)-PSNR & PU-PSNR & PSNR & \(\mu\)-SSIM & PU-SSIM & SSIM & HDR-VDP2 \\ \hline Sen et al.[24] & 40.80 & 32.47 & 38.11 & 0.9808 & 0.9775 & 0.9721 & 59.38 \\ Hu et al.[6] & 35.79 & \(-\) & 30.76 & 0.9717 & \(-\) & 0.9503 & 57.05 \\ Kalantari et al.[7] & 42.67 & 33.82 & 41.23 & 0.9888 & 0.9832 & 0.9846 & 65.05 \\ DeepHDR[28] & 41.65 & 31.36 & 40.88 & 0.9860 & 0.9815 & 0.9858 & 64.90 \\ NHDRRNet[32] & 42.41 & \(-\) & \(-\) & 0.9887 & \(-\) & \(-\) & 61.21 \\ AHDRNet[30] & 43.61 & 33.94 & 41.03 & 0.9900 & 0.9855 & 0.9702 & 64.61 \\ HDRGAN[18] & 43.92 & 34.04 & 41.57 & 0.9905 & 0.9851 & 0.9865 & 65.45 \\ CEN-HDR(our) & 43.05 & 33.23 & 40.53 & 0.9908 & 0.9821 & 0.9856 & 64.34 \\ \hline \hline \end{tabular} \end{table} Table 2: Quantitative comparison with lightweight state-of-the-art methods on the Kalantari2017 [7] test samples. PSNR and SSIM are calculated in the linear domain, while \(\mu\)-PSNR and \(\mu\)-SSIM are calculated after \(\mu\)-law tone mapping (Eq. 12). For the compared methods, the results are from [18]. PU-PSNR and PU-SSIM are calculated by applying the encoding function proposed in [16].

In Table 6 we study the effect of the attention module on the performance of the proposed HDR deghosting architecture. SCRAM-C and SCRAM-S respectively correspond to the SCRAM module composed of only the channel attention branch and only the spatial attention branch. The proposed SCRAM achieves a quality similar to that of the spatial attention module proposed in AHDRNet [30], while Table 5 shows that the inference cost of SCRAM is lower.

### Efficiency comparison

As we want to propose an efficient HDR generation method, in Table 3 we compare the computation cost and the inference time of our network with state-of-the-art HDR networks that achieve similar performance on quality metrics. The number of operations and parameters are measured using the script provided by the NTIRE [20] challenge. To evaluate runtimes, all the compared networks are executed on the Neural Processing Unit (NPU) of a MacBook Pro (2021) powered with an M1 chip. The time shown is the average over 500 inference runs after a warm-up of 50 runs. The input size is set to \(1280\times 720\) pixels. The gamma-projection of the LDR inputs (Eq. 1) and the tone mapping of the HDR output are included in the inference time measurement. Note that the background alignment of input frames using homography for [28] is not included. Table 4 compares the number of parameters and operations of the proposed CEN-HDR solution with recent efficient methods [9, 33, 36]. The input size is set to \(1900\times 1060\) pixels, corresponding to the size of the inputs from the dataset proposed by [20]. For the compared methods [9, 33, 36], the measurements are provided by the NTIRE [20] challenge. We could not compare the inference time of the CEN-HDR solution with these three architectures as they were recently proposed and their implementation is not yet available. Fig. 5 compares the trade-off between fidelity to the ground-truth label and runtime of the proposed CEN-HDR solution with other HDR merging methods.

\begin{table} \begin{tabular}{l|r r r r} \hline \hline Method & Num. of params. & Num. of op.
(GMAccs) & Runtime(s) & FPS \\ \hline DeepHDR[28] & 14618755 & 843.16 & 0.1075 & 9.30 \\ AHDRNet[30] & 1441283 & 1334.95 & 0.4571 & 2.18 \\ NHDRRNet[32] & 7672649 & 166.11 & 0.0431 & 23.20 \\ HDR-GAN[18] & 2631011 & 479.78 & 0.2414 & 4.14 \\ CEN-HDR(our) & 282883 & 78.36 & 0.0277 & 36.38 \\ \hline \hline \end{tabular} \end{table} Table 3: Inference cost comparison of the proposed CEN-HDR solution against state-of-the-art lightweight deep learning-based methods. The number of operations and parameters are measured using the script provided by the NTIRE [20] challenge. To measure the inference time, all the compared networks are executed on an M1 NPU. The input size is set to \(1280\times 720\) pixels.

In Fig. 5, our solution stands out as the best option for real-time HDR merging with a high fidelity score.

## 6 Conclusions

In this paper, we propose CEN-HDR, a novel computationally efficient HDR merging network able to correct the ghost effect caused by large object motions in the scene and by camera motion. The proposed lightweight network architecture effectively succeeds in generating real-time HDR images with a dynamic range close to that of the original scene. By integrating knowledge distillation methods into our training scheme, we demonstrate that the majority of the representation capabilities of a large HDR merging network can be transferred into a lighter network, opening the door to real-time HDR embedded systems.

\begin{table} \begin{tabular}{l|r r} \hline \hline Method & Num. of params. & Num. of op. (GMAccs) \\ \hline GSANet[9] & 80650 & 199.39 \\ Yan et al.[33] & 18899000 & 156.12 \\ Yu et al.[36] & 1013250 & 199.88 \\ CEN-HDR(our) & 282883 & 128.78 \\ \hline \hline \end{tabular} \end{table} Table 4: Inference cost comparison of the proposed CEN-HDR solution versus recent efficient merging networks. The number of operations and parameters for [9, 33, 36] and our solution are computed following the method described in [20]. The input size is set to \(1900\times 1060\) pixels.

\begin{table} \begin{tabular}{l|c c|r r r} \hline \hline Method & \multicolumn{2}{c|}{Attention type} & params. & GMAccs & Runtime(s) \\ \cline{2-3} & Spatial & Channel & & & \\ \hline AHDRNet attention [30] & ✓ & & 55392 & 20.772 & 0.0085 \\ EPSANet[38] & ✓ & & 42560 & 15.768 & 0.0111 \\ SK attention [10] & & ✓ & 125984 & 43.104 & 0.0155 \\ Double attention[1] & ✓ & ✓ & 33216 & 12.456 & 0.0101 \\ CBAM[27] & ✓ & ✓ & 22689 & 7.525 & 0.0734 \\ BAM[19] & ✓ & ✓ & 17348 & 5.008 & 0.0060 \\ \hline \hline \end{tabular} \end{table} Table 5: Inference cost comparison of attention modules. Spatial and channel attention modules are studied by feeding a tensor of size \((1,\frac{H}{4},\frac{W}{4})\) corresponding to the concatenation of the reference and non-reference tensors after the encoding step.
\begin{table} \begin{tabular}{l|c c c c} \hline \hline Method & \(\mu\)-PSNR & PSNR & \(\mu\)-SSIM & SSIM \\ \hline Without attention module & 42.12 & 39.95 & 0.9850 & 0.9823 \\ AHDRNet[30] attention module & 42.94 & 40.49 & 0.9903 & 0.9852 \\ SCRAM-C & 42.32 & 40.14 & 0.9854 & 0.9829 \\ SCRAM-S & 42.89 & 40.41 & 0.9884 & 0.9835 \\ SCRAM & 43.05 & 40.53 & 0.9908 & 0.9856 \\ \hline \hline \end{tabular} \end{table} Table 6: Effect of the attention module on the performance of the proposed HDR deghosting network. SCRAM-C and SCRAM-S respectively correspond to the SCRAM module composed of only the channel attention branch and only the spatial attention branch.

Figure 6: Qualitative results of the proposed CEN-HDR solution on samples from the NTIRE [20] challenge dataset. The ground-truth images are not provided. We notice that the high sensor noise present in the input images is not fully corrected in the dark areas of the output HDR images. Moreover, the motion blur introduced in the dataset produces less sharp characteristics in the output image.
2310.18735
Curriculum Learning for Graph Neural Networks: Which Edges Should We Learn First
Graph Neural Networks (GNNs) have achieved great success in representing data with dependencies by recursively propagating and aggregating messages along the edges. However, edges in real-world graphs often have varying degrees of difficulty, and some edges may even be noisy to the downstream tasks. Therefore, existing GNNs may lead to suboptimal learned representations because they usually treat every edge in the graph equally. On the other hand, Curriculum Learning (CL), which mimics the human learning principle of learning data samples in a meaningful order, has been shown to be effective in improving the generalization ability and robustness of representation learners by gradually proceeding from easy to more difficult samples during training. Unfortunately, existing CL strategies are designed for independent data samples and cannot trivially generalize to handle data dependencies. To address these issues, we propose a novel CL strategy to gradually incorporate more edges into training according to their difficulty from easy to hard, where the degree of difficulty is measured by how well the edges are expected given the model training status. We demonstrate the strength of our proposed method in improving the generalization ability and robustness of learned representations through extensive experiments on nine synthetic datasets and nine real-world datasets. The code for our proposed method is available at https://github.com/rollingstonezz/Curriculum_learning_for_GNNs.
Zheng Zhang, Junxiang Wang, Liang Zhao
2023-10-28T15:35:34Z
http://arxiv.org/abs/2310.18735v1
# Curriculum Learning for Graph Neural Networks: Which Edges Should We Learn First

###### Abstract

Graph Neural Networks (GNNs) have achieved great success in representing data with dependencies by recursively propagating and aggregating messages along the edges. However, edges in real-world graphs often have varying degrees of difficulty, and some edges may even be noisy to the downstream tasks. Therefore, existing GNNs may lead to suboptimal learned representations because they usually treat every edge in the graph equally. On the other hand, Curriculum Learning (CL), which mimics the human learning principle of learning data samples in a meaningful order, has been shown to be effective in improving the generalization ability and robustness of representation learners by gradually proceeding from easy to more difficult samples during training. Unfortunately, existing CL strategies are designed for independent data samples and cannot trivially generalize to handle data dependencies. To address these issues, we propose a novel CL strategy to gradually incorporate more edges into training according to their difficulty from easy to hard, where the degree of difficulty is measured by how well the edges are expected given the model training status. We demonstrate the strength of our proposed method in improving the generalization ability and robustness of learned representations through extensive experiments on nine synthetic datasets and nine real-world datasets. The code for our proposed method is available at https://github.com/rollingstonezz/Curriculum_learning_for_GNNs.

## 1 Introduction

Inspired by cognitive science studies [8; 33] showing that humans can benefit from learning basic (easy) concepts first and advanced (hard) concepts later, curriculum learning (CL) [2] suggests training a machine learning model with easy data samples first and then gradually introducing harder samples according to a designed pace, where the difficulty of samples can usually be measured by their training loss [25]. Many previous studies have shown that this easy-to-hard learning strategy can effectively improve the generalization ability of the model [2; 19; 15; 11; 35; 46], and some studies [19; 15; 11] have shown that CL strategies can also increase the robustness of the learned model against noisy training samples. An intuitive explanation is that in CL settings noisy data samples correspond to harder samples, and the CL learner spends less time with the harder (noisy) samples, achieving better generalization performance and robustness.

Although CL strategies have achieved great success in many fields such as computer vision and natural language processing, existing methods are designed for independent data (such as images), while designing effective CL methods for data with dependencies has been largely underexplored. For example, in a citation network, two researchers with highly related research topics (e.g. machine learning and data mining) are more likely to collaborate with each other, while the reason behind a collaboration between two researchers with less related research topics (e.g. computer architecture and social science) might be more difficult to understand. Prediction on one sample impacts that of another, forming a graph structure that encompasses all samples connected by their dependencies.
There are many machine learning techniques for such graph-structured data, ranging from traditional models like conditional random fields [36] and graph kernels [37] to modern deep models like GNNs [29; 30; 52; 38; 49; 12; 53; 42]. However, traditional CL strategies are not designed to handle the curriculum of the dependencies between nodes in graph data and are therefore insufficient. Handling graph-structured data requires considering not only the difficulty of individual samples, but also the difficulty of their dependencies, in order to determine how to gradually composite correlated samples for learning. As previous CL strategies indicated that an easy-to-hard learning sequence on data samples can improve generalization and robustness, an intuitive question is whether a similar strategy on data dependencies, which iteratively involves easy-to-hard edges in learning, can also be beneficial. Unfortunately, there exists no trivial way to directly generalize existing CL strategies on independent data to handle data dependencies, due to several unique challenges: (1) **Difficulty in quantifying edge selection criteria**. Existing CL studies on independent data often use supervised computable metrics (e.g. training loss) to quantify sample difficulty, but quantifying the difficulty of understanding the dependencies between data samples, which have no supervision, is challenging. (2) **Difficulty in designing an appropriate curriculum to gradually involve edges**. Similar to the human learning process, the model should ideally retain a certain degree of freedom to adjust the pacing of including edges according to its own learning status. As existing CL methods for graph data typically use a fixed pacing function to involve samples, they cannot provide this flexibility. Designing an adaptive pacing function for graph data is difficult since it requires joint optimization of both the supervised learning tasks on nodes and the number of chosen edges. (3) **Difficulty in ensuring convergence and a numerically steady process for CL in graphs**. Discrete changes in the number of edges can cause drift in the optimal model parameters between training iterations. Guaranteeing a numerically stable learning process for CL on edges is challenging.

In order to address the aforementioned challenges, in this paper we propose a novel CL algorithm named **R**elational **C**urriculum **L**earning (**RCL**) to improve the generalization ability and robustness of representation learners on data with dependencies. To address the first challenge, we propose an approach that selects edges by quantifying their corresponding difficulties in a self-supervised learning manner. Specifically, at each training iteration, we choose the \(K\) easiest edges, whose corresponding relations are most well expected by the current model. Second, to design an appropriate learning pace for gradually involving more edges in training, we present the learning process as a concise optimization model, which automatically lets the model gradually increase the number \(K\) to involve more edges in training according to its own status. Third, to ensure convergence of the model optimization, we propose an alternating optimization algorithm with a theoretical convergence guarantee and an edge reweighting scheme to smooth the graph structure transition. Finally, we demonstrate the superior performance of RCL compared to state-of-the-art comparison methods through extensive experiments on both synthetic and real-world datasets.
## 2 Related Works

**Curriculum Learning (CL).** Bengio et al. [2] pioneered the concept of Curriculum Learning (CL) within the machine learning domain, aiming to improve model performance by gradually including easy to hard samples in training the model. Self-paced learning [25] measures the difficulty of samples by their training loss, addressing the issue in previous works that the difficulties of samples were determined by prior heuristic rules; the model can therefore adjust the curriculum of samples according to its own training status. Follow-up works [18; 17; 55] further proposed many supervised measurement metrics for determining curricula, for example, the diversity of samples [17] or the consistency of model predictions [55]. Meanwhile, many empirical and theoretical studies were proposed to explain why CL could lead to generalization improvement from different perspectives. For example, studies such as MentorNet [19] and Co-teaching [15] empirically found that utilizing a CL strategy can achieve better generalization performance when the given training data is noisy. [11] provided theoretical explanations of the denoising mechanism, namely that CL learners waste less time on noisy samples, as these are considered harder samples. Some studies [2; 35; 46; 13; 24] also realized that CL can help accelerate the optimization of non-convex objectives and improve the speed of convergence in the early stages of training. Despite this great success, most existing CL strategies are designed for independent data such as images, and there is little work on generalizing CL strategies to handle samples with dependencies. The few existing attempts on graph-structured data [26; 21; 28], such as [44; 5; 45; 28], simply treat nodes as independent samples and then apply CL strategies designed for independent data; they ignore the fundamental and unique dependency information carried by the structure in the data and thus cannot properly handle the correlation between data samples. Furthermore, these models are mostly based on heuristic sample selection strategies [5; 45; 28], which largely limits their generalizability.

**Graph structure learning.** Another stream of existing studies related to our work is _graph structure learning_. Recent studies have shown that GNN models are vulnerable to adversarial attacks on the graph structure [7; 48]. In order to address this issue, studies in _graph structure learning_ usually aim to jointly learn an optimized graph structure and the corresponding graph representations. Existing works [9; 4; 20; 54; 31] typically hypothesize that the intrinsic graph structure should be sparse or low rank, and obtain it from the original input graph by pruning "irrelevant" edges. Thus, they typically use predetermined methods [7; 56; 9] to preprocess the graph structure, such as singular value decomposition (SVD), or dynamically remove "redundant" edges according to the downstream task performance on the current sparsified structure [4; 20; 31]. However, modifying the graph topology will inevitably lose potentially useful information lying in the removed edges. More importantly, the modified graph structure is usually optimized for maximizing performance on the training set, which can easily lead to overfitting.

## 3 Preliminaries

Graph neural networks (GNNs) are a class of methods that have shown promising progress in representing structured data in which data samples are correlated with each other.
Typically, the data samples are treated as nodes while their dependencies are treated as edges in the constructed graph. Formally, we denote a graph as \(G=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}=\{v_{1},v_{2},\ldots,v_{N}\}\) is a set of nodes, \(N=|\mathcal{V}|\) denotes the number of nodes in the graph, and \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\) is the set of edges. We also let \(\mathbf{X}\in\mathbb{R}^{N\times b}\) denote the node attribute matrix and let \(\mathbf{A}\in\mathbb{R}^{N\times N}\) represent the adjacency matrix. Specifically, \(A_{ij}=1\) denotes that there is an edge connecting nodes \(v_{i}\) and \(v_{j}\in\mathcal{V}\); otherwise \(A_{ij}=0\). A GNN model \(f\) maps the node feature matrix \(\mathbf{X}\), together with the adjacency matrix \(\mathbf{A}\), to the model predictions \(\hat{\mathbf{y}}=f(\mathbf{X},\mathbf{A})\), and the loss is computed as \(L_{\mathrm{GNN}}=L(\hat{\mathbf{y}},\mathbf{y})\), where \(L\) is the objective function and \(\mathbf{y}\) is the ground-truth label of the nodes. The loss on one node \(v_{i}\) is denoted as \(l_{i}=L(\hat{y_{i}},y_{i})\).
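To make the notation concrete, the following PyTorch Geometric sketch shows a two-layer GCN backbone \(f\) and the per-node losses \(l_{i}\) used throughout this paper; the class and variable names are ours.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GCN(torch.nn.Module):
    """Two-layer GCN backbone f mapping (X, A) to predictions y_hat."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, n_classes)

    def forward(self, x, edge_index):
        z = F.relu(self.conv1(x, edge_index))  # latent node embeddings Z
        return self.conv2(z, edge_index)       # logits y_hat

# Per-node losses l_i (later used for edge reweighting in Section 4.3):
# logits = model(data.x, data.edge_index)
# l = F.cross_entropy(logits, data.y, reduction="none")  # one value per node
```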
## 4 Methodology

As previous CL methods have shown that an easy-to-hard learning sequence of independent data samples can improve the generalization ability and robustness of the representation learner, the goal of this paper is to develop an effective CL method for data with dependencies, which is extremely difficult due to several unique challenges: (1) difficulty in designing a feasible principle to select edges by properly quantifying their difficulties; (2) difficulty in designing an appropriate pace of curriculum to gradually involve more edges in training based on the model status; (3) difficulty in ensuring convergence and a numerically steady process for optimizing the CL model.

Figure 1: The overall framework of RCL. (a) The _Incremental Edge Selection_ module first extracts the latent node embedding by the GNN model given the current training structure, then jointly learns the node prediction label \(\mathbf{y}\) and reconstructs the input structure by a decoder. A small residual error on an edge indicates that the corresponding dependency is well expected and can thus be added to the refined structure for the next iteration. (b) The iterative learning process of RCL. The model starts with an empty structure and gradually includes more edges until the training structure converges to the input structure.

In order to address the above challenges, we propose a novel CL method named **R**elational **C**urriculum **L**earning (**RCL**). The sequence, which gradually includes edges from easy to hard, is called the _curriculum_ and is learned over the successive stages of training. In order to address the first challenge, we propose a self-supervised module, _Incremental Edge Selection (IES)_, shown in Figure 1(a), which selects at each training iteration the \(K\) easiest edges, i.e. those that are most well expected by the current model. The details are elaborated in Section 4.1. To address the second challenge, we present a joint optimization framework to automatically increase the number of selected edges \(K\) given the model's own training status. The framework is illustrated in Figure 1(b) and details can be found in Section 4.2. Finally, to ensure convergence of the optimization and stabilize the numerical process, we propose an EM-style alternating optimization algorithm with a theoretical convergence guarantee (Section 4.2, Algorithm 1) and an edge reweighting scheme to smooth the discrete edge incrementing process (Section 4.3).

### Incremental Edge Selection by Quantifying Difficulties of Sample Dependencies

Here we propose a novel way to select edges by first quantifying their difficulty levels. Existing works on independent data typically use supervised metrics, such as the training loss of samples, to quantify their difficulty level, but there exist no such supervised metrics for edges. To address this issue, we propose a self-supervised module, _Incremental Edge Selection (IES)_. We first quantify the difficulty of edges by measuring how well the edges are expected from the currently learned embeddings of their connected nodes. The most well-expected edges are then selected as the easiest edges for the next iteration of training. As shown in Figure 1(a), given the currently selected edges at iteration \(t\), we first feed them to the GNN model to extract the latent node embeddings. Then we restore the latent node embeddings to the original graph structure through a decoder, which we call the reconstruction of the original graph structure. The residual graph \(\mathbf{R}\), defined as the degree of mismatch between the original adjacency matrix \(\mathbf{A}\) and the reconstructed adjacency matrix \(\tilde{\mathbf{A}}^{(t)}\), can be considered a strong indicator of how well the edges are expected by the current model. Specifically, a smaller residual error indicates a higher probability of being a well-expected edge.

With this self-supervised method to measure the difficulty of edges, we now formulate the key learning paradigm of selecting the \(K\) easiest edges. To obtain the training adjacency matrix \(\mathbf{A}^{(t)}\) that will be fed into the GNN model \(f^{(t)}\), we introduce a learnable binary mask matrix \(\mathbf{S}\) with each element \(\mathbf{S}_{ij}\in\{0,1\}\). Thus, the training adjacency matrix at iteration \(t\) can be represented as \(\mathbf{A}^{(t)}=\mathbf{S}^{(t)}\odot\mathbf{A}\). To filter out the edges with the \(K\) smallest residual errors, we penalize the summarized residual errors over the selected edges, which can be represented as \(\sum_{i,j}\mathbf{S}_{ij}\mathbf{R}_{ij}\). Therefore, the learning objective can be presented as follows:

\[\begin{split}&\underset{\mathbf{w}}{\min}L_{\mathrm{GNN}}+\beta \sum_{i,j}\mathbf{S}_{ij}\mathbf{R}_{ij},\\ & s.t.\left\|\mathbf{S}\right\|_{1}\geq K,\end{split} \tag{1}\]

where the first term \(L_{\mathrm{GNN}}=L(f(\mathbf{X},\mathbf{A}^{(t)};\mathbf{w}),\mathbf{y})\) is the node-level predictive loss, e.g. the cross-entropy loss for the node classification task. The second term \(\sum_{i,j}\mathbf{S}_{ij}\mathbf{R}_{ij}\) penalizes the residual errors over the edges selected by the mask matrix \(\mathbf{S}\). \(\beta\) is a hyperparameter to tune the balance between the terms. The constraint guarantees that only the \(K\) most well-expected edges are selected. More concretely, the value of a reconstructed edge \(\tilde{\mathbf{A}}^{(t)}_{ij}\in[0,1]\) can be computed by a non-parametric kernel function \(\kappa(\mathbf{z}^{(t)}_{i},\mathbf{z}^{(t)}_{j})\), e.g. the inner product kernel.
Then the residual error \(\mathbf{R}_{ij}\) between the input structure and the reconstructed structure can be defined as \(\left\|\tilde{\mathbf{A}}^{(t)}_{ij}-\mathbf{A}_{ij}\right\|\), where \(\left\|\cdot\right\|\) is commonly chosen to be the squared \(\ell_{2}\)-norm.

### Automatically Control the Pace of Increasing Edges

In order to dynamically include more edges in training, an intuitive way is to iteratively increase the value of \(K\) in Equation 1 to allow more edges to be selected. However, it is difficult to determine an appropriate value of \(K\) with respect to the training status of the model. Besides, directly solving Equation 1 is difficult: since \(\mathbf{S}\) is a binary matrix with each element \(\mathbf{S}_{ij}\in\{0,1\}\), optimizing \(\mathbf{S}\) would require solving a discrete constrained program at each iteration. To address this issue, we first relax the problem into a continuous optimization so that each \(\mathbf{S}_{ij}\) is allowed to take any value in the interval \([0,1]\). Note that the inequality \(||\mathbf{S}||_{1}\geq K\) in Eqn. 1 is equivalent to the equality \(||\mathbf{S}||_{1}=K\), because the second term in the loss function always encourages fewer selected edges by the mask matrix \(\mathbf{S}\), as all values in the residual error matrix \(\mathbf{R}\) and the mask matrix \(\mathbf{S}\) are nonnegative. Given this, we can incorporate the equality constraint as a Lagrange multiplier and rewrite the loss function as \(\mathcal{L}=L_{\mathrm{GNN}}+\beta\sum_{i,j}\mathbf{S}_{ij}\mathbf{R}_{ij}-\lambda(||\mathbf{S}||_{1}-K)\). Considering that \(K\) remains constant, the optimization of the loss function can be equivalently framed by substituting the given constraint with a regularization term denoted as \(g(\mathbf{S};\lambda)\). As such, the overall loss function can be reformulated as:

\[\min_{\mathbf{w},\mathbf{S}}L_{\mathrm{GNN}}+\beta\sum_{i,j}\mathbf{S}_{ij} \mathbf{R}_{ij}+g(\mathbf{S};\lambda), \tag{2}\]

where \(g(\mathbf{S};\lambda)=\lambda\left\|\mathbf{S}-\mathbf{A}\right\|\) and \(\left\|\cdot\right\|\) is commonly chosen to be the squared \(\ell_{2}\)-norm. Since the training adjacency matrix is \(\mathbf{A}^{(t)}=\mathbf{S}^{(t)}\odot\mathbf{A}\), as \(\lambda\rightarrow\infty\), more edges of the input structure are included, until the training adjacency matrix \(\mathbf{A}^{(t)}\) converges to the input adjacency matrix \(\mathbf{A}\). Specifically, the regularization term \(g(\mathbf{S};\lambda)\) controls the learning scheme through the _age parameter_ \(\lambda\), where \(\lambda=\lambda(t)\) grows with the number of iterations. By monotonically increasing the value of \(\lambda\), the regularization term \(g(\mathbf{S};\lambda)\) pushes the mask matrix to gradually converge to the input adjacency matrix \(\mathbf{A}\), so that more edges are automatically involved in the training structure.

**Optimization of the learning objective.** In optimizing the objective function in Equation 2, we need to jointly optimize the parameters \(\mathbf{w}\) of the GNN model \(f\) and the mask matrix \(\mathbf{S}\). To tackle this, we introduce an EM-style optimization scheme (detailed in Algorithm 1) that iteratively updates both. The algorithm uses the node feature matrix \(\mathbf{X}\), the original adjacency matrix \(\mathbf{A}\), a step size \(\mu\) controlling the increase rate of the age parameter \(\lambda\), and a hyperparameter \(\gamma\) for regularization adjustments. After initializing \(\mathbf{w}\) and \(\mathbf{S}\), it alternates between: optimizing the GNN model \(f\) (step 3); extracting the latent node embeddings and reconstructing the adjacency matrix (steps 4 and 5); refining the mask matrix using the reconstructed matrix and the regularization term, so that more edges are gradually involved (step 6); updating the training adjacency matrix (step 7); and incrementing \(\lambda\) when the training matrix \(\mathbf{A}^{(t)}\) differs from the input matrix \(\mathbf{A}\), thereby incorporating more edges in the next iteration.
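The following sketch illustrates one way this alternation could look in PyTorch, using a dense adjacency matrix and an inner-product decoder for readability. The closed-form mask update is the elementwise minimizer of \(\beta\mathbf{S}_{ij}\mathbf{R}_{ij}+\lambda(\mathbf{S}_{ij}-\mathbf{A}_{ij})^{2}\) under the relaxation \(\mathbf{S}_{ij}\in[0,1]\); the assumption that the model returns both logits and embeddings, as well as all names, are ours, and the exact update rule of Algorithm 1 may differ in detail.

```python
import torch
import torch.nn.functional as F

def train_rcl(model, optimizer, x, y, A, iters, beta=1.0, mu=0.1):
    """Sketch of the EM-style loop of Algorithm 1 (names are ours).
    A is a dense (N, N) binary adjacency; S is the relaxed mask in [0, 1];
    `model` is assumed to return both logits and latent embeddings Z."""
    S, lam = torch.zeros_like(A), 1e-3           # start from a near-empty structure
    for t in range(iters):
        A_t = S * A                              # current training structure
        logits, Z = model(x, A_t)                # step 3: update GNN parameters w
        loss = F.cross_entropy(logits, y)
        optimizer.zero_grad(); loss.backward(); optimizer.step()

        with torch.no_grad():
            A_rec = torch.sigmoid(Z @ Z.T)       # steps 4-5: inner-product decoder
            R = (A_rec - A).pow(2)               # residual error on each edge
            # step 6: elementwise minimizer of beta*S*R + lam*(S - A)^2 in [0, 1]
            S = torch.clamp(A - beta * R / (2.0 * lam), 0.0, 1.0)
            lam += mu                            # grow the age parameter
    return model, S
```

Because \(\mathbf{A}_{ij}=0\) forces \(\mathbf{S}_{ij}=0\) after clamping, the mask only ever activates edges present in the input graph, and as \(\lambda\) grows the penalty on deviating from \(\mathbf{A}\) dominates, recovering the full structure.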
**Theorem 4.1**.: _We have the following convergence guarantees for Algorithm 1:_

_\(\bullet\) Avoidance of Saddle Points. If the second derivatives of \(L(f(\mathbf{X},\mathbf{A}^{(t)};\mathbf{w}),\mathbf{y})\) and \(g(\mathbf{S};\lambda)\) are continuous, then for sufficiently large \(\gamma\), any bounded sequence \((\mathbf{w}^{(t)},\mathbf{S}^{(t)})\) generated by Algorithm 1 with random initializations will not converge to a strict saddle point of \(F\) almost surely._

_\(\bullet\) Second Order Convergence. If the second derivatives of \(L(f(\mathbf{X},\mathbf{A}^{(t)};\mathbf{w}),\mathbf{y})\) and \(g(\mathbf{S};\lambda)\) are continuous, and \(L(f(\mathbf{X},\mathbf{A}^{(t)};\mathbf{w}),\mathbf{y})\) and \(g(\mathbf{S};\lambda)\) satisfy the Kurdyka-Łojasiewicz (KL) property [41], then for sufficiently large \(\gamma\), any bounded sequence \((\mathbf{w}^{(t)},\mathbf{S}^{(t)})\) generated by Algorithm 1 with random initialization will almost surely converge to a second-order stationary point of \(F\)._

The detailed proof can be found in Appendix B.

### Smooth Structure Transition by Edge Reweighting

Note that in Algorithm 1, the optimization process requires iteratively updating the parameters \(\mathbf{w}\) of the GNN model \(f\) and the current adjacency matrix \(\mathbf{A}^{(t)}\), where \(\mathbf{A}^{(t)}\) varies discretely between iterations. However, GNN models mostly work in a message-passing fashion, computing node representations by iteratively aggregating information along the edges from neighboring nodes. Discretely modifying the number of edges will result in a large drift of the optimal model parameters between iterations. In the Appendix, we demonstrate that a shift in the optimal parameters of the GNN results in a spike in the training loss. It can therefore increase the difficulty of finding optimal parameters and even hurt the generalization ability of the model in some cases. Besides the numerical problem caused by discretely increasing the number of edges, another issue raised by the CL strategy in Section 4.1 is the trustworthiness of the estimated edge difficulty, which is inferred from the residual error on the edges. Although the residual error can reflect how well edges are expected in the ideal case, the quality of the learned latent node embeddings may affect the validity of this metric and compromise the quality of the curriculum designed by the CL strategy. To address both issues, we propose a novel edge reweighting scheme to (1) smooth the transition of the training structure between iterations, and (2) reduce the weight of edges that connect nodes with low-confidence latent embeddings.
Formally, we use a smoothed version of the structure, \(\bar{\mathbf{A}}^{(t)}\), to substitute for \(\mathbf{A}^{(t)}\) when training the GNN model \(f\) in step 3 of Algorithm 1, where the mapping from \(\mathbf{A}^{(t)}\) to \(\bar{\mathbf{A}}^{(t)}\) can be represented as:

\[\bar{\mathbf{A}}^{(t)}_{ij}=\pi^{(t)}_{ij}\mathbf{A}^{(t)}_{ij}, \tag{3}\]

where \(\pi^{(t)}_{ij}\) is the weight imposed on edge \(e_{ij}\) at iteration \(t\). \(\pi^{(t)}_{ij}\) is calculated from the counted occurrences of edge \(e_{ij}\) up to iteration \(t\) and the confidence of the latent embeddings of the connected pair of nodes \(v_{i}\) and \(v_{j}\):

\[\pi^{(t)}_{ij}=\psi(e_{ij})\rho(v_{i})\rho(v_{j}), \tag{4}\]

where \(\psi\) is a function reflecting the number of edge occurrences and \(\rho\) is a function reflecting the degree of confidence in the learned latent node embedding. The details of these two functions are described as follows.

**Smooth the transition of the training structure between iterations.** In order to obtain a smooth transition of the training structure between iterations, we take the learned curriculum of selected edges into consideration. Formally, we model \(\psi\) as a smooth function of the number of times an edge has been selected relative to the number of training iterations so far:

\[\psi(e_{ij})=t(e_{ij})/t, \tag{5}\]

where \(t\) is the index of the current iteration and \(t(e_{ij})\) counts how often edge \(e_{ij}\) has been selected. We thereby transform the originally discretely changing training structure into a smoothly changing one by taking the historical edge selection curriculum into consideration.

**Reduce the influence of nodes with low-confidence latent embeddings.** As introduced in line 6 of Algorithm 1, the estimated structure \(\tilde{A}\) is inferred from the latent embedding \(\mathbf{Z}\), which is extracted from the trained GNN model \(f\). The estimated latent embeddings may differ from the true underlying embeddings, resulting in an inaccurately reconstructed structure around the affected nodes. In order to alleviate this issue, we model the function \(\rho\) by the training loss on the nodes, which indicates the confidence of their learned latent embeddings. This idea is similar to previous CL strategies that infer the difficulty of data samples from their supervised training loss. Specifically, a larger training loss indicates a less confident latent node embedding. Mathematically, the weight \(\rho(v_{i})\) of node \(v_{i}\) can be represented as a distribution over its training loss:

\[\rho(v_{i})\sim e^{-l_{i}} \tag{6}\]

where \(l_{i}\) is the training loss on node \(v_{i}\). Therefore, a node with a larger training loss results in a smaller value of \(\rho(v_{i})\), which reduces the weight of its connecting edges.
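In code, the reweighting of Eqs. 3-6 amounts to a few elementwise operations; the following sketch (our names) assumes the per-edge selection counts and per-node losses are tracked during training.

```python
import torch

def reweight_edges(A_t, select_counts, t, node_losses):
    """Edge reweighting of Eqs. 3-6 (sketch).
    A_t:           (N, N) current binary training adjacency A^(t)
    select_counts: (N, N) number of iterations each edge has been selected
    t:             current iteration index (>= 1)
    node_losses:   (N,) per-node training losses l_i"""
    psi = select_counts / t                          # Eq. 5: selection frequency
    rho = torch.exp(-node_losses)                    # Eq. 6: embedding confidence
    pi = psi * rho.unsqueeze(0) * rho.unsqueeze(1)   # Eq. 4: pi_ij = psi * rho_i * rho_j
    return pi * A_t                                  # Eq. 3: smoothed adjacency
```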
## 5 Experiments

In this section, the experimental settings are introduced first in Section 5.1; then the performance of the proposed method on both synthetic and real-world datasets is presented in Section 5.2. We further present the robustness test of our CL method against topological structure noise in Section 5.3. We verify the effectiveness of the framework components through ablation studies in Section 5.4. Intuitive visualizations of the edge selection curriculum are shown in Section 5.5. In addition, we measure the parameter sensitivity in Appendix A.2 and report a running time analysis in Appendix A.5 due to the space limit.

### Experimental Settings

**Synthetic datasets.** To evaluate the effectiveness of our proposed method on datasets with ground-truth difficulty labels on edges, we follow previous studies [22; 1] to generate a set of synthetic datasets, where the formation probability of an edge is designed to reflect its likelihood of positively contributing to the node classification task, which indicates its ground-truth difficulty level (see the sketch below). Specifically, the nodes in a generated graph are divided into 10 equally sized node classes \(1,2,\ldots,10\), and the node features are sampled from overlapping multi-Gaussian distributions. Each generated graph is associated with a _homophily coefficient (homo)_, which indicates the probability of a node forming an edge to another node with the same label. For the remaining edges, formed between nodes with different labels, the probability of forming an edge is inversely proportional to the distance between their labels. Nodes with close classes are thus more likely to be connected, and connections from nodes with close classes increase the likelihood of accurately classifying a node, due to the homophily property of the designed node classification task. Therefore, an edge with a high formation probability has a higher chance of positively contributing to the node classification task, because it connects nodes with close classes, and can thus be considered an easy edge. We vary the value of _homo_ to generate nine graphs in total. More details and a visualization of the synthetic datasets can be found in Appendix A.1.

**Real-world datasets.** To further evaluate the performance of our proposed method in real-world scenarios, we use nine benchmark real-world attributed network datasets: four citation networks, Cora, Citeseer, Pubmed [51], and ogbn-arxiv [16]; two coauthor networks, CS and Physics [34]; two Amazon co-purchase networks, Photo and Computers [34]; and one protein interaction network, ogbn-proteins [16]. We follow the data splits from [3] on the citation networks and use a 5-fold cross-validation setting on the coauthor and Amazon co-purchase networks. All datasets are publicly available from the Pytorch-geometric library [10] and the Open Graph Benchmark (OGB) [16]; basic statistics are reported in Table 2.

**Comparison methods.** We incorporate three commonly used GNN models, GCN [23], GraphSAGE [14], and GIN [50], as the baseline models and also the backbone models for RCL. In addition to evaluating our proposed method against the baseline GNNs, we further leverage two categories of state-of-the-art comparison methods in the experiments: (1) four graph structure learning methods, GNNSVD [9], ProGNN [20], NeuralSparse [54], and PTDNet [31]; (2) a curriculum learning method named CLNode [45], which gradually selects nodes in the order of difficulties defined by a heuristic-based strategy. More details about the comparison methods can be found in Appendix A.1.
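For illustration, a minimal NumPy sketch of such a generator is given below. Node features and several constants are omitted or assumed (e.g. we assume the cross-class probability decays exactly as the inverse label distance), so this is a sketch of the described procedure rather than the exact generator of [22; 1].

```python
import numpy as np

def synthetic_graph(n_nodes=1000, n_classes=10, homo=0.5, n_edges=5000, seed=0):
    """Homophily-controlled edge generator (sketch, our names).
    With probability `homo` an edge connects two same-class nodes; otherwise
    the partner class is drawn with probability inversely proportional to the
    label distance, so edges between close classes are more likely (easier)."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, n_classes, n_nodes)   # 10 equally likely classes
    edges = set()
    while len(edges) < n_edges:
        u = int(rng.integers(n_nodes))
        if rng.random() < homo:
            pool = np.flatnonzero(labels == labels[u])       # same-class partner
        else:
            dist = np.abs(np.arange(n_classes) - labels[u])
            w = np.where(dist > 0, 1.0 / np.maximum(dist, 1), 0.0)
            c = rng.choice(n_classes, p=w / w.sum())         # biased partner class
            pool = np.flatnonzero(labels == c)
        v = int(rng.choice(pool))
        if u != v:
            edges.add((min(u, v), max(u, v)))
    return labels, sorted(edges)
```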
\begin{table} \begin{tabular}{c|c c c c c c c c c} \hline \hline Homo ratio & **0.1** & **0.2** & **0.3** & **0.4** & **0.5** & **0.6** & **0.7** & **0.8** & **0.9** \\ \hline GCN & 50.84\(\pm\)1.03 & 56.50\(\pm\)0.50 & 65.17\(\pm\)0.48 & 77.94\(\pm\)0.54 & 87.15\(\pm\)0.44 & 93.27\(\pm\)0.24 & 97.48\(\pm\)0.25 & 99.10\(\pm\)0.17 & 92.93\(\pm\)0.03 \\ GNNSVD & 54.96\(\pm\)0.76 & 58.54\(\pm\)0.56 & 63.06\(\pm\)0.63 & 70.23\(\pm\)0.61 & 80.51\(\pm\)0.51 & 85.02\(\pm\)0.46 & 90.31\(\pm\)0.27 & 94.23\(\pm\)0.22 & 96.74\(\pm\)0.23 \\ ProGNN & 47.87\(\pm\)0.87 & 54.59\(\pm\)0.55 & 63.94\(\pm\)0.44 & 79.66\(\pm\)0.49 & 87.67\(\pm\)0.51 & 93.16\(\pm\)0.34 & 97.06\(\pm\)0.31 & 99.01\(\pm\)0.19 & **99.94\(\pm\)0.03** \\ NeuralSparse & 51.42\(\pm\)1.35 & 57.99\(\pm\)0.69 & 65.10\(\pm\)0.43 & 75.73\(\pm\)0.34 & 87.40\(\pm\)0.29 & 93.54\(\pm\)0.28 & 97.16\(\pm\)0.15 & 99.01\(\pm\)0.22 & 98.93\(\pm\)0.07 \\ PTDNet & 48.21\(\pm\)1.98 & 55.52\(\pm\)0.82 & 65.04\(\pm\)0.94 & 79.37\(\pm\)0.45 & 87.10\(\pm\)0.39 & 94.19\(\pm\)0.18 & 98.61\(\pm\)0.12 & 99.51\(\pm\)0.09 & 98.91\(\pm\)0.05 \\ CLNode & 50.37\(\pm\)0.73 & 56.64\(\pm\)0.56 & 65.04\(\pm\)0.66 & 77.52\(\pm\)0.48 & 86.85\(\pm\)0.44 & 93.10\(\pm\)0.47 & 97.34\(\pm\)0.25 & 99.02\(\pm\)0.18 & 99.88\(\pm\)0.04 \\ RCL & **57.57\(\pm\)0.43** & **62.06\(\pm\)0.28** & **73.98\(\pm\)0.58** & **84.45\(\pm\)0.75** & **92.69\(\pm\)0.09** & **97.42\(\pm\)0.17** & **96.92\(\pm\)0.05** & **99.98\(\pm\)0.02** & 99.93\(\pm\)0.06 \\ \hline GIN & 48.33\(\pm\)1.39 & 53.35\(\pm\)0.19 & 54.09\(\pm\)0.19 & 75.75\(\pm\)1.10 & 83.51\(\pm\)0.19 & 90.57\(\pm\)0.36 & 97.32\(\pm\)0.18 & 99.95\(\pm\)0.10 & 99.91\(\pm\)0.02 \\ GNNSVD & 43.21\(\pm\)1.60 & 45.68\(\pm\)1.66 & 54.90\(\pm\)1.68 & 62.89\(\pm\)0.79 & 79.76\(\pm\)0.52 & 85.63\(\pm\)0.44 & 93.65\(\pm\)0.39 & 97.24\(\pm\)0.17 & 98.94\(\pm\)0.17 \\ ProGNN & 45.76\(\pm\)1.64 & 29.50\(\pm\)1.67 & 14.07\(\pm\)0.76 & 79.65\(\pm\)0.43 & 87.83\(\pm\)0.17 & 89.96\(\pm\)0.55 & 98.40\(\pm\)0.48 & 95.10\(\pm\)0.12 & 99.78\(\pm\)0.05 \\ NeuralSparse & 50.23\(\pm\)2.05 & 54.12\(\pm\)1.52 & 62.81\(\pm\)0.75 & 76.98\(\pm\)1.17 & 85.14\(\pm\)0.94 & 92.57\(\pm\)0.44 & 98.02\(\pm\)0.20 & 99.61\(\pm\)0.12 & 99.91\(\pm\)0.05 \\ PTDNet & 53.23\(\pm\)2.76 & 56.12\(\pm\)2.03 & 65.81\(\pm\)1.38 & 77.81\(\pm\)1.02 & 86.14\(\pm\)0.45 & 93.21\(\pm\)0.74 & 97.08\(\pm\)0.41 & 97.51\(\pm\)0.18 & **99.91\(\pm\)0.03** \\ CLNode & 43.56\(\pm\)1.42 & 51.10\(\pm\)1.15 & 62.38\(\pm\)0.78 & 75.83\(\pm\)1.07 & 87.66\(\pm\)0.90 & 94.25\(\pm\)0.44 & 98.06\(\pm\)0.26 & **99.06\(\pm\)0.09** & **99.92\(\pm\)0.03** \\ RCL & **57.63\(\pm\)0.66** & **62.08\(\pm\)1.17** & **71.02\(\pm\)0.61** & **86.01\(\pm\)0.69** & **88.62\(\pm\)0.43** & **94.88\(\pm\)0.36** & 98.19\(\pm\)0.19 & 99.92\(\pm\)0.08 & 99.99\(\pm\)0.04 \\ \hline GraphSAGE & 62.57\(\pm\)0.55 & 63.73\(\pm\)0.43 & 76.17\(\pm\)0.44 & 78.08\(\pm\)0.54 & 75.88\(\pm\)0.51 & 91.42\(\pm\)0.37 & 95.26\(\pm\)0.33 & 97.78\(\pm\)0.16 & 95.20\(\pm\)0.13 \\ GNNSVD & 64.42\(\pm\)0.20 & 66.71\(\pm\)0.39 & 67.12\(\pm\)0.58 & 68.87\(\pm\)0.50 & 77.70\(\pm\)0.65 & 82.86\(\pm\)0.50 & 87.81\(\pm\)0.71 & 91.61\(\pm\)0.55 & 95.01\(\pm\)0.50 \\ \hline \hline \end{tabular} \end{table} Table 1: Node classification results (average accuracy \(\pm\) standard deviation, in %) on the synthetic datasets with varying homophily coefficients.

**Initializing the graph structure by a pre-trained model.** It is worth noting that the model needs an initial training graph structure \(\mathbf{A}^{(0)}\) at the start of training. An intuitive choice is to let the model work in a purely data-driven scenario that starts only with isolated nodes, where no edges exist.
However, an instructive initial structure can greatly reduce the search cost and computational burden. Inspired by many previous CL works [46; 13; 19; 55] that incorporate the prior knowledge of a pre-trained model into designing a curriculum for the current model, we initialize the training structure \(\mathbf{A}^{(0)}\) with a pre-trained vanilla GNN model \(f^{*}\). Specifically, we follow the same steps as lines 4 to 7 of Algorithm 1 to obtain the initial training structure \(\mathbf{A}^{(0)}\), but the latent node embeddings are extracted from the pre-trained model \(f^{*}\).

**Implementation details.** We use the baseline model (GCN, GIN, GraphSAGE) as the backbone model for both our RCL method and all comparison methods. For a fair comparison, all models follow the same GNN architecture with two convolution layers. For each split, we run each model 10 times to reduce the variance induced by particular data splits. Test results correspond to the best validation results. General training hyperparameters (such as the learning rate or the number of training epochs) are identical for all models.

### Effectiveness Results

Table 1 presents the node classification results on the synthetic datasets. We report the average accuracy and standard deviation for each model against the _homo_ of the generated graphs. We observe that our proposed method RCL consistently achieves the best or most competitive performance compared to all comparison methods over the three backbone GNN architectures. Specifically, RCL outperforms the second-best method on average by 4.17%, 2.60%, and 1.06% with GCN, GIN, and GraphSAGE backbones, respectively. More importantly, the proposed RCL method performs significantly better than the second-best model when the _homo_ of the generated graphs is low (\(\leq 0.5\)): on average by 6.55% on GCN, 4.17% on GIN, and 2.93% on GraphSAGE backbones. This demonstrates that RCL significantly improves the model's capability of learning effective representations for downstream tasks, especially when the edge difficulties vary widely in the data.

We report the experimental results on the real-world datasets in Table 2. The results demonstrate the strength of our proposed method, which consistently achieves the best results on all 9 datasets with the GCN backbone, all 9 datasets with the GraphSAGE backbone, and 8 out of 9 datasets with the GIN backbone. Specifically, our proposed method improves the performance of the baseline models on average by 1.86%, 2.83%, and 1.62% over GCN, GIN, and GraphSAGE, and outperforms the second-best model on average by 1.37%, 2.49%, and 1.22% over the three backbone models, respectively. The results demonstrate that the proposed RCL method consistently improves the performance of GNN models in real-world scenarios. Our experimental results are also statistically sound: in 43 out of 48 tasks our method outperforms the second-best performing model with strong statistical significance.
Specifically, we observe a significance of \(p<0.001\) in 30 out of the 43 cases, \(p<0.01\) in 8 out of the 43 cases, and \(p<0.05\) in the remaining 5 cases. These statistical significance results demonstrate that our proposed method consistently performs better than the baseline models in both scenarios.

Table 2: Dataset statistics and node classification accuracy (%) on the real-world datasets. (OOM) denotes out of memory.

| | Cora | Citeseer | Pubmed | CS | Physics | Photo | Computers | ogbn-arxiv | ogbn-proteins |
|---|---|---|---|---|---|---|---|---|---|
| # nodes | 2,708 | 3,327 | 19,717 | 18,333 | 34,493 | 7,650 | 13,752 | 169,343 | 132,534 |
| # edges | 10,556 | 9,104 | 88,648 | 163,788 | 495,924 | 238,162 | 491,722 | 1,166,243 | 39,561,252 |
| # features | 1,433 | 3,703 | 500 | 6,805 | 8,415 | 745 | 767 | 100 | 8 |

**GCN backbone**

| Method | Cora | Citeseer | Pubmed | CS | Physics | Photo | Computers | ogbn-arxiv | ogbn-proteins |
|---|---|---|---|---|---|---|---|---|---|
| GCN | 85.74±0.42 | 78.93±0.32 | 87.91±0.09 | 93.03±0.32 | 96.55±0.15 | 93.25±0.70 | 88.99±0.40 | 71.74±0.29 | 72.51±0.35 |
| GNNSVD | 83.24±1.03 | 78.40±0.87 | 88.81±0.38 | 93.79±0.11 | 96.11±0.13 | 96.83±0.73 | 86.49±0.77 | 67.44±0.51 | 66.92±0.64 |
| ProGNN | 85.66±0.10 | 74.78±0.55 | 87.22±0.33 | 94.04±0.19 | 95.55±0.26 | 92.07±0.67 | 87.22±0.59 | (OOM) | (OOM) |
| NeuralSparse | 85.95±0.86 | 74.24±0.48 | 86.33±0.40 | 93.21±0.47 | 95.66±0.30 | 90.57±0.89 | 86.22±0.83 | (OOM) | (OOM) |
| PTDNet | 83.84±0.95 | 75.44±0.42 | 88.79±0.08 | 93.04±0.43 | 95.60±0.89 | 89.82±0.87 | 87.52±0.70 | (OOM) | (OOM) |
| CLNode | 85.67±0.33 | 78.90±0.57 | 89.50±0.28 | 93.83±0.24 | 95.76±0.16 | 93.19±0.83 | 89.28±0.38 | 70.95±0.18 | 71.40±0.32 |
| RCL | **87.15±0.35** | **79.79±0.55** | **89.97±0.12** | **94.66±0.30** | **97.02±0.33** | **97.02±0.32** | **90.23±0.23** | **73.05±0.36** | **73.19±0.36** |

**GIN backbone**

| Method | Cora | Citeseer | Pubmed | CS | Physics | Photo | Computers | ogbn-arxiv | ogbn-proteins |
|---|---|---|---|---|---|---|---|---|---|
| GIN | 84.43±0.65 | 74.87±0.87 | 85.72±0.20 | 91.48±0.36 | 95.62±0.30 | 93.02±0.91 | 89.94±1.38 | 69.26±0.34 | 74.51±0.32 |
| GNNSVD | 82.23±0.65 | 72.11±0.70 | 83.81±0.15 | 91.40±0.87 | 95.30±0.29 | 94.91±1.11 | 86.26±0.26 | 67.79±0.41 | 76.65±0.43 |
| ProGNN | 85.02±0.61 | **78.12±0.82** | 87.73±0.51 | 97.82±0.51 | (OOM) | 92.23±0.67 | 85.41±0.48 | (OOM) | (OOM) |
| NeuralSparse | 84.92±0.58 | 75.48±0.47 | 86.11±0.49 | 89.66±0.82 | 95.05±0.57 | 93.28±0.83 | 87.22±0.54 | (OOM) | (OOM) |
| PTDNet | 83.02±0.11 | 75.00±0.74 | 88.80±0.92 | 91.91±0.21 | 95.57±0.40 | 90.70±0.76 | 87.08±0.65 | (OOM) | (OOM) |
| CLNode | 83.52±0.77 | 75.82±0.05 | 88.69±0.61 | 91.71±0.41 | 95.75±0.46 | 92.78±0.90 | 85.93±1.53 | 70.58±0.17 | 73.97±0.31 |
| RCL | **86.41±0.39** | 76.00±0.18 | **89.71±0.29** | **93.92±0.27** | **96.76±0.17** | **93.83±0.51** | **89.76±0.19** | **73.25±0.18** | **73.78±0.12** |

**GraphSAGE backbone**

| Method | Cora | Citeseer | Pubmed | CS | Physics | Photo | Computers | ogbn-arxiv | ogbn-proteins |
|---|---|---|---|---|---|---|---|---|---|
| GraphSAGE | 86.20±0.27 | 77.27±0.25 | 88.50±0.62 | 94.21±0.18 | 93.26±0.14 | 93.28±0.51 | 88.62±0.21 | 71.19±… | … |
| … | … | … | … | … | … | … | … | … | … |
### Robustness Analysis Against Topological Noise

To further examine the robustness of the RCL method in extracting powerful representations from correlated data samples, we follow previous works [20; 31] and randomly inject fake edges into the real-world graphs. This adversarial attack can be viewed as adding random noise to the topological structure of the graphs. Specifically, we randomly connect \(M\) pairs of previously unlinked nodes in the real-world datasets, where the value of \(M\) varies from 10% to 100% of the number of original edges. We then train RCL and all comparison methods on the attacked graphs and evaluate the node classification performance. The results are shown in Figure 2: RCL shows strong robustness to adversarial structural attacks by consistently outperforming all compared methods on all datasets. When the proportion of added noisy edges is large (\(>50\%\)), the improvement becomes especially significant. For instance, at the extreme noise ratio of 100%, RCL outperforms the second-best model by 4.43% and 2.83% on the Cora dataset, and by 6.13% and 3.47% on the Citeseer dataset, with GCN and GIN backbone models, respectively.

Figure 2: Node classification accuracy (%) on Cora and Citeseer under random structure attack. The attack edge ratio is computed relative to the original number of edges; 100% means that the number of inserted edges equals the number of original edges.

### Ablation Study

To compare our proposed model with simpler heuristics, we conduct a series of ablation analyses. We first train the model purely on the node classification task and select the top-\(K\) expected edges. Specifically, we follow previous works [43; 45] and use two classical selection pacing functions:

\[\mathrm{Linear}\colon K_{\mathrm{linear}}(t)=\frac{t}{T}|E|;\qquad\mathrm{Root}\colon K_{\mathrm{root}}(t)=\sqrt{\frac{t}{T}}|E|,\]

where \(t\) is the current iteration, \(T\) is the total number of iterations, and \(|E|\) is the total number of edges. We name these two variants Curriculum-linear and Curriculum-root, respectively (a small sketch of these pacing functions is given below). In addition, we remove the edge difficulty measurement module and use random selection instead: we gradually incorporate more edges into training in random order to verify the effectiveness of the learned curriculum. We name these two variants Random-linear and Random-root, using the two pacing functions above. To further investigate the impact of the proposed components of RCL, we also consider variants that remove the edge smoothing components mentioned in Section 4.3.
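For concreteness, a minimal sketch of the two pacing functions and the resulting top-\(K\) edge selection. The `edge_scores` array and its easy-to-hard ordering are illustrative assumptions; RCL ranks edges by its learned difficulty measure:

```python
import numpy as np

def pacing_k(t, T, n_edges, mode="linear"):
    """Number of edges admitted at iteration t (the two pacing functions above)."""
    frac = t / T if mode == "linear" else np.sqrt(t / T)
    return int(frac * n_edges)

def select_edges(edge_scores, t, T, mode="linear"):
    """Keep the K easiest edges; lower score = easier edge (assumed ordering)."""
    k = pacing_k(t, T, len(edge_scores), mode)
    return np.argsort(edge_scores)[:k]    # indices of the k lowest-difficulty edges

# At the halfway point (t = T/2), root pacing admits ~71% of the edges,
# whereas linear pacing admits exactly 50%.
scores = np.random.rand(1000)
print(len(select_edges(scores, t=50, T=100, mode="root")))    # 707
print(len(select_edges(scores, t=50, T=100, mode="linear")))  # 500
```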
Specifically, we consider two variants, _w/o edge appearance_ and _w/o node confidence_, which remove the smoothing function on the edge occurrence ratio and the component reflecting the degree of confidence in the latent node embeddings, respectively. Beyond the edge smoothing components, we further consider a variant _w/o pre-trained model_ that does not use a pre-trained model to initialize the training structure (as described in Section 5.1) and instead starts from the structure inferred from isolated nodes with no connections.

Table 3: Ablation study. "Full" represents the original method without removing any component. The best-performing method on each dataset is highlighted in bold.

| Variant | Synthetic1 (homo=0.3) | Synthetic2 (homo=0.6) | Citeseer | CS | Computers |
|---|---|---|---|---|---|
| Full | **73.98±0.55** | **97.42±0.17** | **79.79±0.55** | **94.66±0.22** | **90.23±0.23** |
| Curriculum-linear | 70.93±0.54 | 95.19±0.19 | 79.04±0.38 | 94.14±0.26 | 89.28±0.21 |
| Curriculum-root | 70.13±0.72 | 95.50±0.18 | 78.72±0.54 | 94.74±0.34 | 89.27±0.15 |
| Random-linear | 58.76±0.46 | 98.70±0.17 | 77.43±0.49 | 29.76±0.14 | 88.76±0.18 |
| Random-root | 61.04±0.20 | 91.04±0.09 | 76.81±0.35 | 92.92±0.15 | 88.81±0.28 |
| w/o edge appearance | 70.70±0.43 | 95.77±0.16 | 77.77±0.65 | 43.99±0.21 | 89.56±0.30 |
| w/o node confidence | 72.38±0.41 | 69.86±0.17 | 78.72±0.72 | 94.34±0.13 | 90.03±0.62 |
| w/o pre-trained model | 72.56±0.69 | 93.89±0.14 | 78.28±0.77 | 94.50±0.14 | 89.00±0.55 |

We present the results on two synthetic datasets (_homophily coefficient_ \(=0.3,0.6\)) and three real-world datasets in Table 3. We summarize our findings as follows: (i) Our full model consistently outperforms the two variants Curriculum-linear and Curriculum-root by an average of 1.59% over all datasets, suggesting that our pacing module benefits model training. It is worth noting that these two variants still outperform the vanilla GNN baseline by an average of 1.92%, which supports the assumption that even a simple curriculum learning strategy can improve model performance. (ii) The performance of the two variants Random-linear and Random-root drops by 3.86% on average across all datasets compared to Curriculum-linear and Curriculum-root. This demonstrates the effectiveness of our proposed edge difficulty quantification module by showing that randomly involving edges in training does not benefit model performance. (iii) We observe a significant and consistent performance drop for all variants that remove the structural smoothing techniques and the initialization component. The results validate that all structural smoothing and initialization components benefit the performance of RCL on the downstream tasks.

### Visualization of Learned Edge Selection Curriculum

Besides the effectiveness and robustness of the RCL method on downstream classification results, it is also interesting to verify whether the learned edge selection curriculum follows the easy-to-hard rule.
Since real-world datasets do not have ground-truth difficulty labels on edges, we conduct visualization experiments on the synthetic datasets, where the difficulty of each edge is indicated by its formation probability. Specifically, we classify edges into three balanced categories according to their difficulty: easy, medium, and hard. We define all homogeneous edges that connect nodes of the same class as easy, edges connecting nodes with adjacent classes as medium, and the remaining edges connecting nodes with far-away classes as hard. We report the proportion of selected edges for each category during training in Figure 3. We observe that RCL effectively selects most of the easy edges at the early stage of training; more easy edges and most medium edges are gradually included as training proceeds, and most hard edges are left unselected until the final stage of training. This edge selection behavior is highly consistent with the core idea of designing a curriculum for edge selection, which verifies that our proposed method effectively designs curricula that select edges from easy to hard according to their difficulty.

Figure 3: Visualization of the edge selection process during training.

## 6 Conclusion

This paper focuses on developing a novel CL method to improve the generalization ability and robustness of GNN models when learning representations of data samples with dependencies. The proposed method, **R**elational **C**urriculum **L**earning (**RCL**), effectively addresses the unique challenges of designing a CL strategy that handles dependencies. First, a self-supervised learning module is developed to select appropriate edges that are expected by the model. Then, an optimization model is presented to iteratively increment the edges according to the model training status, and a theoretical guarantee of the convergence of the optimization algorithm is given. Finally, an edge reweighting scheme is proposed to stabilize the numerical process by smoothing the transition of the training structure. Extensive experiments on synthetic and real-world datasets demonstrate the strength of RCL in improving generalization ability and robustness.

## Acknowledgement

This work was supported by the National Science Foundation (NSF) Grant No. 1755850, No. 1841520, No. 2007716, No. 2007976, No. 1942594, No. 1907805, a Jeffress Memorial Trust Award, an Amazon Research Award, an NVIDIA GPU Grant, and the Design Knowledge Company (subcontract number: 10827.002.120.04). The authors acknowledge the Emory Computer Science department for providing computational resources and technical support that contributed to the experimental results reported in this paper.
2301.05031
Explicit Context Integrated Recurrent Neural Network for Sensor Data Applications
The development and progress in sensor, communication and computing technologies have led to data-rich environments. In such environments, data can easily be acquired not only from the monitored entities but also from the surroundings where the entity is operating. The additional data that are available from the problem domain, which cannot be used independently for learning models, constitute context. Such context, if taken into account while learning, can potentially improve the performance of predictive models. Typically, the data from various sensors are present in the form of time series. Recurrent Neural Networks (RNNs) are preferred for such data as they can inherently handle temporal context. However, the conventional RNN models such as Elman RNN, Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) in their present form do not provide any mechanism to integrate explicit contexts. In this paper, we propose a Context Integrated RNN (CiRNN) that enables integrating explicit contexts represented in the form of contextual features. In CiRNN, the network weights are influenced by contextual features in such a way that the primary input features which are more relevant to a given context are given more importance. To show the efficacy of CiRNN, we selected an application domain, engine health prognostics, which captures data from various sensors and where contextual information is available. We used the NASA Turbofan Engine Degradation Simulation dataset for estimating Remaining Useful Life (RUL) as it provides contextual information. We compared CiRNN with baseline models as well as state-of-the-art methods. The experimental results show improvements of 39% and 87%, respectively, over state-of-the-art models, when performance is measured with RMSE and with the score from an asymmetric scoring function. The latter measure is specific to the task of RUL estimation.
Rashmi Dutta Baruah, Mario Muñoz Organero
2023-01-12T13:58:56Z
http://arxiv.org/abs/2301.05031v1
# Explicit Context Integrated Recurrent Neural Network for Sensor Data Applications

###### Abstract

The development and progress in sensor, communication and computing technologies have led to data-rich environments. In such environments, data can easily be acquired not only from the monitored entities but also from the surroundings where the entity is operating. The additional data that are available from the problem domain, which cannot be used independently for learning models, constitute _explicit_ context. Such context, if taken into account while learning, can potentially improve the performance of predictive models. Typically, the data from various sensors are present in the form of time series. Recurrent Neural Networks (RNNs) are preferred for such data as they can inherently handle temporal context. However, the conventional RNN models such as Elman RNN, Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) in their present form do not provide any mechanism to integrate explicit contexts. In this paper, we propose a Context Integrated RNN (CiRNN) that enables integrating explicit contexts represented in the form of contextual features. In CiRNN, the network weights are influenced by contextual features in such a way that the primary input features which are more relevant to a given context are given more importance. To show the efficacy of CiRNN, we selected an application domain, engine health prognostics, which captures data from various sensors and where contextual information is available. We used the NASA Turbofan Engine Degradation Simulation dataset for estimating Remaining Useful Life (RUL) as it provides contextual information. We compared CiRNN with baseline models as well as state-of-the-art methods. The experimental results show improvements of 39% and 87%, respectively, over state-of-the-art models, when performance is measured with RMSE and with the score from an asymmetric scoring function. The latter measure is specific to the task of RUL estimation.

_Keywords:_ Context dependent recurrent neural network · Context integrated recurrent neural network · Contextual Gated Recurrent Unit · Context sensitive recurrent neural network · Machine prognostics and health management · Predictive maintenance · Remaining useful life estimation

## 1 Introduction

The Internet of Things (IoT) has led to many 'smart' innovations such as smart homes, smart cities, smart factories, and smart agriculture, to name a few. The primary contributors to IoT-enabled smart creations are advanced sensor, communication, and computing technologies. The sensors play a crucial role, as IoT devices depend on them to acquire data from the monitored entities. The monitored entity can range from a person (for example, to obtain health parameters) and the environment (to obtain air, water, or soil parameters) to a large industrial setup (to obtain parameters associated with various processes or equipment). However, acquiring and storing data is not sufficient; the data need to be analysed and interpreted to extract knowledge, which can further be used for decision making and automation at various levels. Machine Learning (ML) offers a way to achieve this through predictive analytics. In particular, Deep Learning (DL) has gained a lot of attention due to the huge amount of data generated by smart environments (homes, cities, industries, etc.) [Chen and Lin, 2014], [Najafabadi et al., 2015]. Typically, the data received through sensors from such domains are time series data.
Recurrent Neural Networks (RNNs) are a preferable approach, as they inherently capture the temporal context available in time series or sequential data [Karim et al., 2019; Gers et al., 2001; Malhotra et al., 2015]. One of the early and widely known RNNs, the Elman network [Elman, 1990], introduced the notion of context neurons that function as the network's memory and allow implicitly representing the temporal context of the input data. The input layer is augmented with a context layer, whose units are initialized to a specific value. At a particular time step \(t\), the hidden units are activated by both the input layer units and the context layer units. The hidden units, with feed-forward connections, in turn activate the output layer units. The hidden units also activate the context units through feedback connections with unit weight. Thus, at the next time step \(t+1\), the context units hold the hidden unit values of the previous time step \(t\). The hidden unit pattern is the context that is saved in the context units. The hidden units map both the input and the previous state to the desired output. Thus, the internal representations developed in the process are sensitive to the temporal context [Elman, 1990].

In this paper, we treat the notion of context in a different way: the focus is not on the temporal context or on contexts generated within the network from input and/or output signals, as various forms of RNNs already capture those elegantly. The notions of context, context-aware, context-dependent, and context-sensitive are widely used across various domains, and several definitions of these terms, particularly of the term context, have appeared in the literature. During the early 1990s, these concepts gained huge popularity within the research community of pervasive and ubiquitous computing, which evolved from mobile and distributed computing. As defined by Abowd et al. [Abowd et al., 1999], context is any information that can be used to characterise the situation of an entity, where an entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and the application themselves. From a machine learning perspective, we are concerned with context that is available as data from the problem domain but is not necessarily used independently for learning. The context data may influence the decision by improving the model but may not be directly involved in learning. For example, in medical diagnosis, the patient's gender, age, and weight are often available and can be considered contextual data. For clarity, we adopt the distinction between _primary_ features and _contextual_ features as described in [Turney, 1996]. Primary features are useful for the machine learning task (classification/regression) when considered in isolation, irrespective of other features. Contextual features are not useful in isolation but can be useful when combined with primary features. For example, for a gas turbine engine diagnosis problem, thrust, temperature, and pressure are primary features, whereas weather conditions such as ambient temperature and humidity can be considered contextual features.
On the other hand, implicit context is present in the primary data itself. For example, in image classification, many approaches consider the neighbouring pixels as context while processing a particular pixel. Similarly, in Natural Language Processing (NLP) tasks, the neighbouring words can be considered as context. Finally, the temporal sequence of the instances in a data set is another example of implicit context. For simplicity, in the rest of the paper, we refer to 'explicit context' simply as 'context'.

In several problem domains, data from the operating environment is often captured along with data from the monitored system (or entity); in other words, contextual information is available. Yet, during learning, the focus is on the internal state data of the monitored entity, and the external data from the operating environment is not fully considered or is even ignored. As mentioned earlier, RNNs (including Long Short-Term Memory, LSTM, and Gated Recurrent Unit, GRU) can handle the temporal context very well; however, there is no explicit way of leveraging the contextual information. Usually, the feature space comprising the primary features is expanded with contextual features, and the models treat them in the same manner as primary features. In this paper, we propose an approach that integrates explicit contexts into an RNN and enhances its capabilities. We refer to the resulting RNN as Context Integrated RNN (CiRNN), which takes into account implicit temporal context as well as explicit context [Dutta Baruah and Munoz Organero, 2023, accepted]. In the proposed approach, the contextual features are used to weight the primary features such that features that are more relevant in a given context are assigned more importance. This is also termed _contextual weighting_ [Turney, 1996]. This work draws inspiration from the work of Ciskowski and Rafajlowicz [Ciskowski and Rafajlowicz, 2004], who introduced Context-Dependent Feed-Forward Neural Networks. We demonstrate the efficacy of CiRNN by applying it to the domain of predictive maintenance, where contextual information is available through various sensors.

The rest of the paper is organised as follows. In the next section, we briefly discuss methods where context is added to RNNs in various ways. In Section 3, we discuss the architecture and learning of the proposed context integrated RNN with GRU. Section 4 discusses the experiments and the results achieved. Finally, Section 5 concludes the paper.

## 2 Related Work

In the past decade, many researchers have considered integrating context into RNNs. These methods largely appear in the literature on NLP and recommender systems. In this section, we briefly discuss various RNN architectures that leverage context information for better performance in a given task. Mikolov and Zweig (2012) proposed a context dependent recurrent neural network language model (CDRNNLM) that extends the basic recurrent neural network language model (RNNLM) by introducing context. The context is a real-valued vector computed from the sentence history using Latent Dirichlet Allocation (LDA). The architecture introduces a feature layer in addition to the input, output, and hidden layers. The feature layer, with associated weights, is connected to both the hidden and output layers. It provides the context, i.e., information complementary to the input word vector, for example, features representing topic information. The conventional approach is to partition the data and to develop multiple topic-specific models.
However, CDRNNLM attains a topic-conditioned RNNLM that avoids the data partitioning. The authors demonstrated that CDRNNLM improves performance over state-of-the-art methods on benchmark datasets. In (Wang and Cho, 2016), a method is proposed that incorporates corpus-level discourse information into language modelling; the model is referred to as a larger-context language model. The effect of context is modeled by a conditional probability in which the probability of the next words in a given sentence is determined using the previous words in the same sentence and a context that consists of one or more preceding sentences of arbitrary length. To realize this, a conditional LSTM is proposed, implemented in two ways: early fusion and late fusion. In early fusion, the context vector representation of the preceding sentences is simply treated as an input at every time step. Late fusion tries to maintain separately the dependencies within the sentence being modelled (intra-sentence dependencies) and those between the preceding sentences and the current sentence (inter-sentence dependencies). The existing LSTM memory cell \(c_{t}\) models the intra-sentence dependency; the inter-sentence dependency is modeled through the interaction between \(c_{t}\) and the context vector. First, a controlled context vector \(r_{t}\) is determined using \(c_{t}\) and the context vector; \(r_{t}\) represents the amount of influence of the context, i.e., the preceding sentences. Finally, the controlled context vector is used to compute the output of the LSTM layer. The experimental results showed that late fusion outperformed early fusion. A language model that uses a modified version of the standard bidirectional LSTM, called contextual bidirectional LSTM (cBLSTM), is proposed in (Mousa and Schuller, 2017) for sentiment analysis. cBLSTM predicts a word given the full left and right context while excluding the predicted word itself from the conditional dependence. This is achieved by introducing a forward and a backward sub-layer. The encoded input consists of the sentence start symbol and all the words before the sentence end symbol; it is given to the forward sub-layer, and the forward hidden states predict the words between the start and end symbols. The backward sub-layer performs the reverse operation. When the two sub-layers are interleaved, any target word can be predicted using the full left and right contexts. The results obtained on benchmark datasets show that the cBLSTM language model outperforms both LSTM- and BLSTM-based models. In (Smirnova and Vasile, 2017), a family of Contextual RNNs (CRNNs) is presented that leverages contextual information for item sequence modeling for next-item prediction in a recommender system. Apart from the order of items, the model considers, as context, other available information about the user-item interaction such as the type of interaction, the time gaps between events, and the time of interaction. The two proposed CRNN architectures are compared with non-contextual models, and the results show that CRNNs significantly outperform all of the other methods on two e-commerce datasets. Tran et al. (Tran et al., 2018) proposed a Context-dependent Additive Recurrent Neural Network (CARNN) for the problem of sequence mapping in NLP. The learning process considers the given text as the primary source of information, for example, the history of utterances in a dialog system or the sequence of words in language modeling.
The previous utterance in a dialog system or the discourse information in language modeling is taken as context. The CARNN units consist of two gates (update gate and reset gate) that regulate the information from the input. The gates are functions of the global context, the word embedding (input), and the previous hidden vector. In (Wenke and Fleming, 2019), an RNN architecture named Contextual RNN is discussed, where contextual information is used to parameterize the initial hidden state. Typically, the hidden state of an RNN is initialised to a default constant, most commonly zero. In that work, the initial hidden state is instead conditioned on contextual information, which can be acquired from the input sequence or from a coarse image with multiple objects related to the task. The authors show that a Contextual RNN initialised with contextual information from the input sequence improves the performance on an associative retrieval task compared to the baseline with zero initialisation.

Next, we consider recent literature in various other domains where sensory data is used as context for RNN models. In [1], an LSTM model is used to predict appliance energy consumption. The energy usage of home appliances is highly dependent on weather conditions. In this work, two different datasets are combined with the aim of introducing contextual features into the LSTM model. The contextual features are obtained from the first dataset, which consists of house temperature and humidity measures over a period of 4.5 months; the second dataset provides measurements of individual household electric power consumption over a period of 47 months. The work focused on identifying the best predictors for the prediction task and did not attempt to adapt the model to incorporate contextual information. Lee and Hauskrecht [11] proposed an LSTM-based model for predicting event occurrences in clinical event time series, with events such as medication orders, lab tests and their results, or physiological signals. The proposed model considers the usual abstracted information from the past through the hidden states, as well as recent event occurrence information. The recent event at the current time step is represented as a multi-hot vector, which is linearly transformed and added to the LSTM model; the output thus depends on both the hidden state and the current event information. The authors demonstrated, through experimental results on the MIMIC-III clinical event data [Johnson et al., 2016], that combining recent event information with the summarised past events in the LSTM hidden states improves the prediction over models that do not incorporate the former information. In [1], the task of long-term traffic flow prediction (traffic flow over a long interval) is performed using an RNN that considers inputs from multiple data sources. The RNN uses flow information, temporal information, and contextual information. The type of day serves as the contextual feature, for example, weekend versus regular day and event versus no-event day (celebration days). Binary values are used for the representation: 0 for weekend and event days, and 1 for regular and non-event days. In this work, too, no emphasis is placed on contextual learning; the input feature vector is simply extended to accommodate the contextual features. Ma et al. [10] also proposed a contextual convolutional RNN for long-term (daily) traffic flow forecasting.
The traffic flow data is represented in a form similar to a sequence of square images, which is given as input to a CNN. The CNN is used to extract the inter- and intra-day traffic patterns, and its output is given as input to an LSTM network. Four contextual features are used: day of the week, season, weather, and holiday, represented with one-hot encodings. Finally, the contextual features and the output from the LSTM are concatenated and given as input to a linear dense layer for the prediction of traffic flow. The performance of the model was evaluated on real data from the city of Seattle; the results show that the proposed model provides better predictions, particularly during high-demand periods. In [12], a Feature-injected RNN (Fi-RNN) is presented to predict short-term traffic speed. The model consists of an RNN, an autoencoder, and a feedforward neural network. The historical traffic flow data is given as input to the RNN (LSTM) to learn the sequential pattern in terms of sequential features. The contextual data is used by a sparse autoencoder to obtain an expanded set of contextual features. The expanded contextual features are restricted to [0,1] using a softmax function, and these values are used to weight the features from the LSTM (its hidden states). Finally, the weighted feature vectors are used as input to the feedforward neural network to learn traffic patterns and predict the speed. The experimental results on two real datasets show that Fi-RNN, which incorporates contextual features, achieves improved accuracy.

To summarize, all the RNN models discussed in this section obtained improved results by adding contextual information to the model. In these approaches, the context information has been added to the RNN model primarily in three ways. Let us denote the primary feature vector at time \(t\) as \(\mathbf{x}_{t}\in\mathbb{R}^{(n_{x}\times 1)}\) and the contextual feature vector as \(\mathbf{z}_{t}\in\mathbb{R}^{(n_{z}\times 1)}\). Firstly, most of the approaches integrate context with the primary input to obtain an extended input feature vector, \(\mathbf{x}^{\prime}_{t}=[\mathbf{x}_{t},\mathbf{z}_{t}]\) with \(\mathbf{x}^{\prime}_{t}\in\mathbb{R}^{(n_{x}+n_{z})\times 1}\). Sometimes the new input vector is obtained multiplicatively as \(\mathbf{x}^{\prime}_{t}=\mathbf{x}_{t}\odot\mathbf{z}_{t}\); note that in this case either the context features or both the primary and context features are transformed so that they have the same dimension. In the second manner, context is introduced additively and provided as a separate input to the hidden layer, the output layer, or both: the hidden state is computed as \(f(\mathbf{W}\mathbf{x}_{t}+\mathbf{U}\mathbf{h}_{t-1}+\mathbf{V}\mathbf{z}_{t})\), and when context is added to the output, it is computed as \(g(\mathbf{U}\mathbf{h}_{t}+\mathbf{V}\mathbf{z}_{t})\), where \(\mathbf{W},\mathbf{U},\mathbf{V}\) are the associated weights and \(f,g\) are activation functions. Finally, the context can be integrated multiplicatively in the hidden unit as \(f(\mathbf{W}\mathbf{x}_{t}\odot\mathbf{V}\mathbf{z}_{t}+\mathbf{U}\mathbf{h}_{t-1}\odot\mathbf{V}\mathbf{z}_{t})\) and in the output as \(g(\mathbf{U}\mathbf{h}_{t}\odot\mathbf{V}\mathbf{z}_{t})\). A compact sketch of these three patterns is given below.
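The following numpy sketch illustrates the three integration patterns just summarized; all dimensions and weight values are arbitrary placeholders chosen only for illustration:

```python
import numpy as np

n_x, n_z, n_h = 6, 3, 8
x, z, h_prev = np.random.rand(n_x), np.random.rand(n_z), np.random.rand(n_h)
W_ext = np.random.rand(n_h, n_x + n_z)   # weights for the extended input
W = np.random.rand(n_h, n_x)             # input weights
U = np.random.rand(n_h, n_h)             # recurrent weights
V = np.random.rand(n_h, n_z)             # context weights

# (1) Contextual expansion: context appended to the primary input vector.
h1 = np.tanh(W_ext @ np.concatenate([x, z]) + U @ h_prev)

# (2) Additive: context enters the hidden state as a separate input.
h2 = np.tanh(W @ x + U @ h_prev + V @ z)

# (3) Multiplicative: the projected context gates both the input term
#     and the recurrent term.
g = V @ z                                # context projected to hidden size
h3 = np.tanh((W @ x) * g + (U @ h_prev) * g)
```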
As mentioned briefly in Section 1, in contrast to the above approaches, in this paper we integrate context such that the weights associated with the primary input vector depend on the context. Accordingly, the hidden state is given as \(f(\mathbf{W}(\mathbf{z}_{t})\mathbf{x}_{t}+\mathbf{U}\mathbf{h}_{t-1})\), while the output unit computation remains the same as in an RNN. This is discussed in more detail in the next section.

## 3 Proposed Approach

In this section, we describe the proposed Context Integrated Recurrent Neural Network (CiRNN) model, which uses GRUs as the basic units of the network. We first give the notations, then briefly discuss the GRU to relate it to CiRNN, and finally present the details of CiRNN.

### Notations

The following notations are used to describe CiRNN:

- \(n_{x}\): primary input dimension
- \(n_{h}\): number of hidden units
- \(n_{y}\): output dimension
- \(n_{z}\): context input dimension
- \(T_{x}\): input sequence length
- \(T_{y}\): output sequence length (for a many-to-one RNN architecture, \(T_{y}=1\))
- \(\mathbf{x}_{t}\in\Re^{(n_{x}\times 1)}\): input vector at time step \(t\)
- \(\mathbf{z}_{t}\in\Re^{(n_{z}\times 1)}\): context vector at time step \(t\)
- \(\mathbf{y}_{t}\in\Re^{(n_{y}\times 1)}\): output at time step \(t\)
- \(\hat{\mathbf{y}}_{t}\in\Re^{(n_{y}\times 1)}\): estimated output at time step \(t\)
- \(\mathbf{h}_{t}\in\Re^{(n_{h}\times 1)}\): hidden unit activation at time step \(t\)
- \(\tilde{\mathbf{h}}_{t}\in\Re^{(n_{h}\times 1)}\): candidate activation at time step \(t\)
- \(\mathbf{s}_{t}\in\Re^{(n_{h}\times 1)}\): set/update gate at time step \(t\)
- \(\mathbf{r}_{t}\in\Re^{(n_{h}\times 1)}\): reset gate at time step \(t\)
- \(\mathbf{W}^{s},\mathbf{W}^{h},\mathbf{W}^{r}\in\Re^{(n_{h}\times n_{x})}\): input weights of the set gate, hidden unit, and reset gate
- \(\mathbf{U}^{s},\mathbf{U}^{h},\mathbf{U}^{r}\in\Re^{(n_{h}\times n_{h})}\): recurrent weights of the set gate, hidden unit, and reset gate
- \(\mathbf{V}\in\Re^{(n_{y}\times n_{h})}\): hidden-to-output weights
- \(\mathbf{b}_{y}\in\Re^{(n_{y}\times 1)}\): bias of the output unit

### Gated Recurrent Units

A GRU (Cho et al., 2014) is a unit, or cell, used as a hidden unit in RNNs. It can adaptively remember and forget information in the network during the learning phase. The primary motivation of the GRU is the same as that of the LSTM: to model long-term sequences while avoiding the vanishing gradient problem of RNNs. However, compared to the LSTM, the GRU achieves this with fewer parameters and fewer operations. The GRU comprises two gates, the reset and update gates, that control the flow of information. For time step \(t\), the GRU computes the output \(\hat{\mathbf{y}}_{t}\) using the input \(\mathbf{x}_{t}\) and the hidden state as follows:

\[\begin{split}\hat{\mathbf{y}}_{t}&=f(\mathbf{V}\mathbf{h}_{t}+\mathbf{b}_{y})\\ \mathbf{h}_{t}&=\mathbf{s}_{t}\odot\mathbf{h}_{t-1}+(1-\mathbf{s}_{t})\odot\tilde{\mathbf{h}}_{t}\\ \tilde{\mathbf{h}}_{t}&=\tanh(\mathbf{W}^{h}\mathbf{x}_{t}+\mathbf{U}^{h}(\mathbf{r}_{t}\odot\mathbf{h}_{t-1}))\\ \mathbf{s}_{t}&=\sigma(\mathbf{W}^{s}\mathbf{x}_{t}+\mathbf{U}^{s}\mathbf{h}_{t-1})\\ \mathbf{r}_{t}&=\sigma(\mathbf{W}^{r}\mathbf{x}_{t}+\mathbf{U}^{r}\mathbf{h}_{t-1})\end{split}\tag{2}\]

The reset gate functions very similarly to the forget gate of the LSTM cell. It is a linear combination of the input at time step \(t\) and the hidden state at the previous time step \(t-1\).
The reset gate resets the hidden state with the current input and ignores the information from the previous hidden state when its value is close to 0; it controls how much information from the previous hidden state is removed. The update gate, on the other hand, controls how much information from the previous hidden state is retained.

### Context Integrated RNN

The Context Integrated RNN is an RNN with GRUs as hidden units. For clarity of presentation, Figure 1 compares the basic architecture of an RNN with CiRNN. The main difference between the two architectures is that in CiRNN, the input-to-hidden weights depend on the context variables.

Figure 1: Basic architecture of RNN vs. CiRNN. The connection weights between the layers are represented by \(\mathbf{W}\). In CiRNN, the input-to-hidden connection weights (\(\mathbf{W}_{hx}\)) depend on the context \(\mathbf{z}\) [Dutta Baruah and Munoz Organero, 2023, accepted].

When each of the hidden units \(\mathbf{h}_{1},\mathbf{h}_{2},\ldots,\mathbf{h}_{n_{h}}\) in Figure 1 is a GRU, the parameters \(\mathbf{W}^{s},\mathbf{W}^{r},\mathbf{W}^{h}\) depend on the context variables \(\mathbf{z}\). The output computation in a single CiRNN unit is the same as in a GRU; however, the candidate hidden state, the set/update gate, and the reset gate are computed differently, since they involve weights that depend on the context \(\mathbf{z}_{t}\), as shown below:

\[\begin{split}\hat{\mathbf{y}}_{t}&=f(\mathbf{V}\mathbf{h}_{t}+\mathbf{b}_{y})\\ \mathbf{h}_{t}&=\mathbf{s}_{t}\odot\mathbf{h}_{t-1}+(1-\mathbf{s}_{t})\odot\tilde{\mathbf{h}}_{t}\\ \tilde{\mathbf{h}}_{t}&=\tanh(\mathbf{W}^{h}(\mathbf{z}_{t})\mathbf{x}_{t}+\mathbf{U}^{h}(\mathbf{r}_{t}\odot\mathbf{h}_{t-1}))\\ \mathbf{s}_{t}&=\sigma(\mathbf{W}^{s}(\mathbf{z}_{t})\mathbf{x}_{t}+\mathbf{U}^{s}\mathbf{h}_{t-1})\\ \mathbf{r}_{t}&=\sigma(\mathbf{W}^{r}(\mathbf{z}_{t})\mathbf{x}_{t}+\mathbf{U}^{r}\mathbf{h}_{t-1})\end{split}\tag{3}\]

In equation (3), the weights associated with the input \(\mathbf{x}_{t}\) depend on the vector of contextual variables \(\mathbf{z}_{t}\). Let us consider one such parameter, \(\mathbf{W}^{s}\); note that \(\mathbf{W}^{h}(\mathbf{z}_{t})\) and \(\mathbf{W}^{r}(\mathbf{z}_{t})\) can be expressed in the same way. The matrix \(\mathbf{W}^{s}(\mathbf{z}_{t})\) is of dimension \(n_{h}\times n_{x}\), and its components are given by:

\[\mathbf{W}^{s}(\mathbf{z}_{t})=\begin{bmatrix}w^{s}_{11}(\mathbf{z}_{t})&w^{s}_{12}(\mathbf{z}_{t})&\cdots&w^{s}_{1n_{x}}(\mathbf{z}_{t})\\ w^{s}_{21}(\mathbf{z}_{t})&w^{s}_{22}(\mathbf{z}_{t})&\cdots&w^{s}_{2n_{x}}(\mathbf{z}_{t})\\ \vdots&\vdots&\ddots&\vdots\\ w^{s}_{n_{h}1}(\mathbf{z}_{t})&w^{s}_{n_{h}2}(\mathbf{z}_{t})&\cdots&w^{s}_{n_{h}n_{x}}(\mathbf{z}_{t})\end{bmatrix}\tag{4}\]

where each element of the matrix can be expressed as:

\[\begin{split}w^{s}_{ki}&=\mathbf{A}^{s}_{ki}\mathbf{G}(\mathbf{z}_{t}),\quad k=1,\ldots,n_{h},\quad i=1,\ldots,n_{x}\\ \mathbf{A}^{s}_{ki}&=[a^{s}_{ki1},a^{s}_{ki2},\cdots,a^{s}_{kim}]\end{split}\tag{5}\]

In (5), \(\mathbf{G}(\mathbf{z}_{t})=[g_{1}(\mathbf{z}_{t}),g_{2}(\mathbf{z}_{t}),\ldots,g_{m}(\mathbf{z}_{t})]^{T}\) is a vector of basis functions that can be chosen at design time, and \(\mathbf{A}^{s}_{ki}\) is a vector of coefficients that specifies the dependence of the weights on the context variables [Ciskowski and Rafajlowicz, 2004].
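Anticipating the Kronecker-product form \(\mathbf{A}(\mathbf{x}_{t}\otimes\mathbf{G}(\mathbf{z}_{t}))\) derived in the equations below, the following is a minimal numpy sketch of this context-dependent weighting. The degree-2 polynomial basis is an assumed concrete choice (the paper leaves \(g_{1},\ldots,g_{m}\) as a design decision), and the helper names are ours:

```python
import numpy as np

def poly_basis(z):
    """Degree-2 polynomial basis G(z): constant, linear, and quadratic terms
    (an assumed choice; any basis functions g_1..g_m could be used)."""
    quad = np.outer(z, z)[np.triu_indices(z.size)]   # z_i * z_j for i <= j
    return np.concatenate([np.ones(1), z, quad])

def context_weighted_product(A, x, z):
    """Computes W(z) x as A (x ⊗ G(z)); each weight w_ki = A_ki · G(z)."""
    return A @ np.kron(x, poly_basis(z))

n_x, n_z, n_h = 6, 3, 15
m = 1 + n_z + n_z * (n_z + 1) // 2     # 10 basis terms for degree 2, n_z = 3
A_s = np.random.rand(n_h, n_x * m)     # coefficient matrix A^s
U_s = np.random.rand(n_h, n_h)
x, z, h_prev = np.random.rand(n_x), np.random.rand(n_z), np.random.rand(n_h)

# Update (set) gate of a CiRNN cell; the reset gate and the candidate state
# follow the same pattern with A^r, U^r and A^h, U^h.
s_t = 1.0 / (1.0 + np.exp(-(context_weighted_product(A_s, x, z) + U_s @ h_prev)))
```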
We can define a matrix \(\mathbf{A}^{s}\) in which each row \(\mathbf{A}^{s}_{k}\) is formed by concatenating the coefficient vectors \(\mathbf{A}^{s}_{ki}\), as shown below; \(\mathbf{A}^{s}\) is therefore of dimension \((n_{h}\times n_{x}m)\):

\[\mathbf{A}^{s}_{k}=[\mathbf{A}^{s}_{k1},\mathbf{A}^{s}_{k2},\cdots,\mathbf{A}^{s}_{kn_{x}}],\quad k=1,2,\ldots,n_{h}\tag{6}\]

Using \(\mathbf{A}^{s}\), and similarly \(\mathbf{A}^{h}\) and \(\mathbf{A}^{r}\), the candidate hidden state \(\tilde{\mathbf{h}}_{t}\), the update (set) gate \(\mathbf{s}_{t}\), and the reset gate \(\mathbf{r}_{t}\) in equation (3) can be expressed as:

\[\begin{split}\tilde{\mathbf{h}}_{t}&=\tanh[\mathbf{A}^{h}(\mathbf{x}_{t}\otimes\mathbf{G}(\mathbf{z}_{t}))+\mathbf{U}^{h}(\mathbf{r}_{t}\odot\mathbf{h}_{t-1})]\\ \mathbf{s}_{t}&=\sigma[\mathbf{A}^{s}(\mathbf{x}_{t}\otimes\mathbf{G}(\mathbf{z}_{t}))+\mathbf{U}^{s}\mathbf{h}_{t-1}]\\ \mathbf{r}_{t}&=\sigma[\mathbf{A}^{r}(\mathbf{x}_{t}\otimes\mathbf{G}(\mathbf{z}_{t}))+\mathbf{U}^{r}\mathbf{h}_{t-1}]\end{split}\tag{7}\]

In (7), \(\otimes\) denotes the Kronecker product of matrices. Learning each of the weights \(w^{s}_{ki}\) requires learning the coefficient vector \(\mathbf{A}^{s}_{ki}\) with \(m\) elements. The parameter learning process in CiRNN is similar to that of RNNs: it requires defining a loss function and optimizing the parameters with a suitable algorithm such as stochastic gradient descent (SGD) or Adam [Kingma and Ba, 2014], with Backpropagation Through Time (BPTT or truncated BPTT). (The gradient calculation for the L2 loss function is provided in Appendix A.)

## 4 Experiments and Results

In this section, we first describe the evaluation task and the dataset, and then discuss the experiments and the results achieved with the proposed model. The results of CiRNN are compared with baseline models and also with state-of-the-art methods from the existing literature.

### Predictive Maintenance

For evaluating the efficacy of CiRNN, we considered the task of predictive maintenance. Predictive maintenance (PdM), also referred to as Prognostics and Health Management (PHM), has become more relevant with the arrival of Industry 4.0 in the context of smart manufacturing and industrial big data. Through PdM, various industries aim to achieve near-zero failures, downtime, accidents, and unscheduled maintenance, thereby saving a considerable amount of cost and also improving operational safety [Zhang et al., 2019]. PdM primarily comprises diagnostic and prognostic mechanisms. The former deals with fault detection, isolation, and identification; the latter attends to degradation monitoring and failure prediction. Compared to diagnostics, prognostics is more effective for achieving zero-downtime performance [Shin and Jun, 2015]. Prediction of Remaining Useful Life (RUL) is one of the crucial tasks in prognostics. RUL, also called remaining service life, residual life, or remnant life, refers to the time left before observing a failure, given the current machine age and condition and the past operation profile [Jardine et al., 2006]. A reliable estimation of RUL enables stakeholders to attain the benefits of PdM. In recent years, industrial cyber-physical systems have emerged as a key technology for reliable data acquisition in real time.
Such systems have made huge amounts of data available, which in turn has led to a growing interest among researchers in applying machine learning, and in particular deep learning, to industrial applications. Reviews of deep neural networks applied to PHM are available in [Rezaeianjouybari and Shang, 2020] and [Zhang et al., 2019], and the state-of-the-art deep learning approaches for RUL prediction are discussed in [Wang et al., 2020]. Here, we briefly review some recent approaches for RUL prediction that employ the C-MAPSS dataset, as our work is evaluated on the same dataset; more details on the dataset are provided in the next section.

In [Heimes, 2008], one of the early works using a basic RNN for RUL estimation of turbofan engines is presented. It is based on an RNN trained with backpropagation through time using an Extended Kalman Filter. This work placed second in the IEEE 2008 Prognostics and Health Management conference challenge problem [Jia et al., 2018]. In [Yuan et al., 2016] and [Wu et al., 2018], a vanilla LSTM is used for the prediction of RUL; the authors showed that the LSTM could perform better than basic RNNs, GRUs, and Adaboost-LSTM. In [Zheng et al., 2017], a deep LSTM with a feedforward neural network is used to estimate the RUL. The six operating conditions of the engine are encoded as a one-hot vector and used while training the model. The results showed that the deep LSTM model with a feedforward network performed better than Multi-Layer Perceptron, Support Vector Regression, Relevance Vector Regression, and a deep Convolutional Neural Network (CNN). Li et al. [Li et al., 2018] applied a sequence of 1D convolutions without pooling operations for feature extraction in RUL prediction; the model achieved good results without incurring the high training time of recurrent models. Listou Ellefsen et al. (Listou Ellefsen et al., 2019) used a Restricted Boltzmann Machine (RBM) to investigate the effect of unsupervised pre-training on RUL predictions: the RBM is first used to obtain initial weights from unlabeled data, and fine-tuning is then performed in an LSTM network with labeled data. Further, a Genetic Algorithm (GA) approach is applied to tune the hyper-parameters during training. The model proposed in (Zhao et al., 2019) emphasizes trend features, which are extracted using the Complete Ensemble Empirical Mode Decomposition (CEEMD) approach followed by a reconstruction procedure; these features are used to train an LSTM for RUL prediction. A global attention mechanism is used with an LSTM network in (da Costa et al., 2019). It learns the input-to-RUL mapping directly from the raw sensor data and requires neither feature engineering nor unsupervised pre-training; moreover, the attention mechanism allows visualising the learned attention weights at each step of the RUL prediction.

The existing approaches discussed here consider the C-MAPSS dataset, which contains three operational setting parameters providing information about different flight operating conditions, i.e., contextual information. These approaches do not explicitly utilize this information during the learning process: they either exclude it and use only the sensor data during training (Zhao et al., 2019), or include it as primary features (Yuan et al., 2016), (da Costa et al., 2019).
### Dataset Description

The NASA Turbofan Engine Degradation Simulation dataset (Saxena et al., 2008) is a widely used benchmark for RUL prediction. The dataset is generated using the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) tool. It consists of four distinct datasets (FD001, FD002, FD003, and FD004) that contain information from 21 sensors (such as the total temperature at the fan inlet or at the Low Pressure Compressor outlet), 3 operational settings (flight altitude, Mach number, and throttle resolver angle), the engine identification number, and the cycles of each engine. The degradation in engine performance is captured by the sensor data, and the engine performance is also substantially influenced by the three operational settings. FD001 has one operating condition and one failure mode; FD002 has six operating conditions and one failure mode; FD003 has one operating condition and two failure modes; and FD004 has six operating conditions and two failure modes. The details are provided in Table 1.

Table 1: Specification of the C-MAPSS datasets

| Specification | FD001 | FD002 | FD003 | FD004 |
|---|---|---|---|---|
| Engine units (training) | 100 | 260 | 100 | 249 |
| Engine units (testing) | 100 | 259 | 100 | 248 |
| Operating conditions | 1 | 6 | 1 | 6 |
| Fault modes | 1 | 1 | 2 | 2 |

For our approach, both FD002 and FD004 are suitable, as their six operating conditions provide the contextual information. However, for the experiments we also consider FD001 and FD003 to evaluate the performance when only one context is available (a single operating condition). The engines start with different degrees of initial wear and manufacturing variation, which is not known to the user and is not considered a fault condition. Each of the datasets consists of separate training and test sets. In the training set, the sensor data is captured until the system fails, whereas in the test set it is captured up to a certain time prior to the failure; the test sets also provide the true Remaining Useful Life (RUL) values. The objective is to predict the number of remaining operational cycles before failure, as provided in the test set.

### Data Preprocessing

For each dataset, univariate and bivariate analyses of the data from the 21 sensors are performed. Each sensor data series is analyzed for its trend with respect to the time cycles. Further, for each dataset, sensors are selected based on scatter plot observations and correlation analysis. The selected sensors and operational settings are provided in Table 2. The sensor values are used as primary features and the operational settings as contextual features while learning the models. The RUL targets for training have to be constructed, as the true RUL values are not provided in the training data. We adopted a piece-wise linear degradation model to obtain the RUL values for the training data, as done in (Listou Ellefsen et al., 2019) and (da Costa et al., 2019). The degradation model assumes that, after an initial period with constant RUL values (when the engine initially operates in normal conditions) (Heimes, 2008), the RUL decreases linearly with the number of time cycles. For our experiments, the initial constant RUL is set to 125 cycles so that the results can be compared with existing works. Figure 2 shows the RUL for engine unit 2 of the FD002 training data.
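A minimal sketch of this piece-wise linear labeling scheme (the function name is ours):

```python
import numpy as np

def piecewise_rul(n_cycles, rul_max=125):
    """RUL target for one engine unit: constant at `rul_max` early in life,
    then decreasing linearly to 0 at the last observed (failure) cycle."""
    rul = np.arange(n_cycles - 1, -1, -1)   # linear count-down to failure
    return np.minimum(rul, rul_max)         # clip the early, healthy phase

# An engine observed for 300 cycles: cycles 0..174 get RUL = 125, after
# which the label decreases linearly to 0 at the final cycle.
labels = piecewise_rul(300)
print(labels[:3], labels[-3:])              # [125 125 125] [2 1 0]
```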
For FD001 and FD003 (with only one operational condition), we performed min-max normalisation on both the primary and contextual inputs, and also on the target. For FD002 and FD004 (with six operational conditions), we performed contextual normalisation [Turney, 1996] in addition to min-max normalisation (excluding the target). For contextual normalisation, the data is clustered into the 6 operational regimes using k-means clustering and then normalised using the cluster statistics (mean and range). Finally, data smoothing is performed using moving averages with a window size of 3.

Table 2: Selected sensors and operational settings

| Dataset | Sensors | Operational Settings |
|---|---|---|
| FD001 | s2, s3, s4, s7, s8, s9, s11, s12, s13, s15, s17, s20, s21 | OS1, OS2 |
| FD002 | s1, s2, s8, s13, s14, s19 | OS1, OS2, OS3 |
| FD003 | s2, s3, s4, s7, s8, s9, s11, s15, s17, s20, s21 | OS1, OS2 |
| FD004 | s2, s8, s14, s16 | OS1, OS2, OS3 |

Figure 2: RUL of engine unit 1 (FD002 training data)

### Performance Metrics

Two metrics are used to measure the performance of the proposed model. The first metric is the RMSE; the second is a score computed from an asymmetric scoring function, as proposed by Saxena et al. [Saxena et al., 2008], given as:

\[s=\begin{cases}\sum_{i=1}^{n}\left(e^{-\frac{D_{i}}{a_{1}}}-1\right),&\text{if }D_{i}<0\\ \sum_{i=1}^{n}\left(e^{\frac{D_{i}}{a_{2}}}-1\right),&\text{if }D_{i}\geq 0\end{cases}\tag{8}\]

where \(a_{1}=13\), \(a_{2}=10\), \(D_{i}=R\hat{U}L_{i}-RUL_{i}\) is the difference between the predicted and actual RUL values, and \(n\) is the number of samples in the test data. The scoring metric penalizes late predictions (positive errors) more than early predictions (negative errors). In either case, the penalty increases exponentially with the error.

### Training and Validation

To train the models, the training and validation data are prepared in the following way. From each dataset, for each engine unit, the last \(k\) samples are selected for validation and the remaining samples are used for training. Note that \(k\) must be a multiple of the sequence length used while training the models. As a result, the validation set contains samples from each engine unit, as in the test set. Table 3 shows the hyperparameters used for tuning the models. The number of layers is fixed to one, due to the computational overhead of the custom-built CiRNN code. For the contextual inputs, polynomial basis functions of degree 2 are used. We considered the GRU-RNN as a baseline, i.e., an RNN consisting of GRUs but without contextual weighting. Both the baseline and CiRNN are trained in various configurations; for example, a GRU-RNN with contextual normalisation, or one in which the context features are added and treated as primary features. The latter is also referred to as contextual expansion (Turney, 1996). Table 4 shows the various models, and Table 5 presents the best hyperparameter values for each model obtained after tuning. In the next subsection, we discuss the results obtained with these models.

### Results

Table 6 summarises the results obtained with CiRNN and the baseline models on the test data. Testing is performed for each engine unit separately, and the average RMSE and average score with standard deviation are reported.
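For reference, a minimal sketch of the two reported metrics (function names are ours):

```python
import numpy as np

def rmse(rul_pred, rul_true):
    """Root mean squared error over all test samples."""
    return float(np.sqrt(np.mean((rul_pred - rul_true) ** 2)))

def phm_score(rul_pred, rul_true, a1=13.0, a2=10.0):
    """Asymmetric score of Eq. (8): late predictions (D >= 0) are penalized
    more heavily than early ones (D < 0), exponentially in the error."""
    d = rul_pred - rul_true
    return float(np.sum(np.where(d < 0, np.exp(-d / a1), np.exp(d / a2)) - 1.0))

# The asymmetry in numbers: a 13-cycle late prediction costs e^{1.3} - 1 ≈ 2.67,
# while a 13-cycle early prediction costs only e^{1.0} - 1 ≈ 1.72.
print(phm_score(np.array([113.0]), np.array([100.0])))   # ≈ 2.67
print(phm_score(np.array([87.0]),  np.array([100.0])))   # ≈ 1.72
```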
For the FD001 dataset, which has only one operational condition, the RNN with GRU model where context features are added and treated as primary features (GRU_D1_CxF) performed better than the other two models. The percentage increase in RMSE for CiRNN_D1 in comparison to GRU_D1_CxF is around 7%. For all the remaining datasets (FD002, FD003, FD004), CiRNN performed better than or on par with the baseline models. It can be observed that contextual normalisation along with contextual weighting through CiRNN, particularly on the datasets FD002 and FD004, has noticeably improved the performance. Further, it is to be noted that the distribution of the score metric is skewed. Here, we provide the distributions of the RMSE and score values for FD002 and FD004 with CiRNN in Figure 3. It is apparent from the figures that the count of engines having high scores (more than 1000) is very low. Figure 4 shows the predicted versus actual RUL values for the first 1200 samples of the FD004 test data, and in Figure 5 the predicted and actual RUL values of two randomly selected engine units from the FD001 and FD004 test data are shown. It can be noticed from Figure 4 that for longer running times, the model prediction is better compared to shorter running times in FD004. The late predictions in the case of shorter running times also contribute to a higher score from the scoring function. If we observe the predictions of the engine units closely (Figure 5), late predictions are more frequent in FD001 compared to FD004. This is also observed in Table 6, where CiRNN predictions have much higher scores on the datasets with one operating condition due to late predictions. A comparison of the results achieved by CiRNN and results from state-of-the-art deep learning models applied to the same dataset is presented in Table 7 (FD001 and FD003) and Table 8 (FD002 and FD004). The models selected for comparison are sequential models based on LSTM (Zheng et al., 2017), (Listou Ellefsen et al., 2019), (da Costa et al., 2019), (Zhao et al., 2019); additionally, a CNN (Li et al., 2018) is considered. \begin{table} \begin{tabular}{l l l l l} \hline \hline Dataset & Model & Model Acronym & Contextual Normalisation & Context Features \\ \hline FD001 & CiRNN & CiRNN\_D1 & No & Yes \\ (1 operational condition) & GRU & GRU\_D1\_CxF & No & Yes \\ & GRU & GRU\_D1 & No & No \\ \hline FD002 & CiRNN & CiRNN\_D2 & Yes & Yes \\ (6 operational conditions) & CiRNN & CiRNN\_D2\_CxF & No & Yes \\ & GRU & GRU\_D2 & No & No \\ & GRU & GRU\_D2\_CxF & No & Yes \\ & GRU & GRU\_D2\_CxN & Yes & No \\ & GRU & GRU\_D2\_CxN\_CxF & Yes & Yes \\ \hline FD003 & CiRNN & CiRNN\_D3 & No & Yes \\ (1 operational condition) & GRU & GRU\_D3\_CxF & No & Yes \\ & GRU & GRU\_D3 & No & No \\ \hline FD004 & CiRNN & CiRNN\_D4 & Yes & Yes \\ (6 operational conditions) & CiRNN & CiRNN\_D4\_CxF & No & Yes \\ & GRU & GRU\_D4 & No & No \\ & GRU & GRU\_D4\_CxF & No & Yes \\ & GRU & GRU\_D4\_CxN & Yes & No \\ & GRU & GRU\_D4\_CxN\_CxF & Yes & Yes \\ \hline \hline \end{tabular} \end{table} Table 4: Model configurations \begin{table} \begin{tabular}{l l} \hline \hline **Hyperparameter** & **Value** \\ \hline Number of hidden units & \{10, 15, 20, 25, 30\} \\ Sequence length & \{10, 15, 20\} \\ Batch size & \{64, 128, 256\} \\ Learning rate & loguniform(1e-5, 1e-3) \\ \hline Optimizer & SGD, Adam, RMSProp \\ \hline \hline \end{tabular} \end{table} Table 3: Hyperparameter values and optimizers used for tuning
It is worth mentioning here that there are several works reported in the literature that use the C-MAPSS dataset; however, most of them are based only on the FD001 dataset and are not tested on all four datasets. As can be seen from Table 8, CiRNN performed better on both datasets FD002 and FD004, where contextual information in terms of operational settings is available. Our model resulted in 32% (FD002) and 39% (FD004) relative improvement over the best reported RMSE values (LSTM+Attention). The proposed model also achieved much lower scores for both the FD002 and FD004 datasets, with an improvement of 82% and 87%, respectively, over the previous best results. Thus, CiRNN is able to reduce the number of late RUL predictions. For the FD003 dataset (Table 7), the RMSE is on par with the other models; however, the score is comparable to only one model (LSTM+FNN) and much higher than those of the remaining models. On FD001, the CiRNN model performed better than two models, LSTM+FNN and CEEMD+LSTM, in terms of RMSE. However, there is a 13% decrease in performance compared to the best model (RBM+LSTM). Further, the CiRNN score value is higher in comparison to the other models. From the experimental results, it is apparent that CiRNN's performance is significantly higher than that of the other models in the presence of contexts, and comparable to the other models when context is not explicitly available (data with one operating condition). Also, in comparison to the other models, CiRNN achieved the given performance with a relatively simple model with 1 layer and 15 hidden neurons. Feature selection and contextual normalisation also have an influence on the performance of CiRNN. Further, each of the approaches considers the operating conditions (contextual features) in a different way. For example, Zheng et al. (Zheng et al., 2017), in their approach, clustered the operating conditions and used a one-hot encoding for their representation, which is then included as a primary feature. Li et al. (Li et al., 2018) used only the sensor data for modeling. In (Listou Ellefsen et al., 2019) and (da Costa et al., 2019), all three operational settings and all sensor measurements are used. However, compared to these approaches, the contextual weighting in CiRNN performs better on all the datasets with explicit contexts. \begin{table} \begin{tabular}{l l l} \hline Model Acronym & Hyperparameter values & Optimizer \\ \hline CiRNN\_D1 & \(15,20,64,1\times 10^{-2}\) & Adam \\ GRU\_D1\_CxF & \(25,20,128,5\times 10^{-3}\) & Adam \\ GRU\_D1 & \(10,15,64,9\times 10^{-3}\) & Adam \\ \hline CiRNN\_D2 & \(20,15,64,5\times 10^{-3}\) & RMSProp \\ CiRNN\_D2\_CxF & \(15,20,128,5\times 10^{-3}\) & RMSProp \\ GRU\_D2 & \(25,15,128,1\times 10^{-2}\) & Adam \\ GRU\_D2\_CxF & \(30,15,128,5\times 10^{-3}\) & Adam \\ GRU\_D2\_CxN & \(20,10,64,8\times 10^{-3}\) & RMSProp \\ GRU\_D2\_CxN\_CxF & \(15,10,64,9\times 10^{-3}\) & RMSProp \\ \hline CiRNN\_D3 & \(30,10,64,8\times 10^{-3}\) & RMSProp \\ GRU\_D3\_CxF & \(15,10,64,2\times 10^{-3}\) & RMSProp \\ GRU\_D3 & \(20,10,64,5\times 10^{-3}\) & RMSProp \\ \hline \multicolumn{3}{l}{Hyperparameter values: hidden units, sequence length, batch size, learning rate} \\ \end{tabular} \end{table} Table 5: Model configurations and hyperparameters Finally, in Figure 6, the contextual weights \(\mathbf{A}^{s}\) are shown for CiRNN on the FD002 dataset (CiRNN_D2). Note that this model has 15 hidden units, 6 primary inputs, and 3 contextual inputs.
The primary inputs are \(x_{1}=s_{1}\) (total temperature at fan inlet), \(x_{2}=s_{2}\) (total temperature at LPC outlet), \(x_{3}=s_{8}\) (physical fan speed), \(x_{4}=s_{13}\) (corrected fan speed), \(x_{5}=s_{14}\) (corrected core speed), and \(x_{6}=s_{19}\) (demanded corrected fan speed). With 9 polynomial basis functions of degree 2, the size of \(\mathbf{A}^{s}\) is \((15\times(6\times 9))\), i.e. \((15\times 54)\). The entire weight matrix is represented through 6 smaller heatmaps of size \(15\times 9\). The first heatmap represents the weights connecting \(x_{1t}\times G(\mathbf{Z}_{t})\) and the hidden units. Similarly, the second heatmap represents the weights connecting \(x_{2t}\times G(\mathbf{Z}_{t})\) and the hidden units, and so on. It can be observed from Figure 6 that the weights associated with the sensor 1 data (total temperature at fan inlet) and the sensor 2 data (total temperature at LPC outlet) are mostly negative. However, hidden unit 10 has connections with positive weights. The connections from the sensor 8 data (physical fan speed) have more positive weights compared to the remaining five sensors. This indicates that the physical fan speed has more impact on the update gate output and, in turn, on the RUL prediction. \begin{table} \begin{tabular}{l l l l} \hline \hline Dataset & Model & RMSE & Score \((s)\) \\ & & (mean, std) & (mean, std) \\ \hline FD001 & CiRNN\_D1 & 14.53, 7.07 & 745.21, 2588.64 \\ & GRU\_D1 & 14.52, 7.64 & 658.55, 1490.477 \\ & **GRU\_D1\_CxF** & **13.57, 7.19** & **450.69, 733.00** \\ \hline FD002 & **CiRNN\_D2** & **11.97, 4.94** & **363.03, 710.70** \\ & CiRNN\_D2\_CxF & 25.74, 10.11 & 4123.71, 8035.64 \\ & GRU\_D2 & 26.57, 7.78 & 3156.17, 5158.81 \\ & GRU\_D2\_CxF & 25.83, 10.04 & 4122.89, 7878.20 \\ & GRU\_D2\_CxN & 12.49, 4.91 & 417.26, 841.84 \\ & GRU\_D2\_CxN\_CxF & 12.61, 5.09 & 470.85, 1006.85 \\ \hline FD003 & **CiRNN\_D3** & **12.87, 10.77** & **949.39, 2437.71** \\ & GRU\_D3 & 13.23, 11.26 & 1633.70, 7559.84 \\ & **GRU\_D3\_CxF** & **12.90, 3.39** & **902.63, 2339.36** \\ \hline FD004 & **CiRNN\_D4** & **12.35, 3.06** & **377.94, 353.19** \\ & CiRNN\_D4\_CxF & 23.11, 8.91 & 3730.99, 15989.09 \\ & GRU\_D4 & 22.87, 10.92 & 4604.87, 13398.62 \\ & GRU\_D4\_CxF & 21.92, 9.55 & 3539.59, 12060.03 \\ & GRU\_D4\_CxN & 16.91, 4.70 & 797.56, 1360.32 \\ & GRU\_D4\_CxN\_CxF & 17.75, 4.29 & 788.20, 678.51 \\ \hline \hline \end{tabular} \end{table} Table 6: Comparison of CiRNN performance with baseline models \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{FD001} & \multicolumn{2}{c}{FD003} \\ \cline{2-5} & RMSE & Score (s) & RMSE & Score (s) \\ \hline LSTM + FNN & 16.14 & 338 & 16.18 & 852 \\ [Zheng et al., 2017] & & & & \\ CNN + FNN & 12.61 & 274 & 12.64 & 284 \\ [Li et al., 2018] & & & & \\ RBM + LSTM & **12.56** & **231** & **12.10** & 251 \\ [Listou Ellefsen et al., 2019] & & & & \\ LSTM + Attention & 13.95 & 320 & 12.72 & **223** \\ [da Costa et al., 2019] & & & & \\ CEEMD + LSTM & 14.72 & 262 & 17.72 & 452 \\ [Zhao et al., 2019] & & & & \\ CiRNN (Proposed) & **14.53** & **451** & **12.87** & **949** \\ \hline \hline \end{tabular} \end{table} Table 7: Comparison of CiRNN performance with existing approaches reported in the literature (FD001 and FD003) ## 5 Conclusion In this paper, we proposed a novel RNN architecture, named CiRNN, that enables integrating explicit contexts available from the problem domain.
In CiRNN, the input features, referred to as primary features, are weighted by context, which is represented by contextual features. Thus, external factors influence the change in the network parameters. To demonstrate the pertinence of CiRNN to applications where contextual information is accessible through sensors, we applied CiRNN to engine health prognostics. In particular, we used CiRNN to estimate the RUL using the widely known C-MAPSS benchmark dataset. The dataset provides information on the operational environment of the flight engines through three operational-setting parameters. This provides the required contextual information for CiRNN. We performed several experiments using datasets with contexts as well as datasets where only one context is available without any variation. Figure 3: Distribution of RMSE and Score values obtained from (a) CiRNN_D2 with FD002 test set (b) CiRNN_D4 with FD004 test set Figure 4: Actual versus predicted RUL with CiRNN using FD004 test data Figure 5: Actual versus predicted RUL for selected engines (a) and (b) CiRNN with FD001 test data, (c) and (d) CiRNN with FD004 test data \begin{table} \begin{tabular}{l c c c c} \hline \hline \multirow{2}{*}{Method} & \multicolumn{2}{c}{FD002} & \multicolumn{2}{c}{FD004} \\ \cline{2-5} & RMSE & Score (s) & RMSE & Score (s) \\ \hline LSTM + FNN & 24.49 & 4,450 & 28.17 & 5,550 \\ [Zheng et al., 2017] & & & & \\ CNN + FNN & 22.36 & 10,412 & 23.31 & 12,466 \\ [Li et al., 2018] & & & & \\ RBM + LSTM & 22.73 & 3,366 & 22.66 & 2,840 \\ [Listou Ellefsen et al., 2019] & & & & \\ LSTM + Attention & 17.65 & 2,102 & 20.21 & 3,100 \\ [da Costa et al., 2019] & & & & \\ CEEMD + LSTM & 29.00 & 6,953 & 33.43 & 15,069 \\ [Zhao et al., 2019] & & & & \\ CiRNN (Proposed) & **11.97** & **363** & **12.35** & **378** \\ \hline \hline \end{tabular} \end{table} Table 8: Comparison of CiRNN performance with existing approaches reported in the literature (FD002 and FD004) The experimental results show that integrating context into the RNN (with GRU cells) significantly improves the performance of the resulting CiRNN model in comparison to the baseline models and also to the existing approaches, with a relatively simple network configuration, particularly when context is present (FD002 and FD004). For datasets that do not exhibit context (FD001 and FD003), CiRNN's performance is comparable to the baseline and existing models. As our environment is becoming data rich, CiRNN can potentially be applied to many real-life applications, such as in the domains of smart healthcare, smart manufacturing, and smart agriculture, where contextual information can be acquired and used naturally. In the future, we intend to explore such areas. ## Acknowledgements This work is supported by the CONEX-Plus programme funded by Universidad Carlos III de Madrid and the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 801538. ## Appendix A Gradient computation of the L2 loss function with respect to CiRNN parameters Let \(L_{t}\) be the loss at time step \(t\) and \(L\) be the total loss over the given time sequence, which are given as, \[L_{t}=\frac{1}{2}(\mathbf{y}_{t}-\hat{\mathbf{y}}_{t})^{2} \tag{9}\] \[L=\frac{1}{2}\sum_{t}(\mathbf{y}_{t}-\hat{\mathbf{y}}_{t})^{2} \tag{10}\] To train CiRNN, we need the values of all the parameters \((\mathbf{A}^{h},\mathbf{A}^{s},\mathbf{A}^{r})\), \((\mathbf{U}^{h},\mathbf{U}^{s},\mathbf{U}^{r})\), and \((\mathbf{V},\mathbf{b}_{y})\) that minimize the loss \(L\).
For gradient-based optimization, the derivative of \(L\) w.r.t. each of the parameters is required. Figure 6: Input-to-hidden weights \(\mathbf{A}^{s}\) Let us first consider \(\partial L/\partial\mathbf{A}^{s}\); the calculation for the other parameters is similar. Since \(L=\sum_{t}L_{t}\) and the parameters remain the same at each step, we also have \(\partial L/\partial\mathbf{A}^{s}=\sum_{t}(\partial L_{t}/\partial\mathbf{A}^{s})\). So, we can calculate each \(\partial L_{t}/\partial\mathbf{A}^{s}\) independently and finally sum them up. \[\frac{\partial L_{t}}{\partial\mathbf{A}^{s}}=\frac{\partial L_{t}}{\partial\mathbf{h}_{t}}\frac{\partial\mathbf{h}_{t}}{\partial\mathbf{A}^{s}} \tag{11}\] Let us consider each of the derivatives on the R.H.S. of equation (11). \[\frac{\partial L_{t}}{\partial\mathbf{h}_{t}}=\frac{\partial L_{t}}{\partial\hat{\mathbf{y}}_{t}}\frac{\partial\hat{\mathbf{y}}_{t}}{\partial\mathbf{h}_{t}}=(\hat{\mathbf{y}}_{t}-\mathbf{y}_{t})f^{\prime}(u_{t})\mathbf{V} \tag{12}\] where \(u_{t}=\mathbf{V}\mathbf{h}_{t}+\mathbf{b}_{y}\). In the second term on the R.H.S. of equation (11), \(\mathbf{h}_{t}\) depends on \(\mathbf{A}^{s}\) through \(\mathbf{s}_{t}\) and also depends on \(\mathbf{h}_{t-1}\), which in turn depends on \(\mathbf{A}^{s}\). So, \(\mathbf{h}_{t}\) depends on \(\mathbf{A}^{s}\) explicitly and also implicitly through \(\mathbf{h}_{t-1}\), and we can write \(\partial\mathbf{h}_{t}/\partial\mathbf{A}^{s}\) as \[\frac{\partial\mathbf{h}_{t}}{\partial\mathbf{A}^{s}}=\frac{\partial\mathbf{h}_{t}}{\partial\mathbf{A}^{s}}^{*}+\frac{\partial\mathbf{h}_{t}}{\partial\mathbf{h}_{t-1}}\frac{\partial\mathbf{h}_{t-1}}{\partial\mathbf{A}^{s}} \tag{13}\] The explicit derivative, where \(\mathbf{h}_{t-1}\) is considered constant, is denoted by a '\(*\)' in the above equation. Again, the last term in equation (13) can be expanded and written in terms of implicit and explicit derivatives as given below: \[\frac{\partial\mathbf{h}_{t}}{\partial\mathbf{A}^{s}}=\frac{\partial\mathbf{h}_{t}}{\partial\mathbf{A}^{s}}^{*}+\frac{\partial\mathbf{h}_{t}}{\partial\mathbf{h}_{t-1}}\frac{\partial\mathbf{h}_{t-1}}{\partial\mathbf{A}^{s}}^{*}+\frac{\partial\mathbf{h}_{t}}{\partial\mathbf{h}_{t-1}}\frac{\partial\mathbf{h}_{t-1}}{\partial\mathbf{h}_{t-2}}\frac{\partial\mathbf{h}_{t-2}}{\partial\mathbf{A}^{s}}\] This process continues until \(\partial\mathbf{h}_{1}/\partial\mathbf{A}^{s}\) is reached. \(\mathbf{h}_{1}\) is implicitly dependent on \(\mathbf{h}_{0}\); however, \(\mathbf{h}_{0}\) is constant and initialized to a vector of zeros. Therefore, equation (13) can be written as \[\frac{\partial\mathbf{h}_{t}}{\partial\mathbf{A}^{s}}=\sum_{k=1}^{t}\frac{\partial\mathbf{h}_{t}}{\partial\mathbf{h}_{k}}\frac{\partial\mathbf{h}_{k}}{\partial\mathbf{A}^{s}}^{*} \tag{14}\] where \(\frac{\partial\mathbf{h}_{t}}{\partial\mathbf{h}_{k}}\) is a chain rule in itself, for example, \(\frac{\partial\mathbf{h}_{3}}{\partial\mathbf{h}_{1}}=\frac{\partial\mathbf{h}_{3}}{\partial\mathbf{h}_{2}}\ \frac{\partial\mathbf{h}_{2}}{\partial\mathbf{h}_{1}}\).
Further, equation (14) can be written as \[\frac{\partial\mathbf{h}_{t}}{\partial\mathbf{A}^{s}}=\sum_{k=1}^{t}\left(\left(\prod_{j=k}^{t-1}\frac{\partial\mathbf{h}_{j+1}}{\partial\mathbf{h}_{j}}\right)\frac{\partial\mathbf{h}_{k}}{\partial\mathbf{A}^{s}}^{*}\right) \tag{15}\] Using equations (11), (12), and (15), \(\partial L_{t}/\partial\mathbf{A}^{s}\) can be given as \[\frac{\partial L_{t}}{\partial\mathbf{A}^{s}}=(\hat{\mathbf{y}}_{t}-\mathbf{y}_{t})f^{\prime}(u_{t})\mathbf{V}\sum_{k=1}^{t}\left(\left(\prod_{j=k}^{t-1}\frac{\partial\mathbf{h}_{j+1}}{\partial\mathbf{h}_{j}}\right)\frac{\partial\mathbf{h}_{k}}{\partial\mathbf{A}^{s}}^{*}\right) \tag{16}\] The terms within the summation are derived as follows: \[\frac{\partial\mathbf{h}_{t}}{\partial\mathbf{A}^{s}}^{*}=\left((\mathbf{h}_{t-1}-\tilde{\mathbf{h}}_{t})\odot\mathbf{s}_{t}\odot(1-\mathbf{s}_{t})\right)(\mathbf{x}_{t}\otimes G(\mathbf{Z}_{t}))^{T} \tag{17}\] \[\frac{\partial\mathbf{h}_{t}}{\partial\mathbf{h}_{t-1}} =\frac{\partial\mathbf{h}_{t}}{\partial\tilde{\mathbf{h}}_{t}}\frac{\partial\tilde{\mathbf{h}}_{t}}{\partial\mathbf{h}_{t-1}}+\frac{\partial\mathbf{h}_{t}}{\partial\mathbf{s}_{t}}\frac{\partial\mathbf{s}_{t}}{\partial\mathbf{h}_{t-1}}+\frac{\partial\mathbf{h}_{t}}{\partial\mathbf{h}_{t-1}}^{*} \tag{18}\] \[=\frac{\partial\mathbf{h}_{t}}{\partial\tilde{\mathbf{h}}_{t}}\left(\frac{\partial\tilde{\mathbf{h}}_{t}}{\partial\mathbf{r}_{t}}\frac{\partial\mathbf{r}_{t}}{\partial\mathbf{h}_{t-1}}+\frac{\partial\tilde{\mathbf{h}}_{t}}{\partial\mathbf{h}_{t-1}}^{*}\right)+\frac{\partial\mathbf{h}_{t}}{\partial\mathbf{s}_{t}}\frac{\partial\mathbf{s}_{t}}{\partial\mathbf{h}_{t-1}}+\frac{\partial\mathbf{h}_{t}}{\partial\mathbf{h}_{t-1}}^{*}\] \[\frac{\partial\mathbf{h}_{t}}{\partial\mathbf{h}_{t-1}} =(1-\mathbf{s}_{t})T_{1}+T_{2}+\mathbf{s}_{t} \tag{19}\] \[T_{1} =\mathbf{U}^{rT}\left((\mathbf{U}^{hT}(1-\tilde{\mathbf{h}}_{t}\odot\tilde{\mathbf{h}}_{t}))\odot\mathbf{h}_{t-1}\odot\mathbf{r}_{t}\odot(1-\mathbf{r}_{t})\right)+(\mathbf{U}^{hT}(1-\tilde{\mathbf{h}}_{t}\odot\tilde{\mathbf{h}}_{t}))\odot\mathbf{r}_{t}\] \[T_{2} =\mathbf{U}^{sT}\left((\mathbf{h}_{t-1}-\tilde{\mathbf{h}}_{t})\odot\mathbf{s}_{t}\odot(1-\mathbf{s}_{t})\right)\] So far, we have determined all the components required for \(\partial L_{t}/\partial\mathbf{A}^{s}\). The gradients of \(L_{t}\) with respect to the other parameters are derived similarly.
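To make the contextual weighting used above concrete, the feature \(\mathbf{x}_{t}\otimes G(\mathbf{Z}_{t})\) appearing in equation (17) can be sketched in a few lines of Python/NumPy. The exact set and ordering of the 9 degree-2 basis functions, as well as the randomly initialized values, are illustrative assumptions:

```python
import numpy as np
from itertools import combinations_with_replacement

def poly_basis(z: np.ndarray, degree: int = 2) -> np.ndarray:
    """All non-constant monomials of z up to `degree`.

    For 3 contextual inputs and degree 2 this yields 9 basis functions:
    z1, z2, z3, z1^2, z1*z2, z1*z3, z2^2, z2*z3, z3^2.
    """
    feats = [np.prod([z[i] for i in idx])
             for d in range(1, degree + 1)
             for idx in combinations_with_replacement(range(len(z)), d)]
    return np.array(feats)

def contextual_input(x_t: np.ndarray, z_t: np.ndarray) -> np.ndarray:
    """Kronecker feature x_t (x) G(Z_t) that A^s acts on."""
    return np.kron(x_t, poly_basis(z_t))

# Shapes for the FD002 model discussed above:
# 6 primary inputs, 3 contexts, 15 hidden units.
rng = np.random.default_rng(0)
x_t, z_t = rng.normal(size=6), rng.normal(size=3)
A_s = rng.normal(size=(15, 54))                     # hypothetical weights
pre_activation = A_s @ contextual_input(x_t, z_t)   # shape (15,)
```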
2308.08778
Environment Diversification with Multi-head Neural Network for Invariant Learning
Neural networks are often trained with empirical risk minimization; however, it has been shown that a shift between training and testing distributions can cause unpredictable performance degradation. On this issue, a research direction, invariant learning, has been proposed to extract invariant features insensitive to the distributional changes. This work proposes EDNIL, an invariant learning framework containing a multi-head neural network to absorb data biases. We show that this framework does not require prior knowledge about environments or strong assumptions about the pre-trained model. We also reveal that the proposed algorithm has theoretical connections to recent studies discussing properties of variant and invariant features. Finally, we demonstrate that models trained with EDNIL are empirically more robust against distributional shifts.
Bo-Wei Huang, Keng-Te Liao, Chang-Sheng Kao, Shou-De Lin
2023-08-17T04:33:38Z
http://arxiv.org/abs/2308.08778v1
# Environment Diversification with Multi-head Neural Network for Invariant Learning ###### Abstract Neural networks are often trained with empirical risk minimization; however, it has been shown that a shift between training and testing distributions can cause unpredictable performance degradation. On this issue, a research direction, invariant learning, has been proposed to extract invariant features insensitive to the distributional changes. This work proposes EDNIL, an invariant learning framework containing a multi-head neural network to absorb data biases. We show that this framework does not require prior knowledge about environments or strong assumptions about the pre-trained model. We also reveal that the proposed algorithm has theoretical connections to recent studies discussing properties of variant and invariant features. Finally, we demonstrate that models trained with EDNIL are empirically more robust against distributional shifts. ## 1 Introduction Ensuring model performance on unseen data is a common yet challenging task in machine learning. A widely adopted solution would be Empirical Risk Minimization (ERM), where training and testing data are assumed to be independent and identically distributed (i.i.d.). However, data in real-world applications can come with undesired biases, causing a shift between training and testing distributions. It has been known that the distributional shifts can severely harm ERM model performance and even cause the trained model to be worse than random predictions [10]. In this work, we focus on _Invariant Learning_, which aims at learning invariant features expected to be robust against distributional changes. Invariant Risk Minimization (IRM) [1] has been proposed as a popular solution for invariant learning. Specifically, IRM is based on an assumption that training data are collected from multiple sources or _environments_ having distinct data distributions. The learning objective is then designed as a standard ERM loss function with a penalty term constraining the trained model (e.g. classifier) to be optimal in all the environments. IRM has been shown to be effective; however, we note that IRM and many invariant learning methods rely on strict assumptions which limit their practical impact. The limitations are summarized as follows. **Prior Knowledge of Environments** IRM assumes training data are collected from environments, and the environment labels (i.e. which data instance belongs to which environment) are given. However, the environment labels are often unavailable. Moreover, the definition of environments can be implicit, making human labeling more difficult and expensive. To find environments without supervision, Creager et al. [6] propose Environment Inference for Invariant Learning (EIIL), a min-max optimization framework training the model of interest and inferring the environment labels. Another work, Heterogeneous Risk Minimization (HRM) [22], or its extension called Kernelized Heterogeneous Risk Minimization (KerHRM) [23], parameterizes the environments and proposes clustering-based approaches to estimate the parameters. A recent method, ZIN [20], also learns to label data. However, it relies on carefully chosen features satisfying a series of theoretical constraints, and thus human effort is still required. **Delicate Initialization** EIIL is able to infer environments but requires an ERM model for initialization. Crucially, the ERM model should heavily depend on spurious correlations. Creager et al.
[6] reveal that, for example, slightly underfitted ERM models may encode more spurious relationships in some cases. However, as the distributional shifts are assumed to be unknown in the training stage, an appropriate initialization might be difficult to guarantee. **Efficiency Issue** HRM and KerHRM, though they do not possess the above two limitations, suffer from an efficiency issue. Specifically, HRM is assumed to be trained on raw features. As for KerHRM, although it extends HRM to avoid the issue of representation learning by adopting kernel methods, the computational costs of the proposed method can be very high if the data or model size is large. This work proposes a novel framework, **E**nvironment **D**iversification with multi-head neural **N**etwork for **I**nvariant **L**earning (EDNIL). EDNIL is able to infer environment labels without supervision and achieve joint optimization of the environment inference and invariant learning models. The underlying multi-head neural network explicitly diversifies the inferred environments, which is consistent with recent studies [5; 22; 23] revealing the benefits of diverse environments. Notably, the proposed neural network is functionally similar to a multi-class classifier and can be optimized efficiently. The advantages of EDNIL are summarized as: * We implement this framework using various pre-trained models such as Resnet [14] and DistilBert [30], and evaluate it with diverse data types and varied biases. The results show that EDNIL can consistently outperform the existing state-of-the-art methods. * The learning algorithm of EDNIL has theoretical connections to recent studies [5; 20; 22; 23] discussing conditions of ideal environments. * EDNIL mitigates the three limitations discussed above. The comparisons between EDNIL and other methods are shown in Table 1. \begin{table} \begin{tabular}{c c c c} & Unsupervised & Insensitive Initialization & Efficiency \\ \hline IRM [1] & ✗ & ✓ & ✓ \\ ZIN [20] & ✗ & ✓ & ✓ \\ EIIL [6] & ✓ & ✗ & ✓ \\ HRM [22] & ✓ & ✓ & ✗ \\ KerHRM [23] & ✓ & ✓ & ✗ \\ **EDNIL (Ours)** & ✓ & ✓ & ✓ \\ \end{tabular} \end{table} Table 1: A summary of the advantages of invariant learning methods. ## 2 Preliminaries and Related Work The goal of EDNIL is to tackle out-of-distribution problems with invariant learning in the absence of manual environment labels. In Section 2.1, background knowledge about out-of-distribution generalization and invariant learning is introduced. In Section 2.2, we discuss recent studies investigating ideal environments. In Section 2.3, we introduce the existing unsupervised methods for inferring environments. ### Out-of-Distribution Generalization and Invariant Learning Following [1; 22], we consider a dataset \(D=\{D^{e}\}_{e\in\operatorname{supp}(\mathcal{E}_{\text{tr}})}\) with different sources \(D^{e}=\{(x_{i}^{e},y_{i}^{e})\}_{i=1}^{n_{e}}\) collected under multiple training environments \(e\in\operatorname{supp}(\mathcal{E}_{\text{tr}})\). The random variable \(\mathcal{E}_{\text{tr}}\) indicates the training environments. For simplicity, \(X^{e}\), \(Y^{e}\) and \(P^{e}\) denote the features, target and distribution in environment \(e\), respectively. With \(\mathcal{E}_{\text{all}}\) containing all possible environments such that \(\operatorname{supp}(\mathcal{E}_{\text{all}})\supset\operatorname{supp}(\mathcal{E}_{\text{tr}})\), the goal of out-of-distribution generalization is to learn a predictor \(f(\cdot):\mathcal{X}\rightarrow\mathcal{Y}\) as in Equation 1, where
\(R^{e}(f)=\mathbb{E}_{X^{e},Y^{e}}[l(f(X^{e}),Y^{e})]=\mathbb{E}^{e}[l(f(X^{e}),Y^{e})]\) measures the risk under environment \(e\) with any loss function \(l(\cdot,\cdot)\). In general, for \(e\in\operatorname{supp}(\mathcal{E}_{\text{tr}})\) and \(e^{\prime}\in\operatorname{supp}(\mathcal{E}_{\text{all}})\setminus\operatorname{supp}(\mathcal{E}_{\text{tr}})\), \(P^{e^{\prime}}(X,Y)\) is rather different from \(P^{e}(X,Y)\). \[f=\arg\min_{f}\max_{e\in\operatorname{supp}(\mathcal{E}_{\text{all}})}R^{e}(f) \tag{1}\] Recently, several studies [1; 4; 18; 26; 28] have attempted to tackle the generalization problems by discovering invariant relationships across all environments. A commonly proposed assumption is the existence of _invariant features_ \(X_{\text{c}}\) and _variant features_ \(X_{\text{v}}\). Specifically, the raw features \(X\) are assumed to be composed of \(X_{\text{c}}\) and \(X_{\text{v}}\), or \(X=h(X_{\text{c}},X_{\text{v}})\), where \(h(\cdot)\) is a transformation function. The invariant features \(X_{\text{c}}\) are assumed to be equally informative for predicting the targets \(Y\) across environments \(e\in\operatorname{supp}(\mathcal{E}_{\text{all}})\). On the contrary, the distribution \(P^{e}(Y|X_{\text{v}})\) can vary arbitrarily across \(e\). As a result, predictors depending on \(X_{\text{v}}\) can have unpredictable performance in unseen environments. In particular, the correlations between \(X_{\text{v}}\) and \(Y\) are known as spurious and unreliable. To extract \(X_{\text{c}}\), IRM [1] assumes there is an encoder \(\Phi\) for obtaining representations \(\Phi(X)\approx X_{\text{c}}\). The encoder is trained with a regularization term enforcing simultaneous optimality of the predictor \(w\circ\Phi\) in the training environments, where the dummy classifier \(w=1.0\) is a fixed multiplier for the encoder outputs: \[\sum_{e\in\operatorname{supp}(\mathcal{E}_{\text{tr}})}R^{e}(\Phi)+\lambda||\nabla_{w|w=1.0}R^{e}(w\circ\Phi)||^{2} \tag{2}\] As \(w\) is a dummy layer, the encoder \(\Phi\) is also regarded as a predictor. ### Ideal Environments As \(\mathcal{E}_{\text{tr}}\) is unavailable or sub-optimal in most applications, learning to find appropriate environments (denoted by \(\mathcal{E}_{\text{learn}}\)) is attractive. However, the challenge is the lack of knowledge of valid environments. Recently, Lin et al. [20] have proposed Equations 3 and 4 as the conditions of ideal environments, where \(H\) is the conditional entropy. To satisfy the conditions, Lin et al. [20] propose leveraging auxiliary information for model training. However, the method still requires extra human effort to collect and verify the additional information. \[H(Y|X_{\text{c}})=H(Y|X_{\text{c}},\mathcal{E}_{\text{learn}}) \tag{3}\] \[H(Y|X_{\text{v}})-H(Y|X_{\text{v}},\mathcal{E}_{\text{learn}})>0 \tag{4}\] Particularly, Equation 4 is supported by empirical studies [5] in which the diversity of environments is recognized as the key to obtaining effective IRM models. To be more precise, a large discrepancy of the spurious correlations, or \(P^{e}(Y|X_{\text{v}})\), between environments is favored. As the environments give a clear indication of distributional shifts, IRM can easily identify and eliminate variant features. Beyond IRM, HRM [22] and KerHRM [23] can also be viewed as specifically optimizing diversity via clustering-based methods. ### Unsupervised Environment Inference Here we provide a detailed introduction of the existing environment inference methods that do not require extra human effort.
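Before turning to these methods, we note that the IRM objective of Equation 2, which several of the approaches below (and EDNIL's \(M_{\text{IL}}\)) build upon, admits a compact implementation. A minimal PyTorch sketch with an IRMv1-style penalty and the scalar dummy classifier \(w\); binary classification is assumed for concreteness and all names are illustrative:

```python
import torch
import torch.nn.functional as F

def irm_penalty(logits: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Squared gradient norm of the risk w.r.t. the dummy classifier w = 1.0,
    i.e. the regularizer in Equation 2."""
    w = torch.tensor(1.0, requires_grad=True)
    risk = F.binary_cross_entropy_with_logits(logits * w, y)
    (grad,) = torch.autograd.grad(risk, w, create_graph=True)
    return grad.pow(2)

def irm_loss(per_env_logits, per_env_labels, lam: float = 1.0):
    """Sum over training environments of risk + lambda * penalty.

    per_env_labels are float targets in {0, 1}, one tensor per environment.
    """
    total = torch.tensor(0.0)
    for logits, y in zip(per_env_logits, per_env_labels):
        risk = F.binary_cross_entropy_with_logits(logits, y)
        total = total + risk + lam * irm_penalty(logits, y)
    return total
```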
The general idea is to integrate environment inference with invariant learning algorithms that require provided environments. EIIL [6] proposes formulating invariant learning as a min-max optimization problem. Specifically, EIIL is composed of two objectives, _Environment Inference_ (\(EI\)) and _Invariant Learning_ (\(IL\)), where \(EI\) is optimized by maximizing the training penalty via labeling the data, and \(IL\) is optimized by minimizing the training loss given the data labeled by \(EI\). The two-stage framework bypasses the difficulty of defining environments; however, the training result heavily relies on the initialization of the \(EI\) optimization. Specifically, the initialization demands a strongly biased ERM reference model; otherwise, EIIL can have a significantly weaker performance. Another method, HRM [22], proposes a clustering-based approach for learning plausible environments. HRM assumes that the spurious correlations in each environment can be modeled by a parameterized function and that the dataset is generated by a mixture of these functions. The parameters are then estimated by employing the EM algorithm. Additionally, HRM is equipped with a joint learning framework which alternately learns invariant predictors and improves the quality of the clustering results. However, a known issue of HRM is the assumption that the data are represented by raw features. Data such as images and texts, which require non-linear neural networks to obtain representations, are beyond its capability. To extend HRM to a broader class of applications and improve the model performance, Liu et al. [23] propose KerHRM. The main idea is to adopt the Neural Tangent Kernel [15] method, which transforms non-linear neural network training into a linear regression problem on the proposed Neural Tangent Features space. As a result, KerHRM elegantly resolves the shortcomings of HRM and is shown to be more effective. However, the proposed method and its implementations bring additional computational costs depending on the data and model capacity. For applications favoring large datasets and pre-trained models, such as Resnet [14] and BERT [7], KerHRM may not be an affordable option at the present stage. ## 3 Methodology In this section, we propose a general framework to learn an invariant model without manual environment labels. As shown in Figure 1(a), our proposed method consists of two models, \(M_{\text{EI}}\) and \(M_{\text{IL}}\). Given the pooled data \((X,Y)\), \(M_{\text{EI}}\) infers environments \(\mathcal{E}_{\text{learn}}\) satisfying Conditions 3 and 4, and \(M_{\text{IL}}\) is an invariant model trained with the inferred environments. Our framework is jointly optimized with alternating updates. The learned \(M_{\text{IL}}\) can provide information about invariant features to \(M_{\text{EI}}\), so that Conditions 3 and 4 can be fulfilled simultaneously. Note that \(M_{\text{EI}}\) only serves at train time to provide environments for invariant learning. At test time, only \(M_{\text{IL}}\) is needed to perform invariant predictions. ### The Environment Inference Model The target of environment inference is to partition data into environments \(\mathcal{E}_{\text{learn}}\) satisfying Conditions 3 and 4. In this regard, we propose a graphical model, whose structure is a sufficient condition for Conditions 3 and 4 (the proof is in Appendix A), as the foundation of our inference model and learning objectives. The graph is shown in Figure 2, where the data generation process follows the example proposed in [1].
The inference model, \(M_{\text{EI}}\), is a neural network learning to realize the underlying graphical model. Figure 1: Concepts of EDNIL. (a) The joint learning framework of EDNIL. The environment inference model \(M_{\text{EI}}\), containing a variant encoder \(\Psi\) and environmental functions \(f^{e}\), is trained with \(L_{\text{EI}}=L_{\text{ED}}+\beta L_{\text{LI}}+\gamma L_{\text{IP}}\). The invariant learning model \(M_{\text{IL}}\), containing an invariant predictor \(\Phi\), is trained with \(L_{\text{IL}}\). (b) The multi-head network structure of the environment inference model \(M_{\text{EI}}\). Figure 2: The graphical model. Following the idea of parameterizing environments from HRM [22] and KerHRM [23], we assume the distinct mapping relation between \(X\) and \(Y\) in environment \(e\) can be modeled by a function \(f^{e}(\Psi(X))\), where \(\Psi(X)\) are learned representations expected to encode the variant features \(X_{\text{v}}\) and \(f^{e}\) is an environmental function responsible for predicting \(Y\). Instead of employing clusters, we propose building a multi-head neural network as shown in Figure 1(b); particularly, a single-head network in \(M_{\text{EI}}\) with shared parameters corresponds to a cluster center in HRM or KerHRM. The training procedure of \(M_{\text{EI}}\) can be divided into an _inference stage_ and a _learning stage_. #### 3.1.1 Inference Stage of \(\boldsymbol{M_{\text{EI}}}\) The goal is to infer an environment label for each training data point. As in the graphical model, \(\mathcal{E}_{\text{learn}}\) is associated with the variant relationships. Inspired by the multi-class classification problem, we propose Equation 5, where the probability \(P(e\mid X,Y)\) is estimated via a softmax of the negative loss \(l(f^{e}(\Psi(X)),Y)\) divided by a constant temperature \(\tau\). The function \(l\) is expected to be the commonly used cross entropy or mean squared error that measures the discrepancy between \(Y\) and \(f^{e}(\Psi(X))\) for each environment \(e\). Intuitively, each data point prefers the environment whose model yields the better prediction. \[P(\mathcal{E}_{\text{learn}}=e\mid X,Y)=\frac{\exp{(-l(f^{e}(\Psi(X)),Y)/\tau)}}{\sum\limits_{e^{\prime}\in\operatorname{supp}(\mathcal{E}_{\text{learn}})}\exp{(-l(f^{e^{\prime}}(\Psi(X)),Y)/\tau)}} \tag{5}\] #### 3.1.2 Learning Stage of \(\boldsymbol{M_{\text{EI}}}\) The goal is to update the neural network to improve the quality of the inference. Based on the structure of the graphical model, three losses are designed for minimization, i.e. the _Environment Diversification Loss_ (\(L_{\text{ED}}\)), the _Label Independence Loss_ (\(L_{\text{LI}}\)) and the _Invariance Preserving Loss_ (\(L_{\text{IP}}\)). In particular, \(L_{\text{ED}}\) and \(L_{\text{IP}}\) correspond to the concepts of Conditions 4 and 3, respectively. **Environment Diversification Loss (\(L_{\text{ED}}\))** We consider maximizing \(H(Y|X_{\text{v}})-H(Y|X_{\text{v}},\mathcal{E}_{\text{learn}})\) to satisfy Condition 4 and capture more diverse variant relationships. Given the estimated \(P(\mathcal{E}_{\text{learn}}|X,Y)\), \(L_{\text{ED}}\) selects the most probable environment and its corresponding network for optimization: \[L_{\text{ED}}=-\sum_{i}w_{i}\max_{e}[\log P(e\mid x_{i},y_{i})] \tag{6}\] For each data point \((x_{i},y_{i})\), although only one environment \(e_{i}=\operatorname*{arg\,max}_{e}P(e\mid x_{i},y_{i})\) is selected for the minimization, the softmax simultaneously propagates gradients to maximize \(l(f^{e^{\prime}}(\Psi(x_{i})),y_{i})\) for \(e^{\prime}\neq e_{i}\).
The network learns to maximize the dependency between \(\mathcal{E}_{\text{learn}}\) and \(Y\) given the variant representations. In terms of spurious correlations, the distinction between environments is expected to become clearer accordingly. In practice, we utilize a scaling weight \(w_{i}\) inversely proportional to the size of \(e_{i}\). The importance of smaller environments will thus be enhanced within the summation. **Label Independence Loss (\(L_{\text{LI}}\))** With d-separation [25], \(\mathcal{E}_{\text{learn}}\) is independent of \(Y\) in the graphical model. Hence, \(L_{\text{LI}}\) constrains their dependency measured by the mutual information \(I(Y;\mathcal{E}_{\text{learn}})\). It can be verified that minimizing \(I(Y;\mathcal{E}_{\text{learn}})\) is equivalent to minimizing Equation 7, given that \(P(Y)\) is known. Empirically, \(L_{\text{LI}}\) prevents a trivial solution in which the environments are determined purely by the target labels regardless of the input features, which is undesirable for invariant learning. \[L_{\text{LI}}=\mathbb{E}_{e\sim P(\mathcal{E}_{\text{learn}})}[\sum_{y}P(y|e)\log P(y|e)] \tag{7}\] **Invariance Preserving Loss (\(L_{\text{IP}}\))** For Condition 3, as \(M_{\text{IL}}\) learns some invariant relationships after several training steps (Section 3.2), \(L_{\text{IP}}\) can be considered to exclude invariant features from the diversification. Specifically, designed as the contrary of \(L_{\text{ED}}\), \(L_{\text{IP}}\) limits the variance of the expected loss of the invariant predictor \(\Phi\) (in \(M_{\text{IL}}\)) across environments (Equation 8). A similar idea can be found in [17]. However, instead of training an invariant model given known environments, we freeze the invariant predictor and regularize the adjustment of \(\mathcal{E}_{\text{learn}}\) (i.e. the updates of \(M_{\text{EI}}\)) here. \[L_{\text{IP}}=\text{Var}_{e\sim P(\mathcal{E}_{\text{learn}})}[\mathbb{E}^{e}[l(\Phi(X^{e}),Y^{e})]] \tag{8}\] In summary, the total training loss of \(M_{\text{EI}}\) is the _Environment Inference Loss_ (\(L_{\text{EI}}\)). The regularization strengths of \(L_{\text{LI}}\) and \(L_{\text{IP}}\) can be controlled by the hyper-parameters \(\beta\) and \(\gamma\), respectively: \[L_{\text{EI}}=L_{\text{ED}}+\beta L_{\text{LI}}+\gamma L_{\text{IP}} \tag{9}\] In addition, before minimizing \(L_{\text{EI}}\), we pre-train \(\Psi\) and one arbitrary \(f^{e}\) with ERM, which empirically facilitates better feature extraction. Unlike EIIL [6], which takes as its reference an ERM model that heavily relies on variant features, EDNIL performs more consistently under various choices of the ERM model. Namely, the initialization of EDNIL can be more arbitrary than that of EIIL. We verify this argument in Section 4. ### The Invariant Learning Model To identify invariance across environments, IRM [1] is selected as our base algorithm for optimizing the invariant predictor \(\Phi\) in our model \(M_{\text{IL}}\). As for the environment partitions required during training, we assign the environment label \(e\in\operatorname{supp}(\mathcal{E}_{\text{learn}})\) with the largest \(P(e|x_{i},y_{i})\), inferred by \(M_{\text{EI}}\) (Section 3.1.1), to each data point \((x_{i},y_{i})\). However, it is inevitable that there is some noise in the automatically inferred environments, especially in the beginning of the joint optimization.
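To illustrate the inference stage, Equation 5 and the resulting hard assignments can be sketched as follows (a minimal PyTorch sketch for classification; the heads \(f^{e}\) share the encoder \(\Psi\), and all names are illustrative):

```python
import torch
import torch.nn.functional as F

def infer_environments(heads, psi_x, y, tau: float = 1.0):
    """Inference stage (Equation 5): per-sample environment posterior.

    heads : list of environmental heads f^e operating on Psi(X)
    psi_x : encoded batch Psi(X), shape (N, d)
    y     : integer class labels, shape (N,)
    """
    # Per-sample loss of every environmental head, shape (N, E).
    losses = torch.stack(
        [F.cross_entropy(f_e(psi_x), y, reduction="none") for f_e in heads],
        dim=1,
    )
    post = F.softmax(-losses / tau, dim=1)  # P(e | x, y)
    env = post.argmax(dim=1)                # hard assignment per sample
    return post, env
```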
To reduce the impact of immature environments on invariant learning, we calculate a confidence score \(c_{e}=\mathbb{E}^{e}[P(e|X^{e},Y^{e})]\) for each environment \(e\in\operatorname{supp}(\mathcal{E}_{\text{learn}})\). Our training objective is modified to minimize the _Invariant Learning Loss_ (\(L_{\text{IL}}\)), which considers the weighted average of the environmental losses: \[L_{\text{IL}}=\sum_{e\in\operatorname{supp}(\mathcal{E}_{\text{learn}})}w_{e}\cdot[R^{e}(\Phi)+\lambda||\nabla_{w|w=1.0}R^{e}(w\circ\Phi)||^{2}] \tag{10}\] \[w_{e}=\frac{c_{e}}{\sum_{e^{\prime}\in\operatorname{supp}(\mathcal{E}_{\text{learn}})}c_{e^{\prime}}} \tag{11}\] ## 4 Experiments We empirically validate the proposed method on the biased datasets Adult-Confounded, CMNIST, Waterbirds and SNLI. The definition of the spurious correlations mainly follows the protocols proposed by [1; 6; 9; 29]. In Section 4.1, Adult-Confounded and CMNIST are tested with a Multilayer Perceptron (MLP). In Section 4.2, two more complex datasets, Waterbirds and SNLI, are used to evaluate the integration of transfer learning. Deep pre-trained models will be fine-tuned to extract representations. The following four methods are selected as our competitors: Empirical Risk Minimization (ERM), Environment Inference for Invariant Learning (EIIL [6]), Kernelized Heterogeneous Risk Minimization (KerHRM [23]) and Invariant Risk Minimization (IRM [1], Equation 2). EIIL and KerHRM are invariant learning methods with unsupervised environment inference, which share the same settings as EDNIL. HRM [22] is replaced by KerHRM for non-linearity. For IRM, which requires environment partitions, we re-label \(\mathcal{E}_{\text{oracle}}\) on each given biased training set, which diversifies the spurious relationships to elicit the upper-bound performance of IRM. For hyper-parameter tuning, we utilize an in-distribution validation set composed of 10% of the training data. For each dataset, several testing environments with different distributions are listed to evaluate the robustness of each method, and we mainly take the worst-case performance for assessment. As all tasks in this section are classification problems, accuracy is adopted as the evaluation metric. Additional experimental details are provided in Appendix B. We also discuss additional experiments in Appendix C, including solutions to a regression problem and the model stability given different spurious strengths at train time. ### Simple Datasets with MLP This section includes two simple datasets, Adult-Confounded and CMNIST, where spurious correlations are synthetically produced with predefined strengths. For all competitors, an MLP is taken as the base model and full-batch training is implemented. Since KerHRM performs unstably over random seeds, we first average the results of 10 runs as its first score, and select the top 5 among them for the second one, which is marked with an asterisk (*) in each table. #### 4.1.1 Discussions on Adult-Confounded We take UCI Adult [16] to predict binarized income levels (above or below $50,000 per year). Following [6], individuals are re-sampled according to the sensitive features _race_ and _sex_ to simulate spurious correlations. Specifically, with binarized _race_ (Black/Non-Black) and _sex_ (Female/Male), four possible subgroups are constructed: Non-black Males (SG1), Non-black Females (SG2), Black Males (SG3), and Black Females (SG4).
Keeping the original train/test split and subgroup sizes from UCI Adult, we sample data based on the given target distributions in each sensitive subgroup, as shown in Table 2. In particular, OOD contributes the worst-case performance to validate whether the predictions rely on group information. In this task, an MLP with one hidden layer of 96 neurons is considered. For IRM, four environments comprise \(\mathcal{E}_{\text{oracle}}\), where the correlations between the variant features \((race,sex)\) and the target \(Y\) are distributed without overlapping. More details are provided in Appendix B. **Results** The results are shown in Table 3. With strong spurious correlations at train time, ERM obtains high accuracy as long as the correlations remain aligned; however, its generalization to other testing distributions is limited. Among all invariant learning methods without prior environment labels, EDNIL can perfectly identify the variant features and infer ideally diversified environments. Therefore, it achieves the most invariant performance over the different testing distributions. On the other hand, EIIL can improve consistency to some degree, but not as strongly as EDNIL. One possible reason is that the empirically trained reference model is not guaranteed to be purely variant [6]. KerHRM performs inconsistently across random seeds, which is reflected in the large standard deviation. In some cases, the performance hardly improves over iterations, as observed by Liu et al. [23]. **Ablation Study for \(L_{\text{EI}}\)** We first demonstrate the importance of \(L_{\text{LI}}\), which constrains the label dependency, by setting the coefficient \(\beta\) to zero. As discussed in Section 3, the resulting environments are determined purely by the target labels, leading to inferior performance as shown in Table 3. Next, we demonstrate the effectiveness of the joint optimization in Figure 3(a). The regularization \(L_{\text{IP}}\) promotes environment inference by considering the existing invariant relationships, so that the worst-case performance improves and remains stable over iterations. According to Table 3, if the coefficient \(\gamma\) is turned off, the feedback generated by \(M_{\text{IL}}\) will be ignored and the effect of invariant learning may become undesirable. #### 4.1.2 Discussions on CMNIST We report our evaluation on a noisy digit recognition dataset, CMNIST. Following [1], we first assign \(Y=1\) to samples whose digits are smaller than 5 and \(Y=0\) to the others. Next, we apply label noise by randomly flipping \(Y\) with probability 0.2. Finally, the digits are colored with color labels \(C\), which are generated by randomly flipping \(Y\) with probability \(e\). For training, two equal-sized environments with \(e=0.1\) and \(e=0.2\) are merged, which is equivalent to one with \(e=0.15\) on average. For testing, three situations are considered, where \(e\) is 0.1, 0.5 or 0.9, respectively. Note that when \(e=0.1\), the spurious correlation is closely aligned with the training set. On the other hand, \(e=0.9\) defines the most challenging case, since the spurious correlation shifts most dramatically from training. For all competitors except KerHRM, we select an MLP with two hidden layers of 390 neurons, and consider the whole dataset (50,000 samples) for training. For KerHRM, which requires massive computing resources, we follow the settings recommended by [23]. Specifically, we randomly select 5,000 samples and train an MLP with one hidden layer of 1024 neurons.
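The construction described above can be summarized in a short sketch (Python/NumPy; the two-channel red/green colour encoding follows the common public IRM implementation and is an assumption here, as are the function and variable names):

```python
import numpy as np

def make_cmnist_env(images: np.ndarray, digits: np.ndarray,
                    e: float, seed: int = 0):
    """Colouring procedure described above for one environment.

    images : (N, 28, 28) grayscale MNIST digits
    digits : (N,) digit labels 0-9
    e      : probability of flipping the label to obtain the colour
    """
    rng = np.random.default_rng(seed)
    y = (digits < 5).astype(int)                       # binary target
    y = np.where(rng.random(len(y)) < 0.2, 1 - y, y)   # 20% label noise
    c = np.where(rng.random(len(y)) < e, 1 - y, y)     # colour label
    # Place each image in the red or green channel according to c.
    colored = np.zeros((len(images), 2, 28, 28), dtype=images.dtype)
    colored[np.arange(len(images)), c] = images
    return colored, y
```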
To construct an ideally diversified \(\mathcal{E}_{\text{oracle}}\) for IRM, we pack all examples with \(Y=C\) into one environment, and those with \(Y\neq C\) into the other. **Results** The results are shown in Table 4. First of all, not surprisingly, ERM still adapts poorly to distributional shifts. Among all invariant learning methods without manual environment labels, EDNIL gets closest to IRM with \(\mathcal{E}_{\text{oracle}}\), achieving consistent and robust performance on this dataset. As shown in Figure 3(c), EDNIL provides an almost ideally diversified \(\mathcal{E}_{\text{learn}}\) for invariant learning. \begin{table} \begin{tabular}{c c c c c} \hline \hline & Train & \begin{tabular}{c} Test \\ (IID) \\ \end{tabular} & \begin{tabular}{c} Test \\ (IND) \\ \end{tabular} & \begin{tabular}{c} Test \\ (OOD) \\ \end{tabular} \\ \hline SG1 & 0.9 & 0.9 & 0.5 & 0.1 \\ SG2 & 0.1 & 0.1 & 0.5 & 0.9 \\ SG3 & 0.9 & 0.9 & 0.5 & 0.1 \\ SG4 & 0.1 & 0.1 & 0.5 & 0.9 \\ \hline \hline \end{tabular} \end{table} Table 2: \(P(Y=1|SG)\) for Adult-Confounded. IID shares spurious correlations with the train set. IND has no bias on _race_ and _sex_. OOD defines the worst-case performance. \begin{table} \begin{tabular}{c c c c} \hline \hline & \multicolumn{1}{c}{IID} & IND & **OOD** \\ \hline ERM & \(\mathbf{92.4\pm 0.1}\) & \(\mathbf{66.8\pm 0.3}\) & \(\mathbf{40.7\pm 0.5}\) \\ \hline EIIL & \(\mathbf{76.2\pm 0.4}\) & \(\mathbf{73.5\pm 0.5}\) & \(\mathbf{70.2\pm 1.7}\) \\ KerHRM & \(\mathbf{82.4\pm 3.9}\) & \(\mathbf{75.1\pm 4.0}\) & \(\mathbf{67.9\pm 9.3}\) \\ KerHRM\({}^{*}\) & \(\mathbf{81.2\pm 1.8}\) & \(\mathbf{78.5\pm 0.3}\) & \(\mathbf{75.6\pm 1.9}\) \\ EDNIL & \(\mathbf{80.7\pm 0.4}\) & \(\mathbf{79.1\pm 0.4}\) & \(\mathbf{77.5\pm 0.3}\) \\ EDNIL\({}_{\beta=0}\) & \(\mathbf{91.8\pm 0.0}\) & \(\mathbf{66.7\pm 0.1}\) & \(\mathbf{41.3\pm 0.7}\) \\ EDNIL\({}_{\gamma=0}\) & \(\mathbf{78.2\pm 2.4}\) & \(\mathbf{75.4\pm 1.6}\) & \(\mathbf{72.5\pm 3.3}\) \\ \hline IRM & \(\mathbf{79.9\pm 0.4}\) & \(\mathbf{79.3\pm 0.3}\) & \(\mathbf{78.8\pm 0.4}\) \\ \hline \hline \end{tabular} \end{table} Table 3: Testing accuracy (%) on Adult-Confounded. The three subsets are defined in Table 2. \(\beta=0\) and \(\gamma=0\) indicate the removal of \(L_{\text{LI}}\) and \(L_{\text{IP}}\) when training \(M_{\text{EI}}\). **Number of Environments** While the predefined number of environments is a hyperparameter that requires tuning, as illustrated in Figure 3(b), EDNIL demonstrates a certain level of tolerance towards this parameter. Specifically, when the environment number is larger than the oracle (i.e. 2), some environment classifiers become redundant. Each of them provides a moderate constant loss, taking up a fixed and ignorable share of the softmax function. The visualization of \(\mathcal{E}_{\text{learn}}\) with 5 available environments is shown in Figure 3(c). Additionally, training \(M_{\text{EI}}\) in EDNIL with more environments is much more efficient than clustering-based methods, such as the one proposed in KerHRM. ### Complex Datasets with Pre-trained Deep Learning Models This section extends the MLP to deep learning models with pre-trained weights for more complex data. With mini-batch fine-tuning, we consider all competitors but KerHRM, due to its efficiency issue. In Section 4.2.1, an image dataset, Waterbirds [29], with controlled spurious correlations is selected for evaluating the generalization on more high-dimensional images. In Section 4.2.2, a real-world NLP dataset, SNLI [3], is considered.
The biases in SNLI are naturally derived from the procedure of data collection, and we define biased subsets for evaluation following Dranker et al. [9]. #### 4.2.1 Discussions on Waterbirds In Waterbirds [29], each bird photograph, from the CUB dataset [31], is combined with one background image from the Places dataset [33]. Both birds and backgrounds are either from land or water, and our target is to predict the binarized species of the birds. At train time, landbirds and waterbirds are predominantly present in land and water backgrounds, respectively. Therefore, empirically trained models are prone to learning context features and fail to generalize as the background varies [2; 6; 10; 21; 29]. To split an in-distribution validation set, we merge the original training and validation data3 and split off 10% for hyper-parameter tuning. For testing, we observe all four combinations of birds and backgrounds in the original testing set. Among them, the minor subgroup (waterbirds on land) contributes the most challenging case. In this task, Resnet-34 [14] is chosen for mini-batch fine-tuning. Given the target \(Y\) and background \(BG\), we distribute \(Y=BG\) and \(Y\neq BG\) into two different environments and apply balanced class weights for the oracle settings of IRM. Footnote 3: In the original training split, backgrounds are unequally distributed in each class. However, they are equally distributed in the original validation split. **Results** The results are shown in Table 5. As observed in [6; 29], ERM suffers in the hardest case (i.e. waterbirds on land). EIIL also performs poorly in this case. With a more sophisticated learning framework, EDNIL narrows the gaps between subgroups and raises the worst-case performance. The results show that EDNIL is more resistant to distributional shifts. \begin{table} \begin{tabular}{c c c c} \hline \hline Color Noise & 0.1 & 0.5 & **0.9** \\ \hline ERM & **88.4 \(\pm\) 0.3** & 55.0 \(\pm\) 0.5 & 21.7 \(\pm\) 0.8 \\ \hline EIIL & 79.6 \(\pm\) 0.3 & 71.7 \(\pm\) 0.7 & 63.1 \(\pm\) 0.5 \\ KerHRM & 74.3 \(\pm\) 0.7 & 66.2 \(\pm\) 1.7 & 58.0 \(\pm\) 11.5 \\ KerHRM* & 71.3 \(\pm\) 0.7 & 68.5 \(\pm\) 0.5 & 66.1 \(\pm\) 0.7 \\ EDNIL & 77.7 \(\pm\) 0.4 & **76.8 \(\pm\) 0.3** & **75.2 \(\pm\) 0.4** \\ \hline IRM & 77.8 \(\pm\) 0.4 & 76.8 \(\pm\) 0.4 & 75.2 \(\pm\) 0.3 \\ \hline \hline \end{tabular} \end{table} Table 4: Testing accuracy (%) of CMNIST, where color noise 0.9 makes the worst-case environment. Figure 3: Analysis results. (a) Ablation study of the joint optimization on Adult-Confounded, where \(\gamma=0\) means the removal of \(L_{\text{IP}}\). (b) CMNIST performance of EDNIL with different configured numbers of environments. (c) Comparison between \(\mathcal{E}_{\text{learn}}\) inferred by EDNIL and the oracle environments \(\mathcal{E}_{\text{oracle}}\) on CMNIST. (d) Testing EIIL's and EDNIL's sensitivity to initialization on Waterbirds. **Choice of Initialization** Both EIIL and EDNIL take ERM as their initialization. As mentioned in Section 1, a heavy dependency on initialization is risky when the testing distribution is unavailable. Therefore, we take ERM models with different numbers of training steps for EIIL and EDNIL to verify their stability. The results in Figure 3(d) reveal the consistency of EDNIL, which accentuates its lower sensitivity to initialization. As implied by [6], EIIL could fail with a more well-fitted reference model in this case, since the ERM model might get distracted from the variant features.
One is prone to be misled into an undesirable choice for EIIL when seeking hyper-parameters without prior knowledge of distributional shifts. For instance, the validation score of EIIL with a 500-step reference model (95.8%) slightly surpasses that of the 100-step model (94.7%), but their performance on testing exhibits a significant discrepancy. On top of that, the selection of an effective reference model could be even more intractable when the spurious correlations in the data are relatively weak. Supporting evidence can be observed in Appendix C.2. #### 4.2.2 Discussions on SNLI The target of SNLI [3] is to predict the relation between two given sentences, a premise and a hypothesis. Recent studies [12; 24; 27] reveal a hypothesis bias in SNLI, which is characterized by patterns in hypothesis sentences highly correlated with a specific label. One can achieve low empirical risk without considering the premises during prediction. However, as the bias no longer holds, performance degradation occurs [11; 24]. We sample 100,000 examples and consider all classes, _entailment_, _neutral_ and _contradiction_, for our experiment. Following [9], we define three subsets by training a biased model with the hypothesis as its only input. The specification of the subsets is as follows: * Unbiased: Examples whose predictions from the biased model are ambiguous * Aligned: Examples that the biased model predicts correctly with high confidence * Misaligned: Examples that the biased model predicts incorrectly with high confidence The proportions of the three subsets are 17%, 67% and 16%, respectively. Due to its minority, the bias misaligned subset is more likely to be ignored and thus defines the worst-case performance. For all methods, DistilBERT [30] is taken as the pre-trained model for further mini-batch fine-tuning. For \(\mathcal{E}_{\text{oracle}}\), we assign the bias aligned subset to the first environment, and the misaligned one to the second. In order to make the bias prevalence equal in the two environments, the unbiased samples are scattered proportionally over the two environments. **Results** The results are shown in Table 6. As reported by [9], ERM receives a higher score on the bias aligned subset, but it fails in the bias misaligned case. Among all invariant learning methods without environment labels, only EDNIL improves on the bias misaligned subset. Namely, even though the definitions of the biases are at a high level, EDNIL is still capable of encoding and eliminating possible variant features. \begin{table} \begin{tabular}{c c c c} \hline \hline Subset & Unbiased & Aligned & **Misaligned** \\ \hline ERM & \(\mathbf{74.6\pm 0.3}\) & 94.7 \(\pm\) 0.2 & 52.6 \(\pm\) 0.9 \\ \hline EIIL & \(74.2\pm 0.3\) & \(\mathbf{95.0\pm 0.1}\) & 51.7 \(\pm\) 1.3 \\ EDNIL & \(74.3\pm 0.8\) & 94.2 \(\pm\) 0.2 & \(\mathbf{54.5\pm 1.0}\) \\ \hline IRM & \(74.0\pm 0.9\) & 92.3 \(\pm\) 0.5 & 56.9 \(\pm\) 1.1 \\ \hline \hline \end{tabular} \end{table} Table 6: Testing accuracy (%) on SNLI, where the misaligned subset defines the worst-case performance.
\begin{table} \begin{tabular}{c c c c c} \hline \hline (Y, BG) & (Land, Land) & (Water, Water) & (Land, Water) & **(Water, Land)** \\ \hline ERM & 99.4 \(\pm\) 0.0 & \(\mathbf{91.4\pm 0.2}\) & \(\mathbf{90.9\pm 0.8}\) & 72.8 \(\pm\) 1.0 \\ \hline EIIL & \(\mathbf{99.4\pm 0.3}\) & 90.5 \(\pm\) 1.8 & 89.3 \(\pm\) 3.7 & 68.6 \(\pm\) 5.4 \\ EDNIL & 98.5 \(\pm\) 0.6 & 89.9 \(\pm\) 1.5 & 90.3 \(\pm\) 3.0 & \(\mathbf{78.6\pm 4.3}\) \\ \hline IRM & 98.0 \(\pm\) 0.5 & 90.6 \(\pm\) 1.1 & 89.5 \(\pm\) 1.7 & 83.2 \(\pm\) 2.2 \\ \hline \hline \end{tabular} \end{table} Table 5: Testing accuracy (%) of Waterbirds. The subgroup (Water, Land) contributes the worst-case performance.

## 5 Limitation

Our learning algorithm for environment inference is based on the graphical model plotted in Figure 2. However, it is important to acknowledge that data may not always conform to the assumed process, resulting in potential biases that cannot be adequately captured by the proposed neural network. While we provide empirical studies of effectiveness on diverse datasets, we are aware that a stronger guarantee of performance is still required.

## 6 Conclusion and Societal Impacts

This work proposes EDNIL for training models invariant to distributional shifts. To infer environments without supervision, we propose a multi-head neural network structure to identify and diversify plausible environments. With joint optimization, the resulting invariant models are shown to be more robust than existing solutions on data having distinct characteristics and different strengths of biases. We attribute the effectiveness to the underlying learning objectives, which are consistent with recent studies of ideal environments. Additionally, we note that the classifier-like structure of the environment inference model makes EDNIL easy to combine with off-the-shelf pre-trained models and to train more efficiently.

Our contributions to invariant learning have broader societal impacts on numerous domains. For instance, it can encourage further research and real-world applications on debiasing machine learning systems. The identification and elimination of potential biases can facilitate more robust model training. It can be beneficial to many real applications where distributional shifts commonly occur, such as autonomous driving, social media and healthcare. Furthermore, as discussed by [6], invariant learning can promote algorithmic fairness in some ways. In particular, our empirical achievements on Adult-Confounded can prevent discrimination against sensitive demographic subgroups in the decision-making process. It shows that EDNIL has the potential to learn a fair predictor without prior knowledge of sensitive attributes, which is related to [13; 19; 32]. We expect that one can extend our work to more fairness benchmarks and criteria in the future.

Last but not least, some cautions are worth mentioning. Since the invariant learning algorithm claims to find invariant relationships, one might pay closer attention to the feature importance of the invariant model and even incorporate the results into further research or applications. Nevertheless, the results are reliable only when the model is trained appropriately. Insufficient data collection or a careless training process, for example, can certainly affect the identification of invariant features and thus mislead experimental findings.
As a result, we believe that adequate and careful preparation and analysis are essential before drawing conclusions from the inferred invariant relationships.

## Acknowledgments and Disclosure of Funding

We would like to thank the anonymous reviewers for their helpful suggestions. This material is based upon work supported by the National Science and Technology Council, ROC, under grant numbers 111-2221-E-002-146-MY3 and 110-2634-F-002-050.
2307.05426
Using BOLD-fMRI to Compute the Respiration Volume per Time (RVT) and Respiration Variation (RV) with Convolutional Neural Networks (CNN) in the Human Connectome Development Cohort
In many fMRI studies, respiratory signals are unavailable or do not have acceptable quality. Consequently, the direct removal of low-frequency respiratory variations from BOLD signals is not possible. This study proposes a one-dimensional CNN model for reconstruction of two respiratory measures, RV and RVT. Results show that a CNN can capture informative features from resting BOLD signals and reconstruct realistic RV and RVT timeseries. It is expected that application of the proposed method will lower the cost of fMRI studies, reduce complexity, and decrease the burden on participants as they will not be required to wear a respiratory bellows.
Abdoljalil Addeh, Fernando Vega, Rebecca J Williams, Ali Golestani, G. Bruce Pike, M. Ethan MacDonald
2023-07-03T18:06:36Z
http://arxiv.org/abs/2307.05426v1
Using BOLD-fMRI to compute the Respiration Volume per Time (RVT) and Respiration Variation (RV) with Convolutional Neural Networks (CNN) in the Human Connectome Development Cohort1

Footnote 1: 2023 International Society of Magnetic Resonance in Medicine. Toronto, Canada, June 2-9. Abstract Number 2709

Abdoljalil Addeh, Fernando Vega, Rebecca J Williams, Ali Golestani, G. Bruce Pike, M. Ethan MacDonald

Alberta Children's Hospital Research Institute, University of Calgary, Alberta, Canada; Department of Biomedical Engineering, Schulich School of Engineering, University of Calgary, Calgary, AB, Canada

###### Synopsis

In many fMRI studies, respiratory signals are unavailable or do not have acceptable quality. Consequently, the direct removal of low-frequency respiratory variations from BOLD signals is not possible. This study proposes a one-dimensional CNN model for the reconstruction of two respiratory measures, RV and RVT. Results show that a CNN can capture informative features from resting BOLD signals and reconstruct realistic RV and RVT timeseries. It is expected that application of the proposed method will lower the cost of fMRI studies, reduce complexity, and decrease the burden on participants as they will not be required to wear respiratory bellows.

## Introduction

Low-frequency variation in breathing rate and depth during functional magnetic resonance imaging (fMRI) scanning can alter cerebral blood flow and, consequently, the blood oxygen level-dependent (BOLD) signal. Over the past decade, respiratory response function models [1, 2, 3, 4] have shown good performance in modelling the confounds induced by low-frequency respiratory variation. While convolution models perform well, they require the collection of external signals, which can be cumbersome and prone to error in the MR environment. Respiratory data is not routinely recorded in many fMRI experiments due to the absence of measurement equipment in the scanner suite, insufficient time for set-up, financial concerns, or other reasons [5, 6, 7]. Even in studies where respiratory timeseries are measured, a large portion of the recorded signals do not have acceptable quality, particularly in pediatric populations [8]. This work proposes a method for estimating the two main respiratory timeseries used to correct fMRI, namely respiration variation (RV) and respiratory volume per time (RVT), directly from BOLD fMRI data.

## Method

In this study, we used resting-state fMRI scans and the associated respiratory belt traces in the Human Connectome Project in Development (HCP-D) dataset (from HCD0001305 to HCD2996590), where participants are children aged 5 to 21 years [9; 10]. From 2451 scans, 306 scans were selected based on the quality of their respiratory signals. Fig. 1 shows some examples of the usable scans (12.4%) and unusable scans (87.6%). The RVT is the difference between the maximum and minimum belt positions at the peaks of inspiration and expiration, divided by the time between the peaks of inspiration [1], and RV is defined as the standard deviation of the respiratory waveform within a six-second sliding window centered at each time point [11]. In the proposed method, a CNN is applied along the temporal dimension of the BOLD time series for RV and RVT reconstruction.
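For reference, the two measures defined above can be computed from a respiratory belt trace as in the following Python sketch. This is an illustration under stated assumptions rather than the paper's exact pipeline: `resp` is assumed to be a belt trace sampled at `fs` Hz, the peak-detection parameters are placeholders, and resampling of the measures to the fMRI TR grid is omitted.

```python
import numpy as np
from scipy.signal import find_peaks

def compute_rv(resp, fs, win_sec=6.0):
    """RV: standard deviation of the respiratory waveform within a
    six-second sliding window centered at each time point."""
    half = int(win_sec * fs / 2)
    return np.array([resp[max(0, t - half): t + half].std()
                     for t in range(len(resp))])

def compute_rvt(resp, fs, min_breath_sec=1.0):
    """RVT at each inspiration peak: (peak amplitude - following trough
    amplitude) / (time to the next inspiration peak)."""
    d = max(1, int(min_breath_sec * fs))
    peaks, _ = find_peaks(resp, distance=d)     # inspiration maxima
    troughs, _ = find_peaks(-resp, distance=d)  # expiration minima
    rvt, times = [], []
    for p0, p1 in zip(peaks[:-1], peaks[1:]):
        between = troughs[(troughs > p0) & (troughs < p1)]
        if between.size:
            rvt.append((resp[p0] - resp[between[0]]) / ((p1 - p0) / fs))
            times.append(p0)
    return np.array(rvt), np.array(times)
```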
To decrease computational complexity, the average BOLD signal time series from 90 functional regions of interest (ROIs) [12] were used as the main inputs. Fig. 2 shows the model input and output for RV estimation. For each RV point, BOLD signals centered at that point and covering 32 TRs before and after it were used as the input. The output of the model can be defined as any point of the moving window. In this paper, we implemented two approaches: the middle point of the window (Method 1 in Figure 2) and the end point of the window (Method 2 in Figure 2). In Method 1, the CNN can use both past and future information, but in Method 2 it can only use the past information hidden in the BOLD signals. The same procedure is applied for RVT estimation. A ten-fold cross-validation strategy was employed to evaluate the robustness of the proposed model. The performance of each fold was evaluated based on mean absolute error (MAE), mean squared error (MSE), coefficient of determination of the prediction (\(R^{2}\)), and dynamic time warping (DTW).

## Results

Fig. 3 shows the performance of the proposed method, considering the middle point of the window as the network's output, on two samples from the test subset. The trained network can reconstruct the RV and RVT timeseries well, especially when there are large changes in the RV and RVT values. Large changes in RV and RVT can happen when the subject takes a deep breath or if their breathing pattern changes. Variations in breathing pattern induce a confound in the BOLD signal, and the CNN model can use that information to reconstruct the RV and RVT timeseries.

Figure 1: Examples of usable and unusable respiratory signals. a), b), c) Signals with removable spikes (HCD0271031-2-PA; HCD0008117-2-PA; HCD0001305-2-PA), d) signal with unremovable spikes marked by color dashes (HCD2335344-2-PA), e) partially recorded signal (HCD1209536-1-AP), f) not recorded (signal with zero amplitude, HCD0110411-2-AP) and signal varying only in a small range (HCD0694564-1-PA), g) not sampled properly (HCD0146937-1-AP), h) connections changed at a certain point (HCD1240530-1-AP).

Figure 3: Performance of the proposed method when the middle point of the moving window of size 64 is selected as the CNN's output. In the HCP-D project, the number of TRs is 478, and using window size 64 leads to the loss of 32 points at the beginning and end of the respiratory measures. RV and RVT timeseries with higher values and fluctuations are reconstructed with higher accuracy, while RV and RVT timeseries having nearly constant values are difficult cases for the CNN to reconstruct.

Figure 2: Input and output in the proposed method for RV estimation. A respiratory signal is a [152960 \(\times\) 1] vector, and RV is a [number of fMRI volumes - window size] vector. The window size determines the length of the input data, and the number of ROI-averaged BOLD signals determines the number of channels. Therefore, each input has a size of [64 \(\times\) 90], where 64 is the window size and 90 is the number of ROIs. This window size is chosen to encompass both immediate and longer-range respiratory changes. A similar system is designed for RVT estimation.

Fig. 4 shows the performance of the CNN, considering the middle point of the window as the CNN's output, in terms of MAE, MSE, correlation, and DTW. As the performance is a function of the variation, no one performance metric is satisfactory.
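The architecture details of the CNN are not specified in this synopsis, so the following PyTorch sketch only fixes what is stated above: a one-dimensional CNN over the temporal dimension with a [64 × 90] input window (90 ROI channels, 64 TRs), a single regression output, and the MSE loss used in these experiments. Channel widths and kernel sizes are assumptions.

```python
import torch
import torch.nn as nn

class RespCNN(nn.Module):
    """Minimal 1D CNN mapping a [90-channel x 64-TR] BOLD window to one
    RV (or RVT) value. Layer widths are illustrative placeholders."""
    def __init__(self, n_rois=90, win=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_rois, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(64, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.head = nn.Sequential(nn.Flatten(), nn.Linear(32 * (win // 4), 1))

    def forward(self, x):               # x: [batch, 90, 64]
        return self.head(self.features(x)).squeeze(-1)

model = RespCNN()
x = torch.randn(8, 90, 64)              # 8 windows of ROI-averaged BOLD
loss = nn.MSELoss()(model(x), torch.randn(8))  # MSE loss, as above
```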
Fig. 5 shows the impact of the output point location on the network's performance, i.e., whether the point is estimated at the middle or at the end of the window. The obtained results show that using information from both sides, before and after the current breath, leads to better performance.

Figure 4: Performance of the proposed method shown as box plots of MAE, MSE, correlation, and DTW. Since the MSE squares the errors, it strongly penalizes outlier predictions with large errors. As the pediatric population is characterized by higher respiration rates and more irregular breathing patterns, MSE is an appropriate choice of loss function. Therefore, we used MSE as the loss function in these experiments. In general, the differences among the metrics imply that using a single loss function for training a machine learning model is not sufficient.

Figure 5: Impact of the output point location, middle or end point of the moving window, on the network's performance. There is a noticeable difference between the two approaches, which underscores the importance of using the input information properly.

## Discussion

The current work demonstrates the ability to compute both the RV and RVT signals from the fMRI data alone in a pediatric population, which eliminates the need for an external respiratory measurement device and reduces cost. In this paper, we used a window size of 64, but this is an adjustable parameter. Larger window sizes provide more information about baseline breathing rate and depth and enable the model to provide better estimates of variation. An important limitation of larger window sizes is that they result in more time points being discarded at the beginning and end of the scan due to edge effects. This may limit the minimum duration of scans that can be reasonably reconstructed with the proposed approach, but for longer scans, a larger window might be acceptable. For fMRI studies with short scan times, the proposed method with a window size of 16 or 32 would be a reasonable choice with fair performance. There is potential to enrich a large volume of existing resting-state fMRI datasets through the retrospective addition of respiratory signal variation information, which is an interesting potential application of the developed method.

## Acknowledgements

The authors would like to thank the University of Calgary, in particular the Schulich School of Engineering and the Departments of Biomedical Engineering and Electrical & Software Engineering; the Cumming School of Medicine and the Departments of Radiology and Clinical Neurosciences; as well as the Hotchkiss Brain Institute, Research Computing Services and the Digital Alliance of Canada for providing resources. The authors would like to thank the Human Connectome Project for making the data available. JA is funded in part by a graduate scholarship from the Natural Sciences and Engineering Research Council Brain Create. GBP acknowledges support from the Campus Alberta Innovates Chair program, the Canadian Institutes for Health Research (FDN-143290), and the Natural Sciences and Engineering Research Council (RGPIN-03880). MEM acknowledges support from start-up funding at UCalgary, a Natural Sciences and Engineering Research Council Discovery Grant (RGPIN-03552), and an Early Career Researcher Supplement (DGECR-00124).
2303.16813
Optimal approximation using complex-valued neural networks
Complex-valued neural networks (CVNNs) have recently shown promising empirical success, for instance for increasing the stability of recurrent neural networks and for improving the performance in tasks with complex-valued inputs, such as in MRI fingerprinting. While the overwhelming success of Deep Learning in the real-valued case is supported by a growing mathematical foundation, such a foundation is still largely lacking in the complex-valued case. We thus analyze the expressivity of CVNNs by studying their approximation properties. Our results yield the first quantitative approximation bounds for CVNNs that apply to a wide class of activation functions including the popular modReLU and complex cardioid activation functions. Precisely, our results apply to any activation function that is smooth but not polyharmonic on some non-empty open set; this is the natural generalization of the class of smooth and non-polynomial activation functions to the complex setting. Our main result shows that the error for the approximation of $C^k$-functions scales as $m^{-k/(2n)}$ for $m \to \infty$ where $m$ is the number of neurons, $k$ the smoothness of the target function and $n$ is the (complex) input dimension. Under a natural continuity assumption, we show that this rate is optimal; we further discuss the optimality when dropping this assumption. Moreover, we prove that the problem of approximating $C^k$-functions using continuous approximation methods unavoidably suffers from the curse of dimensionality.
Paul Geuchen, Felix Voigtlaender
2023-03-29T15:56:43Z
http://arxiv.org/abs/2303.16813v2
# Optimal approximation of \(C^{k}\)-functions using shallow complex-valued neural networks

###### Abstract.

We prove a quantitative result for the approximation of functions of regularity \(C^{k}\) (in the sense of real variables) defined on the complex cube \(\Omega_{n}:=[-1,1]^{n}+i[-1,1]^{n}\subseteq\mathbb{C}^{n}\) using shallow complex-valued neural networks. Precisely, we consider neural networks with a single hidden layer and \(m\) neurons, i.e., networks of the form \(z\mapsto\sum_{j=1}^{m}\sigma_{j}\cdot\phi\big{(}\rho_{j}^{T}z+b_{j}\big{)}\), and show that one can approximate every function in \(C^{k}\left(\Omega_{n};\mathbb{C}\right)\) using a function of that form with error of the order \(m^{-k/(2n)}\) as \(m\to\infty\), provided that the activation function \(\phi:\mathbb{C}\to\mathbb{C}\) is smooth but not polyharmonic on some non-empty open set. Furthermore, we show that the selection of the weights \(\sigma_{j},b_{j}\in\mathbb{C}\) and \(\rho_{j}\in\mathbb{C}^{n}\) is continuous with respect to \(f\) and prove that the derived rate of approximation is optimal under this continuity assumption. We also discuss the optimality of the result for a possibly _discontinuous_ choice of the weights.

Key words and phrases: Complex-valued neural networks, Shallow neural networks, Continuously differentiable functions, approximation rates

2020 Mathematics Subject Classification: 68T07, 41A25, 41A30, 41A63, 31A30, 30E10

Both authors contributed equally to this work.

the network. Here he assumed that the activation function is smooth on an open interval, where at some point of this interval no derivative vanishes. This is equivalent to the activation function being smooth and non-polynomial on that interval [12, p. 53]. We are going to generalize this result to the setting of complex-valued networks by proving that one can approximate every function in \(C^{k}\left(\Omega_{n};\mathbb{C}\right)\) with an error of the order \(m^{-k/(2n)}\) using shallow complex-valued neural networks with \(m\) neurons in the hidden layer. Here we define the set \(\Omega_{n}\) as \(\Omega_{n}:=[-1,1]^{n}+i[-1,1]^{n}\). The result holds whenever the activation function is smooth and non-polyharmonic on some non-empty open set. This is a very natural condition, since for polyharmonic activation functions there exist \(C^{k}\)-functions that cannot be approximated at all below some error threshold using neural networks with this activation function [29]. Furthermore, the present paper studies to what extent the approximation order of \(m^{-k/(2n)}\) is _optimal_, meaning that an order of \(m^{-k/(2n)-\alpha}\) (where \(\alpha>0\)) cannot be achieved. Here it turns out that the derived order of approximation is indeed optimal in the setting where the weight selection is _continuous_, meaning that the map that assigns to a function in \(C^{k}\left(\Omega_{n};\mathbb{C}\right)\) its corresponding network weights is continuous with respect to some norm on \(C^{k}(\Omega_{n};\mathbb{C})\). We are also going to investigate this optimality result further by dropping the continuity assumption and constructing two special smooth and non-polyharmonic activation functions: for the first one, the order of approximation can indeed be strictly increased via a discontinuous selection of the related weights; for the second one, we are going to show that the order of \(m^{-k/(2n)}\) cannot be improved, even if one allows a discontinuous weight selection.
This in particular shows that in the given generality of arbitrary smooth, non-polyharmonic activation functions, the approximation rate \(m^{-k/(2n)}\) cannot be improved.

### Main Results

In this work we are going to deal with so-called shallow complex-valued neural networks, meaning complex-valued neural networks with a single hidden layer. Furthermore, we are only considering networks where the map connecting the hidden with the final layer is not just affine-linear but even linear, and where the output dimension of the network is \(1\). Precisely, this means that we consider functions of the form \[\mathbb{C}^{n}\ni z\mapsto\sum_{j=1}^{m}\sigma_{j}\phi\left(\rho_{j}^{T}\cdot z+\eta_{j}\right)\in\mathbb{C},\] with \(\rho_{1},...,\rho_{m}\in\mathbb{C}^{n}\), \(\sigma_{1},...,\sigma_{m},\eta_{1},...,\eta_{m}\in\mathbb{C}\) and an _activation function_ \(\phi:\mathbb{C}\rightarrow\mathbb{C}\). Here, \(m\) denotes the number of neurons of the network. Our first main result is that it is possible to approximate complex polynomials in \(z\) and \(\overline{z}\) using shallow complex-valued neural networks. Precisely, for \(m\in\mathbb{N}\) let \[\mathcal{P}_{m}:=\left\{\mathbb{C}^{n}\rightarrow\mathbb{C},\ z\mapsto\sum_{\mathbf{m}\leq m}\sum_{\boldsymbol{\ell}\leq m}a_{\mathbf{m},\boldsymbol{\ell}}z^{\mathbf{m}}\overline{z}^{\boldsymbol{\ell}}:\ a_{\mathbf{m},\boldsymbol{\ell}}\in\mathbb{C}\right\},\] where we are summing over all multi-indices \(\mathbf{m},\boldsymbol{\ell}\in\mathbb{N}_{0}^{n}\) with \(\mathbf{m}_{j},\boldsymbol{\ell}_{j}\leq m\) for every \(j\in\{1,...,n\}\). Note that \(\mathcal{P}_{m}\) is a finite-dimensional space, so that it makes sense to talk about bounded subsets of \(\mathcal{P}_{m}\). Then we show the following:

**Theorem 1.1**.: _Let \(m,n\in\mathbb{N}\), \(\varepsilon>0\) and \(\phi:\mathbb{C}\rightarrow\mathbb{C}\) be smooth and non-polyharmonic on an open set \(\varnothing\neq U\subseteq\mathbb{C}\). Let \(\mathcal{P}^{\prime}\subseteq\mathcal{P}_{m}\) be bounded and set \(N:=(4m+1)^{2n}\). Then there exist certain coefficients \(\rho_{1},...,\rho_{N}\in\mathbb{C}^{n}\) and \(b_{m}\in U\) with the following property: For each polynomial \(p\in\mathcal{P}^{\prime}\) there are coefficients \(\sigma_{1},...,\sigma_{N}\in\mathbb{C}\), such that_ \[\left\|p-f\right\|_{L^{\infty}(\Omega_{n};\mathbb{C})}\leq\varepsilon,\] _where_ \[f:\quad\Omega_{n}\rightarrow\mathbb{C},\quad z\mapsto\sum_{j=1}^{N}\sigma_{j}\phi\left(\rho_{j}^{T}z+b_{m}\right). \tag{1.1}\]

Note that \(N\) only depends on the degree of the polynomials, not on the desired approximation accuracy, and that the coefficients \(\rho_{j}\) are chosen independently of the specific polynomial \(p\). Using this approximation result and the fact that \(C^{k}\)-functions can be approximated at a certain rate by complex polynomials in \(z\) and \(\overline{z}\), we can then study the approximation of arbitrary \(C^{k}\)-functions.
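For illustration, networks of the form considered here are straightforward to evaluate numerically. The following numpy sketch (not part of the paper) uses the modReLU, \(\sigma_{\mathrm{modReLU}}(z)=\mathrm{ReLU}(|z|+\beta)\,z/|z|\), one of the admissible activation functions discussed in Section 5; the weights are random placeholders rather than the constructed ones.

```python
import numpy as np

def modrelu(z, beta=-0.5):
    """modReLU activation: ReLU(|z| + beta) * z / |z|."""
    mag = np.abs(z)
    return np.where(mag + beta > 0,
                    (mag + beta) * z / np.maximum(mag, 1e-12), 0.0)

def shallow_cvnn(z, rho, sigma, eta, phi=modrelu):
    """Evaluate z -> sum_j sigma_j * phi(rho_j^T z + eta_j).
    z: (n,) complex input; rho: (m, n); sigma, eta: (m,)."""
    return np.sum(sigma * phi(rho @ z + eta))

rng = np.random.default_rng(0)
m, n = 8, 2
rho = rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))
sigma = rng.normal(size=m) + 1j * rng.normal(size=m)
eta = rng.normal(size=m) + 1j * rng.normal(size=m)
z = np.array([0.3 + 0.1j, -0.7 + 0.5j])   # a point in Omega_2
print(shallow_cvnn(z, rho, sigma, eta))
```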
**Theorem 1.2**.: _Let \(n,k\in\mathbb{N}\). Then there exists a constant \(c=c(n,k)>0\) with the following property: For any function \(\phi:\mathbb{C}\rightarrow\mathbb{C}\) that is smooth and non-polyharmonic on an open set \(\varnothing\neq U\subseteq\mathbb{C}\) and for any \(m\in\mathbb{N}\) there are coefficients \(\rho_{1},...,\rho_{m}\in\mathbb{C}^{n}\) and \(b_{m}\in U\) with the following property: For any \(f\in C^{k}\left(\Omega_{n};\mathbb{C}\right)\) there are complex numbers \(\sigma_{1}=\sigma_{1}(f),...,\sigma_{m}=\sigma_{m}(f)\in\mathbb{C}\), such that_ \[\left\|f-g\right\|_{L^{\infty}\left(\Omega_{n};\mathbb{C}\right)}\leq c\cdot m^{-k/(2n)}\cdot\left\|f\right\|_{C^{k}\left(\Omega_{n};\mathbb{C}\right)},\] _where \(g\) is defined as_ \[g:\quad\Omega_{n}\rightarrow\mathbb{C},\quad g(z):=\sum_{j=1}^{m}\sigma_{j}\phi\left(\rho_{j}^{T}z+b_{m}\right).\] _Furthermore, the map \(f\mapsto\sigma_{j}(f)\) is a continuous linear functional with respect to the \(L^{\infty}\)-norm for every \(j\in\{1,...,m\}\)._

Discussing the optimality of this result, based on work by DeVore et al. [10] we prove the following result.

**Theorem 1.3**.: _Let \(n\in\mathbb{N}\) and \(k\in\mathbb{N}\). Then there exists a constant \(c=c(n,k)>0\) with the following property: For any \(m\in\mathbb{N}\), \(\phi\in C(\mathbb{C};\mathbb{C})\) and any map_ \[\eta:\quad C^{k}\left(\Omega_{n};\mathbb{C}\right)\rightarrow(\mathbb{C}^{n})^{m}\times\mathbb{C}^{m}\times\mathbb{C}^{m},\quad g\mapsto\left(\eta_{1}(g),\eta_{2}(g),\eta_{3}(g)\right)\] _which is continuous with respect to some norm on \(C^{k}(\Omega_{n};\mathbb{C})\) there exists \(f\in C^{k}\left(\Omega_{n};\mathbb{C}\right)\) satisfying \(\left\|f\right\|_{C^{k}\left(\Omega_{n};\mathbb{C}\right)}\leq 1\) and_ \[\left\|f-\Phi(f)\right\|_{L^{\infty}\left(\Omega_{n};\mathbb{C}\right)}\geq c\cdot m^{-k/(2n)},\] _where \(\Phi(f)\in C(\Omega_{n};\mathbb{C})\) is given by_ \[\Phi(f)(z):=\sum_{j=1}^{m}\left(\eta_{3}(f)\right)_{j}\phi\left(\left[\eta_{1}(f)\right]_{j}^{T}z+\left(\eta_{2}(f)\right)_{j}\right).\]

This result shows that the approximation order of \(m^{-k/(2n)}\) is optimal if one insists on a continuous selection of the weights. For the case of possibly _discontinuous_ weight selection we deduce the following two results:

**Theorem 1.4**.: _There exists a function \(\phi\in C^{\infty}(\mathbb{C};\mathbb{C})\) which is non-polyharmonic with the following additional property: For every \(k\in\mathbb{N}\) and \(n\in\mathbb{N}\) there exists a constant \(c=c(n,k)>0\) such that for any \(m\in\mathbb{N}\) and \(f\in C^{k}\left(\Omega_{n};\mathbb{C}\right)\) there are \(\rho_{1},...,\rho_{m}\in\mathbb{C}^{n}\) and \(\ell_{1},...,\ell_{m}\in\mathbb{N}\), such that_ \[\left\|f(z)-\sum_{j=1}^{m}\phi\left(\rho_{j}^{T}z+3\ell_{j}\right)\right\|_{L^{\infty}\left(\Omega_{n};\mathbb{C}\right)}\leq c\cdot m^{-k/(2n-1)}\cdot\left\|f\right\|_{C^{k}\left(\Omega_{n};\mathbb{C}\right)}.\]

**Theorem 1.5**.: _Let \(n\in\mathbb{N}\), \(k\in\mathbb{N}\) and_ \[\phi:\quad\mathbb{C}\rightarrow\mathbb{C},\quad\phi(z):=\frac{1}{1+e^{-\operatorname{Re}(z)}}.\] _In particular, \(\phi\) is smooth but non-polyharmonic._
_Then there exists a constant \(c=c(n,k)>0\) with the following property: For any \(m\in\mathbb{N}^{\geq 2}\) there is a function \(f\in C^{k}\left(\Omega_{n};\mathbb{C}\right)\) with \(\left\|f\right\|_{C^{k}\left(\Omega_{n};\mathbb{C}\right)}\leq 1\), such that for every choice of coefficients \(\rho_{1},...,\rho_{m}\in\mathbb{C}^{n}\), \(\eta_{1},...,\eta_{m}\in\mathbb{C}\) and \(\sigma_{1},...,\sigma_{m}\in\mathbb{C}\) we have_ \[\left\|f(z)-\sum_{j=1}^{m}\sigma_{j}\cdot\phi\left(\rho_{j}^{T}z+\eta_{j}\right)\right\|_{L^{\infty}\left(\Omega_{n};\mathbb{C}\right)}\geq c\cdot\left(m\cdot\ln(m)\right)^{-k/(2n)}.\]

The significance of these results is as follows: The approximation rate provided by Theorem 1.2 can be strictly improved for some special activation functions (e.g., for the function \(\phi\) from Theorem 1.4), but this improvement is not generally possible for arbitrary (smooth, non-polyharmonic) activation functions; there are such activation functions for which the order of \(m^{-k/(2n)}\) is optimal (up to logarithmic factors), even if discontinuous weight selection is allowed.

### Related Work

The main inspiration for the present work is the paper [22] by Mhaskar, where the real-valued counterpart of Theorem 1.2 is stated and proven. Many techniques in the present paper are in fact adaptations of what has been done in [22]. Particularly noteworthy are the multidimensional divided differences formula used to approximate polynomials and the approximation of \(C^{k}\)-functions using Fourier-analytic results and Chebyshev polynomials. In addition, the work [21] should be mentioned, which serves as an inspiration for Theorem 1.4. In [21] it was proven that there exists a real activation function such that one can approximate continuous functions on the unit cube arbitrarily well using neural networks of _constant size_ (only depending on the input dimension) which have two hidden layers. A suitable adaptation of the techniques used in [21] in combination with a result from [14] is the basis for our proof of Theorem 1.4. Another important inspiration for the present work is [31], in which the approximation properties of real-valued networks using the ReLU activation function are considered. In particular, this paper also distinguishes between a continuous and a discontinuous selection of network weights, but generally considers networks with more than one hidden layer. In the setting of [31], the differences between continuous and discontinuous weight assignment are significant: while with continuous weight assignment at most a rate of \(W^{-k/n}\) is possible (and in fact achievable), with discontinuous weight assignment an improvement of the rate to \(W^{-2k/n}\) is possible. Here \(k\) is, as usual, the regularity of the functions under consideration, \(n\) the input dimension and \(W\) the number of parameters of the network. When it comes to general literature about mathematical properties of complex-valued neural networks, surprisingly little work can be found. The most prominent result in the theory of CVNNs is the _Universal Approximation Theorem for Complex-Valued Neural Networks_ [29], discussing for which activation functions CVNNs are uniform approximators on compact sets. In particular, it has been shown that shallow CVNNs have this property if and only if the activation function \(\phi\) is not polyharmonic. Thus, the condition assumed in this work is quite natural. Another result from the field of approximation theory of CVNNs can be found in [24].
Here it is proven that shallow CVNNs with _entire_ functions as activation functions can uniformly approximate exactly those continuous functions on compact sets which can also be uniformly approximated by complex polynomials on compact sets. Note that, unlike in the real setting, this does not correspond to the entirety of continuous functions. For example, it is not possible to uniformly approximate the function \(z\mapsto\overline{z}\) on compact sets with non-empty interior using holomorphic functions (in particular with polynomials). Regarding quantitative approximation results for CVNNs, the only existing work of which we are aware is [8], analyzing the approximation capabilities of _deep_ CVNNs where the modReLU is used as activation function. Since the modReLU satisfies our condition on an admissible activation function (as shown in Section 5), the present work can be seen as an improvement over [8]. The main differences can be summarized as follows:

* We consider very general activation functions including but not limited to the modReLU.
* We improve the approximation order by a log factor.
* We show that the order of approximation can in fact be achieved using shallow networks instead of the deep networks used in [8].

### Notation

Throughout the paper, multi-indices (i.e. elements of \(\mathbb{N}_{0}^{n}\)) are denoted using boldface. For \(\mathbf{m}\in\mathbb{N}_{0}^{n}\) we have the usual notation \(|\mathbf{m}|=\sum_{j=1}^{n}\mathbf{m}_{j}\). For a number \(m\in\mathbb{N}_{0}\) and another multi-index \(\mathbf{p}\in\mathbb{N}_{0}^{n}\) we write \(\mathbf{m}\leq m\) iff \(\mathbf{m}_{j}\leq m\) for every \(j\in\{1,...,n\}\) and \(\mathbf{m}\leq\mathbf{p}\) iff \(\mathbf{m}_{j}\leq\mathbf{p}_{j}\) for every \(j\in\{1,...,n\}\). Furthermore we write \[\begin{pmatrix}\mathbf{p}\\ \mathbf{r}\end{pmatrix}=\prod_{j=1}^{n}\begin{pmatrix}\mathbf{p}_{j}\\ \mathbf{r}_{j}\end{pmatrix}\] for two multi-indices \(\mathbf{p},\mathbf{r}\in\mathbb{N}_{0}^{n}\) with \(\mathbf{r}\leq\mathbf{p}\). For a complex vector \(z\in\mathbb{C}^{n}\) we write \[z^{\mathbf{m}}=\prod_{j=1}^{n}z_{j}^{\mathbf{m}_{j}}\quad\text{and}\quad\overline{z}^{\mathbf{m}}=\prod_{j=1}^{n}\overline{z_{j}}^{\mathbf{m}_{j}}.\] For a point \(x\in\mathbb{R}^{n}\) and \(r>0\) we define \[B_{r}(x):=\{y\in\mathbb{R}^{n}:\ \left\|x-y\right\|<r\}\] with \(\left\|\cdot\right\|\) denoting the usual Euclidean distance. This definition is analogously transferred to \(\mathbb{C}^{n}\). Furthermore, for any function \(f:\mathbb{R}^{n}\to\mathbb{R}^{m}\) and a subset \(A\subseteq\mathbb{R}^{n}\) we denote \[\left\|f\right\|_{L^{\infty}(A;\mathbb{R}^{m})}:=\sup_{x\in A}\left\|f(x)\right\|.\] Note that this is finite if \(f\) is continuous and \(A\) compact. This definition is also analogously transferred to \(\mathbb{C}^{n}\) and \(\mathbb{C}^{m}\). For a function \(f:U\to\mathbb{R}^{m}\) with an open set \(U\subseteq\mathbb{R}^{n}\) we write \[f\in C^{k}\left(U;\mathbb{R}^{m}\right)\ \Leftrightarrow\ \partial^{\mathbf{p}}f\text{ exists and is continuous for every }\mathbf{p}\in\mathbb{N}_{0}^{n}\text{ with }\left|\mathbf{p}\right|\leq k.\] \(\mathbb{C}^{n}\) is canonically isomorphic to \(\mathbb{R}^{2n}\) by virtue of the isomorphism \[\varphi_{n}:\quad\mathbb{R}^{2n}\to\mathbb{C}^{n},\quad(x_{1},...,x_{n},y_{1},...,y_{n})\mapsto(x_{1}+iy_{1},...,x_{n}+iy_{n})\,. \tag{1.2}\] Let \(U\subseteq\mathbb{C}^{n}\) be any subset of \(\mathbb{C}^{n}\) and \(f:U\to\mathbb{C}\).
Then the function \(\widetilde{f}:\varphi_{n}^{-1}(U)\to\mathbb{R}^{2}\) associated to \(f\) is defined as \[\widetilde{f}:=\varphi_{1}^{-1}\circ f\circ\varphi_{n}\big{|}_{\varphi_{n}^{-1}(U)}.\] If \(U\) is open, we can study real differentiability of \(\widetilde{f}\). We write \[f\in C^{k}(U;\mathbb{C}):\Leftrightarrow\widetilde{f}\in C^{k}\left(\varphi_{n}^{-1}(U);\mathbb{R}^{2}\right).\] If \(f\in C^{k}(U;\mathbb{C})\) and \(\mathbf{p}\in\mathbb{N}_{0}^{2n}\) is a multi-index with \(\left|\mathbf{p}\right|\leq k\), we denote \[\partial^{\mathbf{p}}f:=\varphi_{1}\circ\partial^{\mathbf{p}}\widetilde{f}\circ\varphi_{n}^{-1}\big{|}_{U}.\] In the present work, the goal is to study the approximation capabilities of complex-valued neural networks on the complex cube \[\Omega_{n}:=\{(z_{1},...,z_{n})\in\mathbb{C}^{n}:\ \operatorname{Re}\left(z_{j}\right),\ \operatorname{Im}\left(z_{j}\right)\in[-1,1],\ j=1,...,n\}\,.\] Therefore, we need to introduce \(C^{k}\)-functions on this cube: We define \[C^{k}\left([-1,1]^{n};\mathbb{R}^{m}\right):=\left\{f\in C([-1,1]^{n};\mathbb{R}^{m}):\begin{array}{c}f\big{|}_{(-1,1)^{n}}\in C^{k}\left((-1,1)^{n};\mathbb{R}^{m}\right)\text{ and }\\ \partial^{\mathbf{p}}\big{(}f\big{|}_{(-1,1)^{n}}\big{)}\text{ uniformly continuous }\forall\left|\mathbf{p}\right|\leq k\end{array}\right\}.\] For a function \(f\in C^{k}\left([-1,1]^{n};\mathbb{R}^{m}\right)\) and any multi-index \(\mathbf{p}\in\mathbb{N}_{0}^{n}\) with \(\left|\mathbf{p}\right|\leq k\) we define \(\partial^{\mathbf{p}}f\) as the unique continuous extension of \(\partial^{\mathbf{p}}\big{(}f\big{|}_{(-1,1)^{n}}\big{)}\) to \([-1,1]^{n}\) which exists because of the uniform continuity of \(\partial^{\mathbf{p}}\big{(}f\big{|}_{(-1,1)^{n}}\big{)}\). We then define the \(C^{k}\)-norm as \[\left\|f\right\|_{C^{k}\left([-1,1]^{n};\mathbb{R}^{m}\right)}:=\max_{\left|\mathbf{p}\right|\leq k}\ \max_{x\in[-1,1]^{n}}\left\|(\partial^{\mathbf{p}}f)(x)\right\|_{2}.\] We translate this definition to the setting of complex-valued functions by letting \[C^{k}\left(\Omega_{n};\mathbb{C}\right):=\left\{f:\Omega_{n}\to\mathbb{C}:\ \widetilde{f}\in C^{k}\left([-1,1]^{2n};\mathbb{R}^{2}\right)\right\}.\] Additionally, as in the setting of open sets, we denote \[\partial^{\mathbf{p}}f:=\varphi_{1}\circ\partial^{\mathbf{p}}\widetilde{f}\circ\varphi_{n}^{-1}\big{|}_{\Omega_{n}}\] for any \(\mathbf{p}\in\mathbb{N}_{0}^{2n}\) with \(\left|\mathbf{p}\right|\leq k\) and \[\left\|f\right\|_{C^{k}(\Omega_{n};\mathbb{C})}:=\max_{\left|\mathbf{p}\right|\leq k}\,\max_{z\in\Omega_{n}}\left|\left(\partial^{\mathbf{p}}f\right)(z)\right|.\] Note that by definition we have \[\left\|f\right\|_{C^{k}(\Omega_{n};\mathbb{C})}=\|\widetilde{f}\|_{C^{k}([-1,1]^{2n};\mathbb{R}^{2})}.\] For \(f\in C^{1}(U;\mathbb{C})\), where \(U\subseteq\mathbb{C}\) is an open subset of \(\mathbb{C}\), we denote \[\partial_{\mathrm{wirt}}f:=\frac{1}{2}\left(\partial^{(1,0)}f-i\partial^{(0,1)}f\right),\quad\overline{\partial}_{\mathrm{wirt}}f:=\frac{1}{2}\left(\partial^{(1,0)}f+i\partial^{(0,1)}f\right).\] These operators are called the _Wirtinger derivatives_. They have the following properties that we are going to use, which can be found for example in [16, E. 1a]:

1. \(\partial_{\mathrm{wirt}}\) and \(\overline{\partial}_{\mathrm{wirt}}\) are both \(\mathbb{C}\)-linear operators on the set \(C^{1}(U;\mathbb{C})\).
2. \(f\) is complex-differentiable in \(z\in U\) iff \(\overline{\partial}_{\mathrm{wirt}}f(z)=0\) and in this case the equality \[\partial_{\mathrm{wirt}}f(z)=f^{\prime}(z)\] holds true, with \(f^{\prime}(z)\) denoting the complex derivative of \(f\) at \(z\).
3. We have the conjugation rules \[\overline{\partial_{\mathrm{wirt}}f}=\overline{\partial}_{\mathrm{wirt}}\overline{f}\text{ and }\overline{\overline{\partial}_{\mathrm{wirt}}f}=\partial_{\mathrm{wirt}}\overline{f}.\]
4. If \(g\in C^{1}(U;\mathbb{C})\), the following product rules for Wirtinger derivatives hold for every \(z\in U\): \[\partial_{\mathrm{wirt}}(f\cdot g)(z)=\partial_{\mathrm{wirt}}f(z)\cdot g(z)+f(z)\cdot\partial_{\mathrm{wirt}}g(z),\] \[\overline{\partial}_{\mathrm{wirt}}(f\cdot g)(z)=\overline{\partial}_{\mathrm{wirt}}f(z)\cdot g(z)+f(z)\cdot\overline{\partial}_{\mathrm{wirt}}g(z).\]
5. If \(V\subseteq\mathbb{C}\) is an open set and \(g\in C^{1}(V;\mathbb{C})\) with \(g(V)\subseteq U\), then the following chain rules for Wirtinger derivatives hold true: \[\partial_{\mathrm{wirt}}(f\circ g)=\left[\left(\partial_{\mathrm{wirt}}f\right)\circ g\right]\cdot\partial_{\mathrm{wirt}}g+\left[\left(\overline{\partial}_{\mathrm{wirt}}f\right)\circ g\right]\cdot\partial_{\mathrm{wirt}}\overline{g},\] \[\overline{\partial}_{\mathrm{wirt}}(f\circ g)=\left[\left(\partial_{\mathrm{wirt}}f\right)\circ g\right]\cdot\overline{\partial}_{\mathrm{wirt}}g+\left[\left(\overline{\partial}_{\mathrm{wirt}}f\right)\circ g\right]\cdot\overline{\partial}_{\mathrm{wirt}}\overline{g}.\]
6. If \(f\in C^{2}(U;\mathbb{C})\) then we have \[\Delta f(z)=4\left(\partial_{\mathrm{wirt}}\overline{\partial}_{\mathrm{wirt}}f\right)(z)\] for every \(z\in U\), with \(\Delta\) denoting the usual Laplace operator \(\Delta=\partial^{(2,0)}+\partial^{(0,2)}\) (cf. [6, equation (1.7)]).

A function \(f\in C^{\infty}(U;\mathbb{C})\) is called _polyharmonic_ (on an open set \(U\subseteq\mathbb{C}\)) if there is \(k\in\mathbb{N}\) such that \(\Delta^{k}f\equiv 0\). Furthermore, we can also define partial Wirtinger derivatives of functions in multiple variables, meaning \(f\in C^{1}(U;\mathbb{C})\), where \(U\subseteq\mathbb{C}^{n}\) is an open set. The formal definition of the partial Wirtinger derivative with respect to the \(j\)-th variable is \[\partial_{\mathrm{wirt}}^{e_{j}}f:=\frac{1}{2}\left(\partial^{e_{j}}f-i\partial^{e_{n+j}}f\right),\quad\overline{\partial}_{\mathrm{wirt}}^{e_{j}}f:=\frac{1}{2}\left(\partial^{e_{j}}f+i\partial^{e_{n+j}}f\right).\] It is easy to see that since Wirtinger derivatives are just a complex linear combination of real derivatives, the symmetry of derivatives also holds in the case of Wirtinger derivatives, i.e., all Wirtinger derivatives commute for functions of sufficient regularity. Thus, we introduce the following notation: For an open set \(U\subseteq\mathbb{C}^{n}\), a function \(f\in C^{k}(U;\mathbb{C})\) and a multi-index \(\boldsymbol{\ell}\in\mathbb{N}_{0}^{n}\) with \(\left|\boldsymbol{\ell}\right|\leq k\) we write \(\partial_{\mathrm{wirt}}^{\boldsymbol{\ell}}f\) and \(\overline{\partial}_{\mathrm{wirt}}^{\boldsymbol{\ell}}f\) for the iterated Wirtinger derivatives according to the multi-index \(\boldsymbol{\ell}\).
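Since the Wirtinger calculus above is used throughout the paper, a quick symbolic sanity check can be helpful. The following sympy sketch (an illustration, not part of the paper) verifies properties (2) and (6) for the sample function \(f(z)=z^{2}\overline{z}\).

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y
zbar = x - sp.I * y

def wirt(f):      # partial_wirt     = (d/dx - i d/dy) / 2
    return sp.simplify((sp.diff(f, x) - sp.I * sp.diff(f, y)) / 2)

def wirt_bar(f):  # partial_wirt_bar = (d/dx + i d/dy) / 2
    return sp.simplify((sp.diff(f, x) + sp.I * sp.diff(f, y)) / 2)

f = z**2 * zbar   # f(z) = z^2 * conj(z), written in the real variables x, y

print(wirt_bar(z))                       # 0: z is holomorphic, property (2)
print(sp.simplify(wirt(f) - 2*z*zbar))   # 0: d_wirt f = 2 z zbar
laplace = sp.diff(f, x, 2) + sp.diff(f, y, 2)
print(sp.simplify(laplace - 4 * wirt(wirt_bar(f))))  # 0: property (6)
```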
**Proposition 1.6**.: _Let \(U\subseteq\mathbb{C}^{n}\) be an open set and \(f\in C^{k}(U;\mathbb{C})\). Then for any multi-indices \(\boldsymbol{m},\boldsymbol{\ell}\in\mathbb{N}_{0}^{n}\) with \(\left|\boldsymbol{m}+\boldsymbol{\ell}\right|\leq k\) we have the representation_ \[\partial_{\mathrm{wirt}}^{\boldsymbol{m}}\overline{\partial}_{\mathrm{wirt}}^{\boldsymbol{\ell}}f(a)=\sum_{\begin{subarray}{c}\boldsymbol{p}=(\boldsymbol{p}^{\prime},\boldsymbol{p}^{\prime\prime})\in\mathbb{N}_{0}^{2n}\\ \boldsymbol{p}^{\prime}+\boldsymbol{p}^{\prime\prime}=\boldsymbol{m}+\boldsymbol{\ell}\end{subarray}}b_{\boldsymbol{p}}\left(\partial^{\mathbf{p}}f\right)(a)\quad\forall a\in U\] _with complex numbers \(b_{\boldsymbol{p}}\in\mathbb{C}\) only depending on \(\boldsymbol{p},\boldsymbol{m}\) and \(\boldsymbol{\ell}\), writing \(\boldsymbol{p}=(\boldsymbol{p}^{\prime},\boldsymbol{p}^{\prime\prime})\) for the concatenation of the multi-indices \(\boldsymbol{p}^{\prime}\) and \(\boldsymbol{p}^{\prime\prime}\in\mathbb{N}_{0}^{n}\). In particular, the coefficients do not depend on \(f\)._

Proof.: The proof is by induction over \(\mathbf{m}\) and \(\boldsymbol{\ell}\). We first assume \(\mathbf{m}=0\) and show the claim for all \(\boldsymbol{\ell}\in\mathbb{N}_{0}^{n}\) with \(|\boldsymbol{\ell}|\leq k\). In the case \(\boldsymbol{\ell}=0\) there is nothing to show, so we assume the claim to be true for fixed \(\boldsymbol{\ell}\in\mathbb{N}_{0}^{n}\) with \(|\boldsymbol{\ell}|<k\) and take \(j\in\{1,...,n\}\) arbitrarily. Then we get \[\overline{\partial}_{\mathrm{wirt}}^{\boldsymbol{\ell}+e_{j}}f(a)=\overline{\partial}_{\mathrm{wirt}}^{e_{j}}\overline{\partial}_{\mathrm{wirt}}^{\boldsymbol{\ell}}f(a)\overset{\mathrm{IH}}{=}\sum_{\begin{subarray}{c}\mathbf{p}=(\mathbf{p}^{\prime},\mathbf{p}^{\prime\prime})\in\mathbb{N}_{0}^{2n}\\ \mathbf{p}^{\prime}+\mathbf{p}^{\prime\prime}=\boldsymbol{\ell}\end{subarray}}b_{\mathbf{p}}\,\overline{\partial}_{\mathrm{wirt}}^{e_{j}}\left(\partial^{\mathbf{p}}f\right)(a)=\sum_{\begin{subarray}{c}\mathbf{p}=(\mathbf{p}^{\prime},\mathbf{p}^{\prime\prime})\in\mathbb{N}_{0}^{2n}\\ \mathbf{p}^{\prime}+\mathbf{p}^{\prime\prime}=\boldsymbol{\ell}\end{subarray}}\frac{b_{\mathbf{p}}}{2}\left(\partial^{(\mathbf{p}^{\prime}+e_{j},\mathbf{p}^{\prime\prime})}f\right)(a)+\frac{ib_{\mathbf{p}}}{2}\left(\partial^{(\mathbf{p}^{\prime},\mathbf{p}^{\prime\prime}+e_{j})}f\right)(a)=\sum_{\begin{subarray}{c}\mathbf{p}=(\mathbf{p}^{\prime},\mathbf{p}^{\prime\prime})\in\mathbb{N}_{0}^{2n}\\ \mathbf{p}^{\prime}+\mathbf{p}^{\prime\prime}=\boldsymbol{\ell}+e_{j}\end{subarray}}b_{\mathbf{p}}\left(\partial^{\mathbf{p}}f\right)(a),\] with suitably redefined coefficients \(b_{\mathbf{p}}\) in the last step, as was to be shown. Since we have shown the case \(\mathbf{m}=0\) we may assume the claim to be true for fixed \(\mathbf{m},\boldsymbol{\ell}\in\mathbb{N}_{0}^{n}\) with \(|\mathbf{m}+\boldsymbol{\ell}|<k\) and choose \(j\in\{1,...,n\}\).
Then we deduce \[\partial_{\mathrm{wirt}}^{\mathbf{m}+e_{j}}\overline{\partial}_{\mathrm{wirt}}^{\boldsymbol{\ell}}f(a)=\partial_{\mathrm{wirt}}^{e_{j}}\partial_{\mathrm{wirt}}^{\mathbf{m}}\overline{\partial}_{\mathrm{wirt}}^{\boldsymbol{\ell}}f(a)\overset{\mathrm{IH}}{=}\sum_{\begin{subarray}{c}\mathbf{p}=(\mathbf{p}^{\prime},\mathbf{p}^{\prime\prime})\in\mathbb{N}_{0}^{2n}\\ \mathbf{p}^{\prime}+\mathbf{p}^{\prime\prime}=\mathbf{m}+\boldsymbol{\ell}\end{subarray}}b_{\mathbf{p}}\,\partial_{\mathrm{wirt}}^{e_{j}}\left(\partial^{\mathbf{p}}f\right)(a)=\sum_{\begin{subarray}{c}\mathbf{p}=(\mathbf{p}^{\prime},\mathbf{p}^{\prime\prime})\in\mathbb{N}_{0}^{2n}\\ \mathbf{p}^{\prime}+\mathbf{p}^{\prime\prime}=\mathbf{m}+\boldsymbol{\ell}\end{subarray}}\frac{b_{\mathbf{p}}}{2}\left(\partial^{(\mathbf{p}^{\prime}+e_{j},\mathbf{p}^{\prime\prime})}f\right)(a)-\frac{ib_{\mathbf{p}}}{2}\left(\partial^{(\mathbf{p}^{\prime},\mathbf{p}^{\prime\prime}+e_{j})}f\right)(a)=\sum_{\begin{subarray}{c}\mathbf{p}=(\mathbf{p}^{\prime},\mathbf{p}^{\prime\prime})\in\mathbb{N}_{0}^{2n}\\ \mathbf{p}^{\prime}+\mathbf{p}^{\prime\prime}=\mathbf{m}+\boldsymbol{\ell}+e_{j}\end{subarray}}b_{\mathbf{p}}\left(\partial^{\mathbf{p}}f\right)(a),\] again with suitably redefined coefficients \(b_{\mathbf{p}}\). Hence the claim follows using the principle of induction.

Proposition 1.6 is technical but crucial: in the course of the paper it will be useful to be able to approximate Wirtinger derivatives of certain functions. In fact, however, we will approximate real derivatives and take advantage of the fact that Wirtinger derivatives are linear combinations of these.

### Structure of the Paper

The work is divided into five main sections (excluding the introduction) and the appendix. In Section 2 we introduce the class of admissible activation functions \(\phi:\mathbb{C}\to\mathbb{C}\). The definition is stated in a very general way, but we show that every function which is smooth and non-polyharmonic on a non-empty open subset of \(\mathbb{C}\) is admissible. Section 3 is devoted to the approximation of complex polynomials in \(z\) and \(\overline{z}\) and in particular to the proof of Theorem 1.1. An extension of the Mean Value Theorem to multivariate functions using divided differences, which is important for proving the theorem, is proven in the appendix. In Section 4, we consider general \(C^{k}\)-functions on the complex unit cube \(\Omega_{n}\) and prove Theorem 1.2. Essential to the proof is a quantitative approximation statement for \(C^{k}\)-functions using linear combinations of Chebyshev polynomials. This approximation is based on Fourier-analytic methods, which are well-known in principle. Nevertheless, we discuss these methods in the appendix in order to make the paper more self-contained. Furthermore, Section 5 deals with concrete examples of admissible functions and in particular proves the admissibility of the modReLU and the complex cardioid function, which are the two most commonly used activation functions in applications of complex-valued neural networks. Finally, Section 6 deals with the optimality of the proven approximation statement for \(C^{k}\)-functions and in particular proves Theorems 1.3 - 1.5.

## 2. Admissibility of Activation Functions

In this section we discuss the notion of admissibility of activation functions \(\phi:\mathbb{C}\to\mathbb{C}\), i.e., the sufficient conditions that a function has to fulfill in order to be suitable for the approximation results established in Sections 3 and 4.
We are going to define a notion of admissibility that is slightly weaker than requiring the function to be smooth and non-polyharmonic on a non-empty open subset of \(\mathbb{C}\). Let us recall that in [22] the sufficient condition on an activation function \(\phi:\mathbb{R}\to\mathbb{R}\) is that it is smooth on an open interval with one point in the interval where no derivative vanishes. In fact, the proofs in [22] show that it is sufficient to assume that for every \(M\in\mathbb{N}\) there is an open set \(\varnothing\neq U\subseteq\mathbb{R}\) such that \(\phi\big{|}_{U}\in C^{M}\left(U;\mathbb{R}\right)\) with a point \(b\in U\) satisfying \[\phi^{(n)}(b)\neq 0\text{ for all }n\leq M.\] In the case of complex-valued neural networks it turns out that the usual real derivative has to be replaced by mixed Wirtinger derivatives of the form \(\partial_{\mathrm{wirt}}^{m}\overline{\partial}_{\mathrm{wirt}}^{\ell}\). This leads to the following definition.

**Definition 2.1**.: Let \(\phi:\mathbb{C}\to\mathbb{C}\) and \(M\in\mathbb{N}_{0}\). We call \(\phi\) _\(M\)-admissible_ (in \(b\in\mathbb{C}\)) if there is an open set \(U\subseteq\mathbb{C}\) with \(b\in U\) and \(\phi\big{|}_{U}\in C^{2M}(U;\mathbb{C})\) such that \[\partial_{\mathrm{wirt}}^{m}\overline{\partial}_{\mathrm{wirt}}^{\ell}\phi(b)\neq 0\text{ for all }m,\ell\leq M.\]

In order to approximate complex polynomials in \(z\) and \(\overline{z}\) with bounded degree it will be enough to assume \(M\)-admissibility of an activation function for a certain \(M\in\mathbb{N}\). However, if one wants to approximate arbitrary \(C^{k}\)-functions one needs the following definition.

**Definition 2.2**.: A function \(\phi:\mathbb{C}\to\mathbb{C}\) is called _admissible_ if it is \(M\)-admissible for every \(M\in\mathbb{N}_{0}\).

Note that for an admissible function there does not necessarily have to be an open set \(\varnothing\neq U\subseteq\mathbb{C}\) such that the function is smooth on \(U\). However, if we assume smoothness we can derive the following elegant result.

**Theorem 2.3**.: _Let \(\phi:\mathbb{C}\to\mathbb{C}\), \(\varnothing\neq U\subseteq\mathbb{C}\) an open set and \(\phi\big{|}_{U}\in C^{\infty}(U;\mathbb{C})\). Then the following are equivalent:_

1. \(\phi\big{|}_{U}\) _is not polyharmonic._
2. _For every_ \(M\in\mathbb{N}_{0}\) _there exists_ \(z_{M}\in U\) _such that_ \(\phi\) _is_ \(M\)_-admissible at_ \(z_{M}\)_._

_In particular, in both cases \(\phi\) is admissible._

Proof.: (1) \(\Rightarrow\) (2): Let \(M\in\mathbb{N}_{0}\). Since \(\phi\big{|}_{U}\) is not polyharmonic we can pick \(z\in U\) with \(\Delta^{M}\phi(z)\neq 0\). By continuity we can choose \(\delta>0\) with \(B_{\delta}(z)\subseteq U\) and \(\Delta^{M}\phi(w)\neq 0\) for all \(w\in B_{\delta}(z)\). For all \(m,\ell\in\mathbb{N}_{0}\) let \[A_{m,\ell}:=\left\{w\in B_{\delta}(z):\ \partial_{\mathrm{wirt}}^{m}\overline{\partial}_{\mathrm{wirt}}^{\ell}\phi(w)=0\right\}\] and assume towards a contradiction that \[\bigcup_{m,\ell\leq M}A_{m,\ell}=B_{\delta}(z).\] By [2, Corollary 3.35], \(B_{\delta}(z)\) with its usual topology is completely metrizable. By continuity of \(\partial_{\mathrm{wirt}}^{m}\overline{\partial}_{\mathrm{wirt}}^{\ell}\phi\) the sets \(A_{m,\ell}\) are closed in \(B_{\delta}(z)\).
Hence, using the Baire category theorem [2, Theorems 3.46 and 3.47], there are \(m,\ell\in\mathbb{N}_{0}\) with \(m,\ell\leq M\), \(z^{\prime}\in A_{m,\ell}\) and \(\varepsilon>0\) such that \[B_{\varepsilon}\left(z^{\prime}\right)\subseteq A_{m,\ell}\subseteq B_{\delta}(z).\] But thanks to \(\Delta^{M}\phi=4^{M}\partial_{\mathrm{wirt}}^{M}\overline{\partial}_{\mathrm{wirt}}^{M}\phi=4^{M}\partial_{\mathrm{wirt}}^{M-m}\overline{\partial}_{\mathrm{wirt}}^{M-\ell}\partial_{\mathrm{wirt}}^{m}\overline{\partial}_{\mathrm{wirt}}^{\ell}\phi\) (see property (6) above), this directly implies \(\Delta^{M}\phi\equiv 0\) on \(B_{\varepsilon}\left(z^{\prime}\right)\) in contradiction to the choice of \(B_{\delta}(z)\).

(2) \(\Rightarrow\) (1): If \(\phi\big{|}_{U}\) is polyharmonic there is \(M\in\mathbb{N}_{0}\) with \[\partial_{\mathrm{wirt}}^{M}\overline{\partial}_{\mathrm{wirt}}^{M}\phi(z)=0\] for every \(z\in U\), which contradicts (2).

Theorem 2.3 justifies the formulation of Theorems 1.1 and 1.2, since it shows that every activation function \(\phi\) satisfying the assumptions of the theorem is \(M\)-admissible for all \(M\in\mathbb{N}_{0}\).

_Remark 2.4_.: In the case of a real function it follows that if the function is smooth and non-polynomial on an open interval, there has to be a point at which no derivative vanishes, see [12, p. 53]. It is an interesting and currently unsolved question whether a similar statement also holds true in the case of complex functions and Wirtinger derivatives, i.e., if the following holds: Let \(z\in\mathbb{C},\ r>0\) and \(\phi\in C^{\infty}(B_{r}(z);\mathbb{C})\) be non-polyharmonic. Then there exists \(b\in B_{r}(z)\) such that \[\partial_{\mathrm{wirt}}^{m}\overline{\partial}_{\mathrm{wirt}}^{\ell}\phi(b)\neq 0\ \text{for all}\ m,\ell\in\mathbb{N}_{0}.\]

## 3. Approximation of Polynomials

The following section is dedicated to proving Theorem 1.1. We are going to show that it is possible to approximate complex polynomials in \(z\) and \(\overline{z}\) arbitrarily well on \(\Omega_{n}\) using shallow complex-valued neural networks. The following theorem can be seen as a generalization of the classical mean-value theorem. Its proof is deferred to Theorem A.5 in the appendix.

**Theorem 3.1**.: _Let \(f:\mathbb{R}^{s}\to\mathbb{R}\) and \(k\in\mathbb{N}_{0},r>0\), such that \(f\big{|}_{(-r,r)^{s}}\in C^{k}\left((-r,r)^{s};\mathbb{R}\right)\). For \(\boldsymbol{p}\in\mathbb{N}_{0}^{s}\) with \(|\boldsymbol{p}|\leq k\) and \(h>0\) let_ \[f_{\boldsymbol{p},h}:=(2h)^{-|\boldsymbol{p}|}\sum_{0\leq\boldsymbol{r}\leq\boldsymbol{p}}(-1)^{|\boldsymbol{p}|-|\boldsymbol{r}|}\binom{\boldsymbol{p}}{\boldsymbol{r}}f\left(h(2\boldsymbol{r}-\boldsymbol{p})\right).\] _Let \(m:=\max\limits_{j}\ \boldsymbol{p}_{j}\). Then for \(0<h<\frac{r}{\max\{1,m\}}\) there exists \(\xi\in h[-m,m]^{s}\) satisfying_ \[f_{\boldsymbol{p},h}=\partial^{\boldsymbol{p}}f(\xi).\]
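For intuition, the operator \(f_{\boldsymbol{p},h}\) from Theorem 3.1 can be checked numerically. The following numpy sketch (an illustration, not from the paper) evaluates \(f_{\boldsymbol{p},h}\) for \(s=2\) and \(\boldsymbol{p}=(2,1)\) and confirms that it converges to \(\partial^{\boldsymbol{p}}f(0)\) as \(h\to 0\), consistent with \(\xi\in h[-m,m]^{s}\).

```python
import numpy as np
from itertools import product
from math import comb

def divided_difference(f, p, h):
    """The quantity f_{p,h} from Theorem 3.1: a central divided-difference
    combination that equals the mixed partial d^p f at some xi in h[-m,m]^s."""
    p = np.asarray(p)
    total = 0.0
    for r in product(*(range(pj + 1) for pj in p)):
        r = np.asarray(r)
        coeff = (-1) ** int(np.sum(p - r)) * np.prod(
            [comb(int(pj), int(rj)) for pj, rj in zip(p, r)])
        total += coeff * f(h * (2 * r - p))
    return total / (2 * h) ** int(np.sum(p))

# f(x, y) = exp(x + 2y), so d^(2,1) f(0, 0) = 1^2 * 2 * e^0 = 2
f = lambda v: np.exp(v[0] + 2 * v[1])
for h in (0.1, 0.01, 0.001):
    print(h, divided_difference(f, (2, 1), h) - 2.0)  # error -> 0 as h -> 0
```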
The following theorem describes how one can extract monomials of the form \(z^{\boldsymbol{m}}\overline{z}^{\boldsymbol{\ell}}\) by calculating \[\partial_{\mathrm{wirt}}^{\boldsymbol{m}}\overline{\partial}_{\mathrm{wirt}}^{\boldsymbol{\ell}}\left(w\mapsto\phi\left(w^{T}z+b\right)\right)\] for a \(C^{k}\)-function \(\phi:\mathbb{C}\to\mathbb{C}\).

**Theorem 3.2**.: _Let \(\phi:\mathbb{C}\to\mathbb{C}\) and \(\delta>0,\ b\in\mathbb{C},\ k\in\mathbb{N}_{0}\), such that \(\phi\big{|}_{B_{\delta}(b)}\in C^{k}\left(B_{\delta}(b);\mathbb{C}\right)\). For fixed \(z\in\Omega_{n}\), where we recall that \(\Omega_{n}=[-1,1]^{n}+i[-1,1]^{n}\), we consider the map_ \[\phi_{z}:\quad B_{\frac{\delta}{\sqrt{2n}}}(0)\to\mathbb{C},\quad w\mapsto\phi\left(w^{T}z+b\right),\] _which is in \(C^{k}\) since for \(w\in B_{\frac{\delta}{\sqrt{2n}}}(0)\subseteq\mathbb{C}^{n}\) we have_ \[\big{|}w^{T}z\big{|}\leq\|w\|_{2}\cdot\|z\|_{2}<\frac{\delta}{\sqrt{2n}}\cdot\sqrt{2n}=\delta.\] _For all multi-indices \(\boldsymbol{m},\boldsymbol{\ell}\in\mathbb{N}_{0}^{n}\) with \(|\boldsymbol{m}+\boldsymbol{\ell}|\leq k\) we have_ \[\partial_{\mathrm{wirt}}^{\boldsymbol{m}}\overline{\partial}_{\mathrm{wirt}}^{\boldsymbol{\ell}}\phi_{z}(w)=z^{\boldsymbol{m}}\overline{z}^{\boldsymbol{\ell}}\cdot\left(\partial_{\mathrm{wirt}}^{|\boldsymbol{m}|}\overline{\partial}_{\mathrm{wirt}}^{|\boldsymbol{\ell}|}\phi\right)\left(w^{T}z+b\right)\] _for all \(w\in B_{\frac{\delta}{\sqrt{2n}}}(0)\)._

Proof.: First we prove the statement \[\overline{\partial}_{\mathrm{wirt}}^{\boldsymbol{\ell}}\phi_{z}(w)=\overline{z}^{\boldsymbol{\ell}}\cdot(\overline{\partial}_{\mathrm{wirt}}^{|\boldsymbol{\ell}|}\phi)\left(w^{T}z+b\right)\quad\text{for all}\ w\in B_{\frac{\delta}{\sqrt{2n}}}(0) \tag{3.1}\] by induction over \(0\leq|\boldsymbol{\ell}|\leq k\). The case \(\boldsymbol{\ell}=0\) is trivial. Assuming that (3.1) holds for fixed \(\boldsymbol{\ell}\in\mathbb{N}_{0}^{n}\) with \(|\boldsymbol{\ell}|<k\), we want to show \[\overline{\partial}_{\mathrm{wirt}}^{\boldsymbol{\ell}+e_{j}}\phi_{z}(w)=\overline{z}^{\boldsymbol{\ell}+e_{j}}\cdot\left(\overline{\partial}_{\mathrm{wirt}}^{|\boldsymbol{\ell}|+1}\phi\right)\left(w^{T}z+b\right) \tag{3.2}\] for all \(w\in B_{\frac{\delta}{\sqrt{2n}}}(0)\), where \(j\in\{1,...,n\}\) is chosen arbitrarily. Firstly we get \[\overline{\partial}_{\mathrm{wirt}}^{\boldsymbol{\ell}+e_{j}}\phi_{z}(w)=\overline{\partial}_{\mathrm{wirt}}^{e_{j}}\overline{\partial}_{\mathrm{wirt}}^{\boldsymbol{\ell}}\phi_{z}(w)\overset{\text{induction}}{=}\overline{\partial}_{\mathrm{wirt}}^{e_{j}}\left[w\mapsto\overline{z}^{\boldsymbol{\ell}}\cdot\left(\overline{\partial}_{\mathrm{wirt}}^{|\boldsymbol{\ell}|}\phi\right)\left(w^{T}z+b\right)\right]=\overline{z}^{\boldsymbol{\ell}}\,\overline{\partial}_{\mathrm{wirt}}^{e_{j}}\left[w\mapsto\left(\overline{\partial}_{\mathrm{wirt}}^{|\boldsymbol{\ell}|}\phi\right)\left(w^{T}z+b\right)\right].\] Applying the chain rule for Wirtinger derivatives and using \[\overline{\partial}_{\mathrm{wirt}}^{e_{j}}\left[w\mapsto w^{T}z+b\right]=0\] we see \[\overline{\partial}_{\mathrm{wirt}}^{e_{j}}\left[w\mapsto\left(\overline{\partial}_{\mathrm{wirt}}^{|\boldsymbol{\ell}|}\phi\right)\left(w^{T}z+b\right)\right]=\left(\partial_{\mathrm{wirt}}\overline{\partial}_{\mathrm{wirt}}^{|\boldsymbol{\ell}|}\phi\right)\left(w^{T}z+b\right)\cdot\overline{\partial}_{\mathrm{wirt}}^{e_{j}}\left[w\mapsto w^{T}z+b\right]+\left(\overline{\partial}_{\mathrm{wirt}}^{|\boldsymbol{\ell}|+1}\phi\right)\left(w^{T}z+b\right)\cdot\overline{\partial}_{\mathrm{wirt}}^{e_{j}}\left[w\mapsto\overline{w^{T}z+b}\right]=\overline{z}^{e_{j}}\cdot\left(\overline{\partial}_{\mathrm{wirt}}^{|\boldsymbol{\ell}|+1}\phi\right)\left(w^{T}z+b\right),\] using the fact that \(w\mapsto w^{T}z+b\) is holomorphic and hence \[\overline{\partial}_{\mathrm{wirt}}^{e_{j}}\left[w\mapsto w^{T}z+b\right]=0\quad\text{and}\quad\partial_{\mathrm{wirt}}^{e_{j}}\left[w\mapsto w^{T}z+b\right]=z_{j},\] which, by the conjugation rules, implies \(\overline{\partial}_{\mathrm{wirt}}^{e_{j}}\left[w\mapsto\overline{w^{T}z+b}\right]=\overline{z_{j}}\). Thus, we have proven (3.2) and induction yields (3.1). It remains to show the full claim.
We use induction over \(|\mathbf{m}|\) and note that the case \(\mathbf{m}=0\) has just been shown. We assume that the claim holds true for fixed \(\mathbf{m}\in\mathbb{N}_{0}^{n}\) with \(|\mathbf{m}+\boldsymbol{\ell}|<k\) and choose \(j\in\{1,...,n\}\). Thus, we get \[\partial_{\mathrm{wirt}}^{\mathbf{m}+e_{j}}\overline{\partial}_{\mathrm{wirt}}^{\boldsymbol{\ell}}\phi_{z}(w)=\partial_{\mathrm{wirt}}^{e_{j}}\partial_{\mathrm{wirt}}^{\mathbf{m}}\overline{\partial}_{\mathrm{wirt}}^{\boldsymbol{\ell}}\phi_{z}(w)\overset{\mathrm{IH}}{=}\partial_{\mathrm{wirt}}^{e_{j}}\left(w\mapsto z^{\mathbf{m}}\overline{z}^{\boldsymbol{\ell}}\cdot\left(\partial_{\mathrm{wirt}}^{|\mathbf{m}|}\overline{\partial}_{\mathrm{wirt}}^{|\boldsymbol{\ell}|}\phi\right)\left(w^{T}z+b\right)\right)=z^{\mathbf{m}}\overline{z}^{\boldsymbol{\ell}}\cdot\partial_{\mathrm{wirt}}^{e_{j}}\left[w\mapsto\left(\partial_{\mathrm{wirt}}^{|\mathbf{m}|}\overline{\partial}_{\mathrm{wirt}}^{|\boldsymbol{\ell}|}\phi\right)\left(w^{T}z+b\right)\right].\] Using the chain rule again, we calculate \[\partial_{\mathrm{wirt}}^{e_{j}}\left[w\mapsto\left(\partial_{\mathrm{wirt}}^{|\mathbf{m}|}\overline{\partial}_{\mathrm{wirt}}^{|\boldsymbol{\ell}|}\phi\right)\left(w^{T}z+b\right)\right]=\left(\partial_{\mathrm{wirt}}^{|\mathbf{m}|+1}\overline{\partial}_{\mathrm{wirt}}^{|\boldsymbol{\ell}|}\phi\right)\left(w^{T}z+b\right)\cdot\partial_{\mathrm{wirt}}^{e_{j}}\left[w\mapsto w^{T}z+b\right]+\left(\partial_{\mathrm{wirt}}^{|\mathbf{m}|}\overline{\partial}_{\mathrm{wirt}}^{|\boldsymbol{\ell}|+1}\phi\right)\left(w^{T}z+b\right)\cdot\partial_{\mathrm{wirt}}^{e_{j}}\left[w\mapsto\overline{w^{T}z+b}\right]=z^{e_{j}}\cdot\left(\partial_{\mathrm{wirt}}^{|\mathbf{m}|+1}\overline{\partial}_{\mathrm{wirt}}^{|\boldsymbol{\ell}|}\phi\right)\left(w^{T}z+b\right),\] since \(\partial_{\mathrm{wirt}}^{e_{j}}\left[w\mapsto w^{T}z+b\right]=z_{j}\) and, by the conjugation rules and holomorphy, \(\partial_{\mathrm{wirt}}^{e_{j}}\left[w\mapsto\overline{w^{T}z+b}\right]=\overline{\overline{\partial}_{\mathrm{wirt}}^{e_{j}}\left[w\mapsto w^{T}z+b\right]}=0\). By induction, this proves the claim.

The preceding result shows that monomials can be represented by Wirtinger derivatives, which in turn are linear combinations of "usual" partial derivatives, see Proposition 1.6. We now show that these partial derivatives can be approximated by shallow neural networks.

**Lemma 3.3**.: _Let \(\phi:\mathbb{C}\to\mathbb{C}\) and \(\delta>0,\ b\in\mathbb{C},\ k\in\mathbb{N}_{0}\), such that \(\phi\big{|}_{B_{\delta}(b)}\in C^{k}\left(B_{\delta}(b);\mathbb{C}\right)\). Let \(m,n\in\mathbb{N}\) and \(\varepsilon>0\). For \(\boldsymbol{p}\in\mathbb{N}_{0}^{2n},h>0\) and \(z\in\Omega_{n}\) we write_ \[\Phi_{\boldsymbol{p},h}(z):=(2h)^{-|\boldsymbol{p}|}\sum_{0\leq\boldsymbol{r}\leq\boldsymbol{p}}(-1)^{|\boldsymbol{p}|-|\boldsymbol{r}|}\binom{\boldsymbol{p}}{\boldsymbol{r}}\left(\phi_{z}\circ\varphi_{n}\right)\left(h(2\boldsymbol{r}-\boldsymbol{p})\right)=(2h)^{-|\boldsymbol{p}|}\sum_{0\leq\boldsymbol{r}\leq\boldsymbol{p}}(-1)^{|\boldsymbol{p}|-|\boldsymbol{r}|}\binom{\boldsymbol{p}}{\boldsymbol{r}}\phi\left(\left[\varphi_{n}\left(h(2\boldsymbol{r}-\boldsymbol{p})\right)\right]^{T}\cdot z+b\right),\] _where \(\phi_{z}\) is as introduced in Theorem 3.2 and \(\varphi_{n}\) is as in (1.2).
Furthermore, let_ \[\phi_{\boldsymbol{p}}:\quad\Omega_{n}\times B_{\frac{\delta}{\sqrt{2n}}}(0) \rightarrow\mathbb{C},\quad(z,w)\mapsto\partial^{\boldsymbol{p}}\phi_{z}(w).\] _Then there exists \(h^{*}>0\) such that_ \[\|\Phi_{\boldsymbol{p},h}-\phi_{\boldsymbol{p}}(\cdot,0)\|_{L^{\infty}(\Omega_ {n};\mathbb{C})}\leq\varepsilon\] _for all \(\boldsymbol{p}\in\mathbb{N}_{0}^{2n}\) with \(|\boldsymbol{p}|\leq k\) and \(\boldsymbol{p}_{j}\leq m\) for all \(j\in\{1,...,2n\}\) and \(h\in(0,h^{*})\)._ Proof.: Fix \(\boldsymbol{p}\in\mathbb{N}_{0}^{2n}\) with \(|\boldsymbol{p}|\leq k\) and \(\boldsymbol{p}_{j}\leq m\) for all \(j\in\{1,...,2n\}\). The map \[B_{\sqrt{2n}}(0)\times B_{\delta/\sqrt{2n}}(0)\rightarrow\mathbb{C},\quad(z, w)\mapsto\phi\left(w^{T}z+b\right)\] is in \(C^{k}\) since \[\left|w^{T}z\right|\leq\|w\|\cdot\|z\|<\frac{\delta}{\sqrt{2n}}\cdot\sqrt{2n} =\delta.\] Therefore, the map \[B_{\sqrt{2n}}(0)\times B_{\delta/\sqrt{2n}}(0)\rightarrow\mathbb{C},\quad(z, w)\mapsto\partial^{\boldsymbol{p}}\phi_{z}(w)\] is continuous and in particular uniformly continuous on the compact set \[\Omega_{n}\times\overline{B_{\delta/2n}}(0)\subseteq B_{\sqrt{2n}}(0)\times B _{\delta/\sqrt{2n}}(0).\] Hence, there exists \(h_{\boldsymbol{p}}\in(0,\frac{\delta}{2mn})\), such that \[\left|\phi_{\boldsymbol{p}}(z,\xi)-\phi_{\boldsymbol{p}}(z,0)\right|\leq\frac {\varepsilon}{\sqrt{2}}\] for all \(\xi\in\varphi_{n}\left(h\cdot[-m,m]^{2n}\right),\ h\in(0,h_{\boldsymbol{p}})\) and \(z\in\Omega_{n}\). Now fix such an \(h\in(0,h_{\boldsymbol{p}})\) and \(z\in\Omega_{n}\). Applying Theorem 3.1 to both components of \(\left(\varphi_{1}^{-1}\circ\Phi_{\boldsymbol{p},h}\right)(z)\) and \(\varphi_{1}^{-1}\circ\phi_{z}\circ\varphi_{n}|_{\left(-\frac{\delta}{2n},\frac {\delta}{2n}\right)^{2n}}\) separately yields the existence of two real vectors \(\xi_{1},\ \xi_{2}\in h\cdot[-m,m]^{2n}\), such that \[\left(\varphi_{1}^{-1}\circ\Phi_{\boldsymbol{p},h}(z)\right)_{1}=\left[ \partial^{\boldsymbol{p}}\left(\varphi_{1}^{-1}\circ\phi_{z}\circ\varphi_{n} \right)(\xi_{1})\right]_{1},\ \left(\varphi_{1}^{-1}\circ\Phi_{\boldsymbol{p},h}(z)\right)_{2}=\left[ \partial^{\boldsymbol{p}}\left(\varphi_{1}^{-1}\circ\phi_{z}\circ\varphi_{n} \right)(\xi_{2})\right]_{2}.\] Rewriting this yields \[\operatorname{Re}\left(\Phi_{\boldsymbol{p},h}(z)\right)=\operatorname{Re} \left(\phi_{\boldsymbol{p}}(z,\varphi_{n}\left(\xi_{1}\right))\right)\quad \text{and}\quad\operatorname{Im}\left(\Phi_{\boldsymbol{p},h}(z)\right)= \operatorname{Im}\left(\phi_{\boldsymbol{p}}(z,\varphi_{n}\left(\xi_{2} \right))\right).\] Using this property we deduce \[\left|\operatorname{Re}\left(\Phi_{\boldsymbol{p},h}(z)-\phi_{\boldsymbol{p}} (z,0)\right)\right|=\left|\operatorname{Re}\left(\phi_{\boldsymbol{p}}(z, \varphi_{n}\left(\xi_{1}\right))-\phi_{\boldsymbol{p}}(z,0)\right)\right|\leq \left|\phi_{\boldsymbol{p}}(z,\varphi_{n}\left(\xi_{1}\right))-\phi_{ \boldsymbol{p}}(z,0)\right|\leq\frac{\varepsilon}{\sqrt{2}}\] and analogously for the imaginary part. 
Since \(z\in\Omega_{n}\) and \(h\in(0,h_{\boldsymbol{p}})\) have been chosen arbitrarily we get the claim by choosing \[h^{*}:=\min\left\{h_{\boldsymbol{p}}:\ \boldsymbol{p}\in\mathbb{N}_{0}^{2n}\text{ with }| \boldsymbol{p}|\leq k\text{ and }\max_{j}\ \boldsymbol{p}_{j}\leq m\right\}.\qed\] **Definition 3.4**.: For \(m\in\mathbb{N}_{0}\) we define \[\mathcal{P}_{m}:=\left\{\mathbb{C}^{n}\rightarrow\mathbb{C},\ z\mapsto\sum_{ \boldsymbol{m}\leq m}\sum_{\boldsymbol{\ell}\leq m}a_{\boldsymbol{m},\boldsymbol {\ell}}z^{\boldsymbol{m}}\overline{z}^{\boldsymbol{\ell}}:\ a_{\boldsymbol{m}, \boldsymbol{\ell}}\in\mathbb{C}\right\}\] as the space of functions from \(\mathbb{C}^{n}\) to \(\mathbb{C}\) that are represented by a complex polynomial in \(z\) and \(\overline{z}\) of coordinatewise degree at most \(m\). We can now state and prove the first major result of this paper, which already appeared in a slightly simplified form in the introduction as Theorem 1.1. Specifically, we are going to show that it is possible to approximate complex polynomials in \(z\) and \(\overline{z}\) of coordinatewise degree of at most \(m\) using shallow complex-valued neural networks with \((4m+1)^{2n}\) neurons in the hidden layer of the network, provided that the activation function is \(mn\)-admissible. **Theorem 3.5**.: _Let \(m,n\in\mathbb{N}\), \(\varepsilon>0\) and let \(\phi:\mathbb{C}\to\mathbb{C}\) be \(mn\)-admissible in \(b\in\mathbb{C}\). Let \(\mathcal{P}^{\prime}\subseteq\mathcal{P}_{m}\) be bounded and set \(N:=(4m+1)^{2n}\). Then there exist certain coefficients \(\rho_{1},...,\rho_{N}\in\mathbb{C}^{n}\) with the following property: For each polynomial \(p\in\mathcal{P}^{\prime}\) there are coefficients \(\sigma_{1},...,\sigma_{N}\in\mathbb{C}\), such that_ \[\left\|p-f\right\|_{L^{\infty}(\Omega_{n};\mathbb{C})}\leq\varepsilon,\] _where_ \[f:\quad\Omega_{n}\to\mathbb{C},\quad z\mapsto\sum_{j=1}^{N}\sigma_{j}\phi\left( \rho_{j}^{T}z+b\right). \tag{3.3}\] Proof.: Let \(p\in\mathcal{P}^{\prime}\). Fix \(\boldsymbol{m},\boldsymbol{\ell}\in\mathbb{N}_{0}^{n}\) with \(\boldsymbol{m},\boldsymbol{\ell}\leq m\). For each \(z\in\Omega_{n}\), using Theorem 3.2, we then have \[z^{\boldsymbol{m}}\overline{z}^{\boldsymbol{\ell}}=\left[\left(\partial_{\text{ wirt}}^{|\boldsymbol{m}|}\overline{\partial}_{\text{wirt}}^{|\boldsymbol{\ell}|} \phi\right)(b)\right]^{-1}\partial_{\text{wirt}}^{\boldsymbol{m}}\overline {\partial}_{\text{wirt}}^{\boldsymbol{\ell}}\phi_{z}(0)\overset{1.6}{=}\left[ \left(\partial_{\text{wirt}}^{|\boldsymbol{m}|}\overline{\partial}_{\text{wirt}}^ {|\boldsymbol{\ell}|}\phi\right)(b)\right]^{-1}\cdot\sum_{\begin{subarray}{c }\mathbf{p}^{\prime},\mathbf{p}^{\prime\prime}\in\mathbb{N}_{0}^{n}\\ \mathbf{p}^{\prime}+\mathbf{p}^{\prime\prime}=\boldsymbol{m}+\boldsymbol{\ell} \end{subarray}}b_{\mathbf{p}^{\prime},\mathbf{p}^{\prime\prime}}\partial^{( \mathbf{p}^{\prime},\mathbf{p}^{\prime\prime})}\phi_{z}(0)\] with suitably chosen complex coefficients \(b_{\mathbf{p}^{\prime},\mathbf{p}^{\prime\prime}}\in\mathbb{C}\) depending only on \(\mathbf{p}^{\prime},\ \mathbf{p}^{\prime\prime},\ \boldsymbol{m}\) and \(\boldsymbol{\ell}\). Here we used that \(|\boldsymbol{m}|,\ |\boldsymbol{\ell}|\leq mn\) and that \(\phi\) is \(mn\)-admissible in \(b\). 
Since \(\mathcal{P}^{\prime}\) is bounded and \(p\in\mathcal{P}^{\prime}\), we can write \[p(z)=\sum_{\begin{subarray}{c}\boldsymbol{m},\boldsymbol{\ell}\in\mathbb{N}_{0}^{ n}\\ \boldsymbol{m},\boldsymbol{\ell}\leq m\end{subarray}}a_{\boldsymbol{m},\boldsymbol{ \ell}}\,z^{\boldsymbol{m}}\overline{z}^{\boldsymbol{\ell}}\] with \(|a_{\boldsymbol{m},\boldsymbol{\ell}}|\leq c\) for some constant \(c=c(\mathcal{P}^{\prime})>0\). This easily implies that we can rewrite \(p\) as \[p(z)=\sum_{\mathbf{p}\leq 2m}c_{\mathbf{p}}(p)\partial^{\mathbf{p}}\phi_{z}(0)\] with coefficients \(c_{\mathbf{p}}(p)\in\mathbb{C}\) satisfying \(|c_{\mathbf{p}}(p)|\leq c^{\prime}\) for some constant \(c^{\prime}=c^{\prime}(\phi,b,\mathcal{P}^{\prime},m,n)\). By Lemma 3.3, we choose \(h^{*}>0\), such that \[|\Phi_{\mathbf{p},h^{*}}(z)-\partial^{\mathbf{p}}\phi_{z}(0)|\leq\frac{ \varepsilon}{\sum_{\mathbf{p}\leq 2m}c^{\prime}}\] for all \(z\in\Omega_{n}\) and \(\mathbf{p}\in\mathbb{N}_{0}^{2n}\) with \(\mathbf{p}\leq 2m\) (note that we then automatically have \(|\mathbf{p}|\leq 2mn\) and that \(\phi\big{|}_{B_{\delta}(b)}\in C^{2mn}(B_{\delta}(b);\mathbb{C})\) for some \(\delta>0\)). Furthermore, we can rewrite each function \(\Phi_{\mathbf{p},h^{*}}\) as \[\Phi_{\mathbf{p},h^{*}}(z)=\sum_{\begin{subarray}{c}\boldsymbol{\alpha}\in \mathbb{Z}^{2n}\\ |\boldsymbol{\alpha}_{j}|\leq 2m\ \forall j\end{subarray}}\lambda_{\boldsymbol{ \alpha},\mathbf{p}}\phi(\left[\varphi_{n}\left(h^{*}\boldsymbol{\alpha} \right)\right]^{T}\cdot z+b)\] with suitable coefficients \(\lambda_{\boldsymbol{\alpha},\mathbf{p}}\in\mathbb{C}\). Since the cardinality of the set \[\left\{\boldsymbol{\alpha}\in\mathbb{Z}^{2n}:\ |\boldsymbol{\alpha}_{j}|\leq 2m\ \forall j\right\}\] is \((4m+1)^{2n}\), this can be converted to \[\Phi_{\mathbf{p},h^{*}}(z)=\sum_{j=1}^{N}\lambda_{j,\mathbf{p}}\phi\left(\rho_ {j}^{T}\cdot z+b\right).\] Returning to the polynomial \(p\in\mathcal{P}^{\prime}\) fixed above, we define \[\theta(z):=\sum_{\mathbf{p}\leq 2m}c_{\mathbf{p}}(p)\cdot\Phi_{\mathbf{p},h^{*}}(z)\] and note \[|\theta(z)-p(z)|\leq\sum_{\mathbf{p}\leq 2m}|c_{\mathbf{p}}(p)|\cdot|\Phi_{ \mathbf{p},h^{*}}(z)-\partial^{\mathbf{p}}\phi_{z}(0)|\leq\varepsilon.\] Since the coefficients \(\rho_{j}\) have been chosen independently from the polynomial \(p\), we can rewrite \(\theta\) so it has the form (3.3). ## 4. Approximation of differentiable functions For any natural number \(\ell\in\mathbb{N}_{0}\), we denote by \(T_{\ell}\) the \(\ell\)-th Chebyshev polynomial, satisfying \[T_{\ell}\left(\cos(x)\right)=\cos(\ell x),\quad x\in\mathbb{R}.\] For a multi-index \(\mathbf{k}\in\mathbb{N}_{0}^{s}\) we define \[T_{\mathbf{k}}(x):=\prod_{j=1}^{s}T_{\mathbf{k}_{j}}\left(x_{j}\right),\quad x \in[-1,1]^{s}.\] We then get the following approximation result. **Theorem 4.1**.: _Let \(s,m\in\mathbb{N}\), \(k\in\mathbb{N}_{0}\). Then there is a constant \(c=c(s,k)>0\) with the following property: For any \(f\in C^{k}\left([-1,1]^{s};\mathbb{R}\right)\) the polynomial \(P\) defined as_ \[P(x):=\sum_{0\leq\mathbf{k}\leq 2m-1}\mathcal{V}_{\mathbf{k}}^{m}(f)\cdot T_{ \mathbf{k}}(x)\] _satisfies_ \[\left\|f-P\right\|_{L^{\infty}\left([-1,1]^{s}\right)}\leq\frac{c}{m^{k}}\cdot \left\|f\right\|_{C^{k}\left([-1,1]^{s};\mathbb{R}\right)}.\] _Here, the maps_ \[C\left([-1,1]^{s};\mathbb{R}\right)\rightarrow\mathbb{R},\quad f\mapsto \mathcal{V}_{\mathbf{k}}^{m}(f)\] _are continuous and linear functionals with respect to the \(L^{\infty}\)-norm. 
Furthermore, there is a constant \(\widetilde{c}=\widetilde{c}(s)>0\), such that the inequality_ \[\sum_{0\leq\mathbf{k}\leq 2m-1}\left|\mathcal{V}_{\mathbf{k}}^{m}(f)\right| \leq\widetilde{c}\cdot m^{s/2}\cdot\left\|f\right\|_{L^{\infty}\left([-1,1]^{ s}\right)}\] _holds for any \(f\in C\left([-1,1]^{s};\mathbb{R}\right)\)._ The proof of Theorem 4.1 is deferred to the appendix (Theorem B.15). We can now formulate the final result for the approximation of a function \(f\in C^{k}\left(\Omega_{n};\mathbb{C}\right)\), a simplified version of which appeared in the introduction as Theorem 1.2. **Theorem 4.2**.: _Let \(n,k\in\mathbb{N}\). Then there is a constant \(c=c(n,k)>0\) with the following property: For any admissible activation function \(\phi:\mathbb{C}\rightarrow\mathbb{C}\) and for any \(m\in\mathbb{N}\) there are complex vectors \(\rho_{1},...,\rho_{m}\in\mathbb{C}^{n}\) and \(b_{m}\in\mathbb{C}\) with the following property: For any \(f\in C^{k}\left(\Omega_{n};\mathbb{C}\right)\) there are complex numbers \(\sigma_{1}=\sigma_{1}(f),...,\sigma_{m}=\sigma_{m}(f)\in\mathbb{C}\), such that_ \[\left\|f-g\right\|_{L^{\infty}\left(\Omega_{n}\right)}\leq c\cdot m^{-k/(2n)} \cdot\left\|f\right\|_{C^{k}\left(\Omega_{n};\mathbb{C}\right)},\] _where \(g\) is defined as_ \[g:\quad\Omega_{n}\rightarrow\mathbb{C},\quad g(z):=\sum_{j=1}^{m}\sigma_{j} \phi\left(\rho_{j}^{T}z+b_{m}\right).\] _Furthermore, the map \(f\mapsto\sigma_{j}(f)\) is a continuous linear functional with respect to the \(L^{\infty}\)-norm for every \(j\in\{1,...,m\}\)._ Proof.: Choose \(M\in\mathbb{N}\) as the largest integer, such that \((16M-7)^{2n}\leq m\), where we assume without loss of generality that \(9^{2n}\leq m\), which can be done by choosing \(\sigma_{j}=0\) for all \(j\in\{1,...,m\}\) for \(m<9^{2n}\), at the cost of possibly enlarging \(c\). First we note that by the choice of \(M\) the inequality \[m\leq(16M+9)^{2n}\] holds true. Since \(16M+9\leq 25M\), we get \(m\leq 25^{2n}\cdot M^{2n}\) or equivalently \[\frac{m^{1/2n}}{25}\leq M. \tag{4.1}\] According to Theorem 4.1 we choose a constant \(c_{1}=c_{1}(n,k)\) with the property that for any function \(f\in C^{k}\left([-1,1]^{2n};\mathbb{R}\right)\) there exists a polynomial \[P=\sum_{0\leq\mathbf{k}\leq 2M-1}\mathcal{V}_{\mathbf{k}}^{M}(f)\cdot T_{ \mathbf{k}}\] of coordinatewise degree at most \(2M-1\) satisfying \[\left\|f-P\right\|_{L^{\infty}([-1,1]^{2n})}\leq\frac{c_{1}}{M^{k}}\cdot\left\| f\right\|_{C^{k}([-1,1]^{2n};\mathbb{R})}.\] Furthermore, according to Theorem 4.1, we choose a constant \(c_{2}=c_{2}(n)\), such that the inequality \[\sum_{0\leq\mathbf{k}\leq 2M-1}\left|\mathcal{V}_{\mathbf{k}}^{M}(f)\right|\leq c _{2}\cdot M^{n}\left\|f\right\|_{C^{k}([-1,1]^{2n};\mathbb{R})}\] holds for all \(f\in C^{k}\left([-1,1]^{2n}\right)\). The final constant is then defined to be \[c=c(n,k):=\sqrt{2}\cdot 25^{k}\cdot(c_{1}+c_{2})\,.\] Since \(\phi\) is admissible, there exists \(b_{m}\in\mathbb{C}\) such that \(\phi\) is \((4M-2)n\)-admissible in \(b_{m}\). Fix \(\mathbf{k}\leq 2M-1\). 
Since \(T_{\mathbf{k}}\) is a polynomial of componentwise degree less or equal to \(2M-1\), we have a representation \[\left(T_{\mathbf{k}}\circ\varphi_{n}^{-1}\right)(z)=\sum_{\begin{subarray}{c }\boldsymbol{\ell}^{1},\boldsymbol{\ell}^{2}\in\mathbb{N}_{0}^{n}\\ \boldsymbol{\ell}^{1},\boldsymbol{\ell}^{2}\leq 2M-1\end{subarray}}a_{ \boldsymbol{\ell}^{1},\boldsymbol{\ell}^{2}}^{\mathbf{k}}\prod_{t=1}^{n} \operatorname{Re}\left(z_{t}\right)^{\boldsymbol{\ell}^{1}_{t}}\operatorname{ Im}\left(z_{t}\right)^{\boldsymbol{\ell}^{2}_{t}}\] with suitably chosen coefficients \(a_{\boldsymbol{\ell}^{1},\boldsymbol{\ell}^{2}}^{\mathbf{k}}\in\mathbb{C}\). By using the identities \(\operatorname{Re}\left(z_{t}\right)=\frac{1}{2}\left(z_{t}+\overline{z_{t}}\right)\) and also \(\operatorname{Im}\left(z_{t}\right)=\frac{1}{2i}\left(z_{t}-\overline{z_{t}}\right)\) we can rewrite \(T_{\mathbf{k}}\circ\varphi_{n}^{-1}\) into a complex polynomial in \(z\) and \(\overline{z}\), i.e. \[\left(T_{\mathbf{k}}\circ\varphi_{n}^{-1}\right)(z)=\sum_{\begin{subarray}{c }\boldsymbol{\ell}^{1},\boldsymbol{\ell}^{2}\in\mathbb{N}_{0}^{n}\\ \boldsymbol{\ell}^{1},\boldsymbol{\ell}^{2}\leq 4M-2\end{subarray}}b_{ \boldsymbol{\ell}^{1},\boldsymbol{\ell}^{2}}^{\mathbf{k}}z^{\boldsymbol{\ell}^ {1}}\overline{z}^{\boldsymbol{\ell}^{2}}\] with complex coefficients \(b_{\boldsymbol{\ell}^{1},\boldsymbol{\ell}^{2}}^{\mathbf{k}}\in\mathbb{C}\). Using Theorem 3.5, we choose \(\rho_{1},...,\rho_{m}\in\mathbb{C}^{n}\), such that for any polynomial \(P\in\left\{T_{\mathbf{k}}\circ\varphi_{n}^{-1}:\;\mathbf{k}\leq 2M-1\right\} \subseteq\mathcal{P}_{4M-2}\) there are coefficients \(\sigma_{1}(P),...,\sigma_{m}(P)\in\mathbb{C}\), such that \[\left\|g_{P}-P\right\|_{L^{\infty}(\Omega_{n})}\leq M^{-k-n}, \tag{4.2}\] where \[g_{P}(z):=\sum_{t=1}^{m}\sigma_{t}(P)\,\phi\left(\rho_{t}^{T}z+b_{m}\right).\] Note that here we implicitly use the bound \((4\cdot(4M-2)+1)^{2n}\leq m\). We are now going to show that the chosen constant and the chosen vectors \(\rho_{t}\) have the desired property. Let \(f\in C^{k}\left(\Omega_{n};\mathbb{C}\right)\). By splitting \(f\) into real and imaginary part, we write \(f=f_{1}+i\cdot f_{2}\) with \(f_{1},f_{2}\in C^{k}\left(\Omega_{n};\mathbb{R}\right)\). For the following, fix \(j\in\{1,2\}\) and note that \(f_{j}\circ\varphi_{n}\in C^{k}\left([-1,1]^{2n};\mathbb{R}\right)\). By choice of \(c_{1}\), there is a polynomial \(P\) with the property \[\left\|f_{j}\circ\varphi_{n}-P\right\|_{L^{\infty}([-1,1]^{2n})}\leq\frac{c_{ 1}}{M^{k}}\cdot\left\|f_{j}\circ\varphi_{n}\right\|_{C^{k}([-1,1]^{2n};\mathbb{R})}\] or equivalently \[\left\|f_{j}-P\circ\varphi_{n}^{-1}\right\|_{L^{\infty}(\Omega_{n})}\leq\frac{ c_{1}}{M^{k}}\cdot\left\|f_{j}\circ\varphi_{n}\right\|_{C^{k}([-1,1]^{2n}; \mathbb{R})}, \tag{4.3}\] where \(P\circ\varphi_{n}^{-1}\) can be written in the form \[\left(P\circ\varphi_{n}^{-1}\right)(z)=\sum_{0\leq\mathbf{k}\leq 2M-1}\mathcal{V}_{ \mathbf{k}}^{M}\left(f_{j}\circ\varphi_{n}\right)\cdot\left(T_{\mathbf{k}} \circ\varphi_{n}^{-1}\right)(z).\] We choose the function \(g_{T_{\mathbf{k}}\circ\varphi_{n}^{-1}}\) according to (4.2). 
Thus, writing \[g_{j}:=\sum_{0\leq\mathbf{k}\leq 2M-1}\mathcal{V}_{\mathbf{k}}^{M}\left(f_{j} \circ\varphi_{n}\right)\cdot g_{T_{\mathbf{k}}\circ\varphi_{n}^{-1}},\] we obtain \[\left\|P\circ\varphi_{n}^{-1}-g_{j}\right\|_{L^{\infty}(\Omega_{n})} \leq\sum_{0\leq\mathbf{k}\leq 2M-1}\left|\mathcal{V}_{\mathbf{k}}^{M} \left(f_{j}\circ\varphi_{n}\right)\right|\cdot\underbrace{\left\|T_{\mathbf{k}} \circ\varphi_{n}^{-1}-g_{T_{\mathbf{k}}\circ\varphi_{n}^{-1}}\right\|_{L^{ \infty}(\Omega_{n})}}_{\leq M^{-k-n}}\] \[\leq M^{-k-n}\cdot\sum_{0\leq\mathbf{k}\leq 2M-1}\left|\mathcal{V}_ {\mathbf{k}}^{M}\left(f_{j}\circ\varphi_{n}\right)\right|\] \[\leq\frac{c_{2}}{M^{k}}\left\|f_{j}\circ\varphi_{n}\right\|_{C^{ k}([-1,1]^{2n};\mathbb{R})}\leq\frac{c_{2}}{M^{k}}\left\|f_{j}\right\|_{C^{k}( \Omega_{n};\mathbb{C})}. \tag{4.4}\] Combining (4.3) and (4.4) we see \[\left\|f_{j}-g_{j}\right\|_{L^{\infty}(\Omega_{n})}\leq\frac{c_{1}+c_{2}}{M^ {k}}\cdot\left\|f_{j}\right\|_{C^{k}(\Omega_{n};\mathbb{R})}\leq\frac{c_{1}+c_ {2}}{M^{k}}\cdot\left\|f\right\|_{C^{k}(\Omega_{n};\mathbb{C})}.\] In the end, define \[g:=g_{1}+i\cdot g_{2}.\] Since the vectors \(\rho_{t}\) have been chosen fixed, it is clear that, after rearranging, \(g\) has the desired form. Furthermore, one obtains the bound \[\left\|f-g\right\|_{L^{\infty}(\Omega_{n})} \leq\sqrt{\left\|f_{1}-g_{1}\right\|_{L^{\infty}(\Omega_{n})}^{2} +\left\|f_{2}-g_{2}\right\|_{L^{\infty}(\Omega_{n})}^{2}}\] \[\leq\frac{c_{1}+c_{2}}{M^{k}}\cdot\sqrt{\left\|f\right\|_{C^{k}( \Omega_{n};\mathbb{C})}^{2}+\left\|f\right\|_{C^{k}(\Omega_{n};\mathbb{C})}^{ 2}}\] \[\leq\frac{\sqrt{2}\cdot(c_{1}+c_{2})}{M^{k}}\cdot\ \left\|f\right\|_{C^{k}( \Omega_{n};\mathbb{C})}.\] Using (4.1), we see \[\left\|f-g\right\|_{L^{\infty}(\Omega_{n};\mathbb{C})}\leq\frac{\sqrt{2}\cdot 2 5^{k}\cdot(c_{1}+c_{2})}{m^{k/2n}}\cdot\ \left\|f\right\|_{C^{k}(\Omega_{n};\mathbb{C})},\] as desired. The linearity and continuity of the maps \(f\mapsto\sigma_{j}(f)\) follow directly from the fact that the map \(f\mapsto\mathcal{V}_{\mathbf{k}}^{M}(f)\) is a continuous linear functional for every \(0\leq\mathbf{k}\leq 2M-1\). ## 5. Concrete examples of admissible functions In this section we want to show the admissibility of concrete activation functions that are commonly used when applying complex-valued neural networks to machine learning. Proposition 5.1 introduces a large class of admissible functions. **Proposition 5.1**.: _Let \(\rho\in C^{\infty}(\mathbb{R};\mathbb{R})\) be non-polynomial and let \(\psi:\mathbb{C}\to\mathbb{C}\) be defined as_ \[\psi(z):=\rho(\operatorname{Re}(z))\quad\text{resp.}\quad\psi(z):=\rho( \operatorname{Im}(z)).\] _Then \(\psi\) is admissible._ Proof.: Since \(\psi\) depends only on the real resp. 
imaginary part of the input, we see directly from the definition of the Wirtinger derivatives that \[\partial_{\mathrm{wirt}}\psi(z)=\overline{\partial}_{\mathrm{wirt}}\psi(z)= \frac{1}{2}\rho^{\prime}(\operatorname{Re}(z))\quad\text{resp.}\quad\partial_ {\mathrm{wirt}}\psi(z)=-\overline{\partial}_{\mathrm{wirt}}\psi(z)=-\frac{i}{ 2}\rho^{\prime}(\operatorname{Im}(z)).\] Hence we see for arbitrary \(m,\ell\in\mathbb{N}_{0}\) that \[\left|\partial_{\mathrm{wirt}}^{m}\overline{\partial}_{\mathrm{wirt}}^{\ell} \psi(z)\right|=\frac{1}{2^{m+\ell}}\left|\rho^{(m+\ell)}(\operatorname{Re}(z) )\right|\quad\text{resp.}\quad\left|\partial_{\mathrm{wirt}}^{m}\overline{ \partial}_{\mathrm{wirt}}^{\ell}\psi(z)\right|=\frac{1}{2^{m+\ell}}\left|\rho^ {(m+\ell)}(\operatorname{Im}(z))\right|.\] Since \(\rho\) is non-polynomial we can choose a real number \(x\), such that all derivatives of \(\rho\) in \(x\) do not vanish (cf. for instance [12, p. 53]). Thus, all the Wirtinger derivatives \(\partial_{\mathrm{wirt}}^{m}\overline{\partial}_{\mathrm{wirt}}^{\ell}\psi\) do not vanish for all complex numbers with real, resp. imaginary part \(x\). In the following, we consider a special activation function, which has been proposed in [4]. Its expressive power has already been discussed to some extent in [8]. **Definition 5.2**.: For \(b\in(-\infty,0)\) we define \[\sigma_{\mathrm{modReLU},b}:\quad\mathbb{C}\to\mathbb{C},\quad\sigma_{\mathrm{ modReLU},b}(z):=\begin{cases}(|z|+b)\frac{z}{|z|},&|z|+b\geq 0,\\ \qquad 0,&\mathrm{otherwise}.\end{cases}\] **Theorem 5.3**.: _Let \(b\in(-\infty,0)\). Writing \(\sigma=\sigma_{\mathrm{modReLU},b}\) one has for every \(z\in\mathbb{C}\) with \(|z|+b>0\) the identity_ \[\left(\partial_{\mathrm{wirt}}^{m}\overline{\partial}_{\mathrm{wirt}}^{\ell} \sigma\right)(z)=\begin{cases}z+b\frac{z}{|z|},&m=\ell=0,\\ 1+\frac{b}{2}\cdot\frac{1}{|z|},&m=1,\ \ell=0,\\ b\cdot q_{m,\ell}\cdot\frac{z^{\ell-m+1}}{|z|^{2\ell+1}},&m\leq\ell+1,\ \ell\geq 1,\\ b\cdot q_{m,\ell}\cdot\frac{\overline{z}^{\,m-\ell-1}}{|z|^{2m-1}},&m\geq\ell+1,\ m\geq 2\end{cases}\] _for every \(m\in\mathbb{N}_{0}\) and \(\ell\in\mathbb{N}_{0}\). Here, the numbers \(q_{m,\ell}\) are non-zero and rational. Furthermore, note that all cases for choices of \(m\) and \(\ell\) are covered by observing that we can either have the case \(m\geq\ell+1\) (where either \(m\geq 2\) or \(m=1,\ell=0\)) or \(m\leq\ell+1\) where the latter is again split into \(\ell=0\) and \(\ell\geq 1\)._ Proof.: We first calculate certain Wirtinger derivatives for \(z\neq 0\). First note \[\overline{\partial}_{\mathrm{wirt}}\left(\frac{1}{|z|^{m}}\right) =\frac{1}{2}\left(\partial^{(1,0)}\left(\frac{1}{|z|^{m}}\right) +i\cdot\partial^{(0,1)}\left(\frac{1}{|z|^{m}}\right)\right)=\frac{1}{2}\left(\left(-\frac{m}{2}\right)\frac{2\operatorname{ Re}(z)+i\cdot 2\operatorname{Im}(z)}{|z|^{m+2}}\right)=-\frac{m}{2}\cdot\frac{z}{|z|^{m+2}}\] and similarly \[\partial_{\mathrm{wirt}}\left(\frac{1}{|z|^{m}}\right)=-\frac{m}{2}\cdot\frac {\overline{z}}{|z|^{m+2}}\] for any \(m\in\mathbb{N}\). 
Using the product rule for Wirtinger derivatives, we see \[\overline{\partial}_{\mathrm{wirt}}\left(\frac{z^{\ell}}{|z|^{m}}\right)= \underbrace{\overline{\partial}_{\mathrm{wirt}}\left(z^{\ell}\right)}_{=0} \cdot\frac{1}{|z|^{m}}+z^{\ell}\cdot\overline{\partial}_{\mathrm{wirt}} \left(\frac{1}{|z|^{m}}\right)=-\frac{m}{2}\cdot\frac{z^{\ell+1}}{|z|^{m+2}} \tag{5.1}\] for any \(m\in\mathbb{N}\) and \(\ell\in\mathbb{N}_{0}\) and furthermore \[\partial_{\mathrm{wirt}}\left(\frac{z^{\ell}}{|z|^{m}}\right) =\partial_{\mathrm{wirt}}\left(z^{\ell}\right)\cdot\frac{1}{|z|^ {m}}+z^{\ell}\cdot\partial_{\mathrm{wirt}}\left(\frac{1}{|z|^{m}}\right)=\ell\cdot z^{\ell-1}\cdot\frac{1}{|z|^{m}}-z^{\ell}\cdot\frac{m} {2}\cdot\frac{\overline{z}}{|z|^{m+2}}=\left(\ell-\frac{m}{2}\right)\cdot\frac{z^{\ell-1}}{|z|^{m}} \tag{5.2}\] for \(m,\ell\in\mathbb{N}\), and finally \[\partial_{\mathrm{wirt}}\left(\frac{\overline{z}^{\ell}}{|z|^{m}}\right)= \underbrace{\partial_{\mathrm{wirt}}\left(\overline{z}^{\ell}\right)}_{=0} \cdot\frac{1}{|z|^{m}}+\overline{z}^{\ell}\cdot\partial_{\mathrm{wirt}} \left(\frac{1}{|z|^{m}}\right)=-\frac{m}{2}\cdot\frac{\overline{z}^{\ell+1}} {|z|^{m+2}} \tag{5.3}\] for \(m\in\mathbb{N}\) and \(\ell\in\mathbb{N}_{0}\). Having proven those three identities, we can start with the actual computation. We first fix \(m=0\) and prove the claimed identity by induction over \(\ell\). The case \(\ell=0\) is just the definition of the function and furthermore, we calculate \[\overline{\partial}_{\mathrm{wirt}}\sigma(z)=\underbrace{\overline{\partial}_ {\mathrm{wirt}}(z)}_{=0}+b\cdot\overline{\partial}_{\mathrm{wirt}}\left(\frac{ z}{|z|}\right)\overset{(5.1)}{=}b\cdot\left(-\frac{1}{2}\right)\frac{z^{2}}{|z|^{3}},\] which is the claimed form. Then, using induction, we compute \[\overline{\partial}_{\mathrm{wirt}}^{\ell+1}\sigma(z)=\overline{\partial}_{\mathrm{wirt}} \left(b\cdot q_{0,\ell}\cdot\frac{z^{\ell+1}}{|z|^{2\ell+1}}\right)\overset{(5.1)}{=}b\cdot\underbrace{q_{0,\ell}\cdot\left(-\frac{2\ell+1}{2}\right)}_{=:q_{0,\ell+1}}\cdot\frac{z^{\ell+2}}{|z|^{2\ell+3}},\] so that the case \(m=0\) is complete. Now we deal with the case \(m\leq\ell+1\). The case \(\ell=0\) is proven by computing \[\partial_{\mathrm{wirt}}\sigma(z)=\partial_{\mathrm{wirt}}(z)+b\cdot\partial_{\mathrm{wirt}} \left(\frac{z}{|z|}\right)\overset{(5.2)}{=}1+\frac{b}{2}\cdot\frac{1}{|z|}.\] For \(\ell\geq 1\) and \(m\leq\ell\) we argue by induction over \(m\), using (5.2): \[\partial_{\mathrm{wirt}}^{m+1}\overline{\partial}_{\mathrm{wirt}}^{\ell}\sigma(z)=b\cdot q_{m,\ell}\cdot\partial_{\mathrm{wirt}}\left(\frac{z^{\ell-m+1}}{|z|^{2\ell+1}}\right)\overset{(5.2)}{=}b\cdot\underbrace{q_{m,\ell}\cdot\left(\frac{1}{2}-m\right)}_{=:q_{m+1,\ell}}\cdot\frac{z^{\ell-m}}{|z|^{2\ell+1}},\] where \(q_{m+1,\ell}\neq 0\) since \(m\) is an integer; this also settles the case \(m=\ell+1\), in which the two formulas of the claim coincide. Finally, for \(m\geq\ell+1\) we again induct over \(m\), now using (5.3) (note that for \((m,\ell)=(1,0)\) the constant term \(1\) is annihilated by \(\partial_{\mathrm{wirt}}\)): \[\partial_{\mathrm{wirt}}^{m+1}\overline{\partial}_{\mathrm{wirt}}^{\ell}\sigma(z)=b\cdot q_{m,\ell}\cdot\partial_{\mathrm{wirt}}\left(\frac{\overline{z}^{\,m-\ell-1}}{|z|^{2m-1}}\right)\overset{(5.3)}{=}b\cdot\underbrace{q_{m,\ell}\cdot\left(-\frac{2m-1}{2}\right)}_{=:q_{m+1,\ell}}\cdot\frac{\overline{z}^{\,m-\ell}}{|z|^{2m+1}},\] which is of the claimed form and completes the proof. 

Another activation function that is commonly used in practice is the so-called cardioid function. 

**Definition 5.4**.: The _cardioid activation function_ is defined as \[\operatorname{card}:\quad\mathbb{C}\to\mathbb{C},\quad\operatorname{card}(z):=\frac{1}{2}\left(1+\cos(\sphericalangle z)\right)\cdot z,\] where \(\sphericalangle z\) denotes the phase angle of \(z\neq 0\), and where we set \(\operatorname{card}(0):=0\). 

For the cardioid function we get the following counterpart of Theorem 5.3. 

**Theorem 5.5**.: _For every \(z\in\mathbb{C}\) with \(z\neq 0\) and all \(m,\ell\in\mathbb{N}_{0}\) one has the identity_ \[\left(\partial_{\mathrm{wirt}}^{m}\overline{\partial}_{\mathrm{wirt}}^{\ell}\operatorname{card}\right)(z)=\begin{cases}\operatorname{card}(z),&m=\ell=0,\\ a_{m,\ell}\cdot\frac{z^{\ell-m}}{|z|^{2\ell-1}}+b_{m,\ell}\cdot\frac{z^{\ell-m+2}}{|z|^{2\ell+1}},&m\leq\ell,\ \ell\geq 1,\\ \frac{1}{2}+a_{1,0}\cdot\frac{\overline{z}}{|z|}+b_{1,0}\cdot\frac{z}{|z|},&m=1,\ \ell=0,\\ a_{m,\ell}\cdot\frac{\overline{z}}{|z|^{2\ell+1}}+b_{m,\ell}\cdot\frac{z}{|z|^{2\ell+1}},&m=\ell+1,\ m>1,\\ a_{m,\ell}\cdot\frac{\overline{z}^{\,m-\ell}}{|z|^{2m-1}}+b_{m,\ell}\cdot\frac{\overline{z}^{\,m-\ell-2}}{|z|^{2m-3}},&m\geq\ell+2.\end{cases}\] _Here, the numbers \(a_{m,\ell}\) and \(b_{m,\ell}\) are again non-zero and rational. Furthermore, note that all cases for possible choices of \(m\) and \(\ell\) are covered: The case \(m\leq\ell\) is split into \(\ell=0\) and \(\ell\neq 0\). The case \(m=\ell+1\) is split into \(m=1\) and \(m>1\). Then, the case \(m\geq\ell+2\) remains._ 

Proof.: For the following we always assume \(z\in\mathbb{C}\) with \(z\neq 0\). 
Then we can simplify \(\cos(\sphericalangle z)=\frac{\operatorname{Re}(z)}{|z|}\), so we can rewrite \[\operatorname{card}(z)=\frac{1}{2}\left(1+\frac{\operatorname{Re}(z)}{|z|} \right)z=\frac{1}{2}z+\frac{1}{4}\frac{(z+\overline{z})\,z}{|z|}=\frac{1}{2}z +\frac{1}{4}\frac{z^{2}}{|z|}+\frac{|z|}{4}.\] First, we compute \[\overline{\partial}_{\operatorname{wirt}}\left(|z|\right)=\frac{1}{2}\left( \frac{1}{2}\frac{2\operatorname{Re}(z)}{|z|}+\frac{i}{2}\frac{2\operatorname {Im}(z)}{|z|}\right)=\frac{1}{2}\frac{z}{|z|} \tag{5.4}\] and similarly \[\partial_{\operatorname{wirt}}\left(|z|\right)=\frac{1}{2}\frac{\overline{z} }{|z|}. \tag{5.5}\] We deduce \[\overline{\partial}_{\operatorname{wirt}}\operatorname{card}(z)\overset{(5.1),(5.4)}{=}\underbrace{\frac{1}{4}\cdot\left(-\frac{1}{2} \right)}_{=:b_{0,1}}\cdot\frac{z^{3}}{|z|^{3}}+\underbrace{\frac{1}{8}}_{=:a_ {0,1}}\cdot\frac{z}{|z|}.\] Using induction, we derive \[\overline{\partial}_{\operatorname{wirt}}^{\ell+1}\operatorname{card}(z) =a_{0,\ell}\,\overline{\partial}_{\operatorname{wirt}}\left(\frac{z^{\ell}}{|z|^{2 \ell-1}}\right)+b_{0,\ell}\,\overline{\partial}_{\operatorname{wirt}}\left( \frac{z^{\ell+2}}{|z|^{2\ell+1}}\right)\overset{(5.1)}{=}\underbrace{a_{0,\ell}\cdot\left(-\frac{2\ell-1} {2}\right)}_{=:a_{0,\ell+1}}\cdot\frac{z^{\ell+1}}{|z|^{2\ell+1}}+\underbrace{b_{0,\ell}\cdot \left(-\frac{2\ell+1}{2}\right)}_{=:b_{0,\ell+1}}\cdot\frac{z^{\ell+3}}{|z|^{2 \ell+3}},\] so the claim has been shown if \(m=0\). If we now fix any \(\ell\in\mathbb{N}\) and assume that the claim holds true for some \(m\in\mathbb{N}_{0}\) with \(m<\ell\), we get \[\partial_{\operatorname{wirt}}^{m+1}\overline{\partial}_{ \operatorname{wirt}}^{\ell}\operatorname{card}(z) =a_{m,\ell}\,\partial_{\operatorname{wirt}}\left(\frac{z^{\ell-m}}{|z|^{2 \ell-1}}\right)+b_{m,\ell}\,\partial_{\operatorname{wirt}}\left(\frac{z^{\ell+2- m}}{|z|^{2\ell+1}}\right)\overset{(5.2)}{=}\underbrace{a_{m,\ell}\cdot\left(\frac{1}{2}-m\right)}_{=:a_{m+1,\ell}}\cdot\frac{z^{\ell-m-1}} {|z|^{2\ell-1}}+\underbrace{b_{m,\ell}\cdot\left(\frac{3}{2}-m\right)}_{=:b_{ m+1,\ell}}\cdot\frac{z^{\ell-m+1}}{|z|^{2\ell+1}},\] so the claim holds true if \(m\leq\ell\). The case \(m=\ell+1\) is split into the case \(m=1\) and \(m>1\). 
If \(m=1\), then \(\ell=0\) and we compute \[\partial_{\operatorname{wirt}}\operatorname{card}(z)\overset{(5.2),(5.5)}{=}\frac{1}{2}+\underbrace{\frac{1}{4}\left(2-\frac{1}{2}\right)}_{=:b_{1,0}}\cdot\frac {z}{|z|}+\underbrace{\frac{1}{8}}_{=:a_{1,0}}\cdot\frac{\overline{z}}{|z|}.\] If \(m>1\) we get \[\partial_{\operatorname{wirt}}^{\ell+1}\overline{\partial}_{ \operatorname{wirt}}^{\ell}\operatorname{card}(z) =a_{\ell,\ell}\,\partial_{\operatorname{wirt}}\left(\frac{1}{|z|^{2\ell-1}} \right)+b_{\ell,\ell}\,\partial_{\operatorname{wirt}}\left(\frac{z^{2}}{|z |^{2\ell+1}}\right)\overset{(5.2),(5.3)}{=}\underbrace{a_{\ell,\ell}\cdot\left(-\frac{2\ell-1}{2}\right)}_{=:a_{\ell+1,\ell}} \cdot\frac{\overline{z}}{|z|^{2\ell+1}}+\underbrace{b_{\ell,\ell}\cdot\left(2 -\frac{2\ell+1}{2}\right)}_{=:b_{\ell+1,\ell}}\cdot\frac{z}{|z|^{2\ell+1}}.\] Next, we deal with the case \(m=\ell+2\): Here we see \[\partial_{\operatorname{wirt}}^{\ell+2}\overline{\partial}_{ \operatorname{wirt}}^{\ell}\operatorname{card}(z) =\partial_{\operatorname{wirt}}\left(\frac{1}{2}\delta_{(m,\ell)=(1,0)}+a_{ \ell+1,\ell}\cdot\frac{\overline{z}}{|z|^{2\ell+1}}+b_{\ell+1,\ell}\cdot\frac{z}{|z|^{2 \ell+1}}\right)\overset{(5.2),(5.3)}{=}\underbrace{a_{\ell+1,\ell}\cdot\left(-\frac{2\ell+1}{2}\right)}_{=:a_{\ell+2,\ell}}\cdot\frac{\overline{z}^{2}}{|z|^{2\ell+3}}+\underbrace{b_{\ell+1,\ell}\cdot\left(1-\frac{2\ell+1}{2}\right)}_{=:b_{\ell+2,\ell}}\cdot\frac{1}{|z|^{2\ell+1}}.\] Finally, for \(m\geq\ell+2\) the claim follows by induction over \(m\), using (5.3): \[\partial_{\operatorname{wirt}}^{m+1}\overline{\partial}_{\operatorname{wirt}}^{\ell}\operatorname{card}(z)=a_{m,\ell}\,\partial_{\operatorname{wirt}}\left(\frac{\overline{z}^{\,m-\ell}}{|z|^{2m-1}}\right)+b_{m,\ell}\,\partial_{\operatorname{wirt}}\left(\frac{\overline{z}^{\,m-\ell-2}}{|z|^{2m-3}}\right)\overset{(5.3)}{=}\underbrace{a_{m,\ell}\cdot\left(-\frac{2m-1}{2}\right)}_{=:a_{m+1,\ell}}\cdot\frac{\overline{z}^{\,m+1-\ell}}{|z|^{2m+1}}+\underbrace{b_{m,\ell}\cdot\left(-\frac{2m-3}{2}\right)}_{=:b_{m+1,\ell}}\cdot\frac{\overline{z}^{\,m-\ell-1}}{|z|^{2m-1}},\] which is of the claimed form. Since in every step of the preceding inductions the numbers \(a_{m,\ell}\) and \(b_{m,\ell}\) are multiplied by non-zero rational factors, they are all non-zero and rational, which completes the proof. 
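For illustration, we note a direct consequence of Theorem 5.3 (using only that \(b<0\) and that the \(q_{m,\ell}\) are non-zero): for every \(z_{0}\in\mathbb{C}\) with \(|z_{0}|+b>0\) one has \(|z_{0}|>-b>0\), and hence \[\sigma(z_{0})=z_{0}\left(1+\frac{b}{|z_{0}|}\right)\neq 0,\qquad 1+\frac{b}{2}\cdot\frac{1}{|z_{0}|}>\frac{1}{2},\qquad b\cdot q_{m,\ell}\cdot\frac{z_{0}^{\ell-m+1}}{|z_{0}|^{2\ell+1}}\neq 0,\qquad b\cdot q_{m,\ell}\cdot\frac{\overline{z_{0}}^{\,m-\ell-1}}{|z_{0}|^{2m-1}}\neq 0,\] so none of the mixed Wirtinger derivatives of \(\sigma_{\mathrm{modReLU},b}\) vanishes at any such point; this is precisely the kind of non-degeneracy that the notion of admissibility asks for. 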
## 6. Optimality 

In this section we investigate in which sense the approximation order obtained in Theorem 4.2 is optimal. We first state a lower bound for the approximation of real-valued \(C^{k}\)-functions, where the parameters of the approximating family are assumed to depend continuously on the function that is to be approximated. 
**Theorem 6.1**.: _Let \(s,k\in\mathbb{N}\). Then there is a constant \(c=c(s,k)>0\) with the following property: For any \(m\in\mathbb{N}\) and any map \(\overline{a}:C^{k}([-1,1]^{s};\mathbb{R})\to\mathbb{R}^{m}\) that is continuous with respect to some norm on \(C^{k}([-1,1]^{s};\mathbb{R})\) and any (possibly discontinuous) map \(M_{m}:\mathbb{R}^{m}\to C([-1,1]^{s};\mathbb{R})\) we have_ \[\sup_{\begin{subarray}{c}f\in C^{k}([-1,1]^{s};\mathbb{R})\\ \|f\|_{C^{k}([-1,1]^{s};\mathbb{R})}\leq 1\end{subarray}}\|f-M_{m}(\overline{a} (f))\|_{L^{\infty}([-1,1]^{s};\mathbb{R})}\geq c\cdot m^{-k/s}.\] Using this result we can prove the following theorem in the context of CVNNs. **Theorem 6.2**.: _Let \(n\in\mathbb{N}\) and \(k\in\mathbb{N}\). Then there is a constant \(c=c(n,k)>0\) with the following property: For any \(m\in\mathbb{N}\), \(\phi\in C(\mathbb{C};\mathbb{C})\) and any map_ \[\eta:\quad C^{k}\left(\Omega_{n};\mathbb{C}\right)\to\left(\mathbb{C}^{n} \right)^{m}\times\mathbb{C}^{m}\times\mathbb{C}^{m},\quad g\mapsto\left(\eta _{1}(g),\eta_{2}(g),\eta_{3}(g)\right)\] _which is continuous with respect to some norm on \(C^{k}(\Omega_{n};\mathbb{C})\), there exists \(f\in C^{k}\left(\Omega_{n};\mathbb{C}\right)\) satisfying \(\|f\|_{C^{k}(\Omega_{n};\mathbb{C})}\leq 1\) and_ \[\|f-\Phi(f)\|_{L^{\infty}(\Omega_{n};\mathbb{C})}\geq c\cdot m^{-k/(2n)},\] _where \(\Phi(f)\in C(\Omega_{n};\mathbb{C})\) is given by_ \[\Phi(f)(z):=\sum_{j=1}^{m}\left(\eta_{3}(f)\right)_{j}\phi\left(\left[\eta_{1} (f)\right]_{j}^{T}z+\left(\eta_{2}(f)\right)_{j}\right).\] Proof.: The idea is to apply Theorem 6.1. This theorem yields the existence of a constant \(c_{1}=c_{1}(n,k)\) with the property that for any \(m\in\mathbb{N}\), \(\phi\in C(\mathbb{C};\mathbb{C})\) and any map \[\widetilde{\eta}:\quad C^{k}\left([-1,1]^{2n};\mathbb{R}\right)\to\left(\mathbb{C }^{n}\right)^{m}\times\mathbb{C}^{m}\times\mathbb{C}^{m},\quad g\mapsto \left(\widetilde{\eta}_{1}(g),\widetilde{\eta}_{2}(g),\widetilde{\eta}_{3}(g)\right)\] which is continuous with respect to some norm on \(C^{k}([-1,1]^{2n};\mathbb{R})\) there exists \(\widetilde{f}\in C^{k}\left([-1,1]^{2n};\mathbb{R}\right)\) with \(\|\widetilde{f}\|_{C^{k}([-1,1]^{2n};\mathbb{R})}\leq 1\) satisfying \[\left\|\widetilde{f}-\mathrm{Re}\left(\widetilde{\Phi}(\widetilde{f})\right) \right\|_{L^{\infty}([-1,1]^{2n};\mathbb{R})}\geq c_{1}\cdot\left(2m(n+2) \right)^{-k/2n}=c\cdot m^{-k/(2n)} \tag{6.1}\] with \(c:=c_{1}\cdot(2(n+2))^{-k/2n}\). 
Here, with \(\varphi_{n}\) as introduced in (1.2) on page 5, we define the map \(\widetilde{\Phi}(\widetilde{f})\in C([-1,1]^{2n};\mathbb{C})\) via \[\widetilde{\Phi}(\widetilde{f})(x):=\sum_{j=1}^{m}\left(\widetilde{\eta}_{3}( \widetilde{f})\right)_{j}\phi\left(\left[\widetilde{\eta}_{1}(\widetilde{f}) \right]_{j}^{T}\varphi_{n}(x)+\left(\widetilde{\eta}_{2}(\widetilde{f})\right) _{j}\right),\quad x\in[-1,1]^{2n}.\] If we then have any map \[\eta:\quad C^{k}\left(\Omega_{n};\mathbb{C}\right)\to\left(\mathbb{C}^{n} \right)^{m}\times\mathbb{C}^{m}\times\mathbb{C}^{m},\quad g\mapsto\left(\eta _{1}(g),\eta_{2}(g),\eta_{3}(g)\right),\] which is continuous with respect to some norm \(\|\cdot\|_{V}\) on \(C^{k}\left(\Omega_{n};\mathbb{C}\right)\), we can define the corresponding map \(\widetilde{\eta}\) via \[\widetilde{\eta}:\quad C^{k}\left([-1,1]^{2n};\mathbb{R}\right)\to\left(\mathbb{C }^{n}\right)^{m}\times\mathbb{C}^{m}\times\mathbb{C}^{m},\quad g\mapsto \left(\widetilde{\eta}_{1}(g),\widetilde{\eta}_{2}(g),\widetilde{\eta}_{3}(g)\right)\] with \[\widetilde{\eta}_{i}(g):=\eta_{i}\left(g\circ\varphi_{n}^{-1}\big{|}_{\Omega_{ n}}\right)\] for every \(g\in C^{k}\left([-1,1]^{2n};\mathbb{R}\right)\). Note that \(\widetilde{\eta}\) is continuous with respect to the norm \(\|\cdot\|_{\widetilde{V}}\) defined on \(C^{k}([-1,1]^{2n};\mathbb{R})\) as \[\|\widetilde{f}\|_{\widetilde{V}}:=\left\|\widetilde{f}\circ\varphi_{n}^{-1} \big{|}_{\Omega_{n}}\right\|_{V}\] for every \(\widetilde{f}\in C^{k}([-1,1]^{2n};\mathbb{R})\). Now take \(\widetilde{f}\in C^{k}\left([-1,1]^{2n};\mathbb{R}\right)\) with \(\|\widetilde{f}\|_{C^{k}\left([-1,1]^{2n};\mathbb{R}\right)}\leq 1\) satisfying (6.1). For the function defined as \(f:=\widetilde{f}\circ\varphi_{n}^{-1}\big{|}_{\Omega_{n}}\) we see that \(f\in C^{k}\left(\Omega_{n};\mathbb{R}\right)\subseteq C^{k}\left(\Omega_{n}; \mathbb{C}\right)\), \(\|f\|_{C^{k}\left(\Omega_{n};\mathbb{C}\right)}\leq 1\) and we furthermore deduce \[\left\|f-\Phi(f)\right\|_{L^{\infty}\left(\Omega_{n};\mathbb{C}\right)}\geq\left\|f-\operatorname{Re}(\Phi(f))\right\|_{L^{\infty}\left(\Omega_{n};\mathbb{R}\right)}=\left\|\widetilde{f}-\operatorname{Re}\left(\widetilde{\Phi}(\widetilde{f})\right)\right\|_{L^{\infty}\left([-1,1]^{2n};\mathbb{R}\right)}\overset{(6.1)}{\geq}c\cdot m^{-k/(2n)},\] which completes the proof. 

Next, we show that every function in \(C^{k}\left(\Omega_{n};\mathbb{C}\right)\) can be approximated at the rate \(m^{-k/(2n-1)}\) by sums of \(m\) ridge functions. 

**Proposition 6.4**.: _Let \(n,k\in\mathbb{N}\). Then there is a constant \(c=c(n,k)>0\) with the following property: For every \(m\in\mathbb{N}\) there are vectors \(b_{1},...,b_{m}\in\mathbb{C}^{n}\) with \(\|b_{j}\|=1/\sqrt{2n}\), such that for every \(f\in C^{k}\left(\Omega_{n};\mathbb{C}\right)\) there exist functions \(g_{1},...,g_{m}\in C\left(\Omega_{1};\mathbb{C}\right)\) satisfying_ \[\left\|f-\sum_{j=1}^{m}g_{j}\left(b_{j}^{T}\bullet\right)\right\|_{L^{\infty}(\Omega_{n};\mathbb{C})}\leq c\cdot m^{-k/(2n-1)}\cdot\left\|f\right\|_{C^{k}(\Omega_{n};\mathbb{C})}.\] 

Proof.: By a known result on the approximation of \(C^{k}\)-functions by sums of ridge functions, one can choose a constant \(c_{1}=c_{1}(n,k)>0\) and vectors \(a_{1},...,a_{m}\in\mathbb{R}^{2n}\), which we may assume to satisfy \(\|a_{j}\|=1/\sqrt{2n}\) (after rescaling the ridge profiles accordingly), such that every \(\widetilde{f}\in C^{k}\left([-1,1]^{2n};\mathbb{R}\right)\) admits functions \(\widetilde{g}_{1},...,\widetilde{g}_{m}\in C\left(\mathbb{R};\mathbb{R}\right)\) with \[\left\|\widetilde{f}-\sum_{j=1}^{m}\widetilde{g}_{j}\left(a_{j}^{T}\bullet\right)\right\|_{L^{\infty}([-1,1]^{2n};\mathbb{R})}\leq c_{1}\cdot m^{-k/(2n-1)}\cdot\|\widetilde{f}\|_{C^{k}([-1,1]^{2n};\mathbb{R})}.\] We define \(b_{j}\in\mathbb{C}^{n}\) via \(\left(b_{j}\right)_{\ell}:=\left(a_{j}\right)_{\ell}-i\cdot\left(a_{j}\right)_{n+\ell}\) for \(\ell\in\{1,...,n\}\), so that \(\|b_{j}\|=\|a_{j}\|=1/\sqrt{2n}\). Let first \(f\in C^{k}\left(\Omega_{n};\mathbb{R}\right)\) be real-valued and choose \(\widetilde{g}_{1},...,\widetilde{g}_{m}\) as above for \(\widetilde{f}:=f\circ\varphi_{n}\). We then define \(g_{j}\in C(\Omega_{1};\mathbb{R})\) by \(g_{j}(z):=\widetilde{g}_{j}\left(\mathrm{Re}(z)\right)\) for any \(j\in\{1,...,m\}\). 
We compute \[g_{j}\left(\left(b_{j}\right)^{T}z\right) =\widetilde{g}_{j}\left(\mathrm{Re}\left(\sum_{\ell=1}^{n}\left(b_ {j}\right)_{\ell}\cdot z_{\ell}\right)\right)=\widetilde{g}_{j}\left(\mathrm{Re}\left(\sum_{\ell=1}^{n}\left( \left(a_{j}\right)_{\ell}-i\cdot\left(a_{j}\right)_{n+\ell}\right)\left(\varphi _{n}^{-1}(z)_{\ell}+i\cdot\varphi_{n}^{-1}(z)_{n+\ell}\right)\right)\right)=\widetilde{g}_{j}\left(\sum_{\ell=1}^{n}\left[\left(a_{j}\right) _{\ell}\varphi_{n}^{-1}(z)_{\ell}+\left(a_{j}\right)_{n+\ell}\varphi_{n}^{-1} (z)_{n+\ell}\right]\right)=\widetilde{g}_{j}\left(\left(a_{j}\right)^{T}\cdot\varphi_{n}^{- 1}(z)\right) \tag{6.2}\] and deduce \[\left\|f(z)-\sum_{j=1}^{m}g_{j}\left(b_{j}^{T}z\right)\right\|_{L ^{\infty}(\Omega_{n};\mathbb{R})} =\left\|(f\circ\varphi_{n})(x)-\sum_{j=1}^{m}g_{j}\left(b_{j}^{T} \cdot\varphi_{n}(x)\right)\right\|_{L^{\infty}([-1,1]^{2n};\mathbb{R})}\overset{(6.2)}{=}\left\|(f\circ\varphi_{n})(x)-\sum_{j =1}^{m}\widetilde{g}_{j}\left(a_{j}^{T}\cdot x\right)\right\|_{L^{\infty}([-1,1 ]^{2n};\mathbb{R})}\leq c_{1}\cdot m^{-k/(2n-1)}\cdot\|f\circ\varphi_{n}\|_{C^{k}([- 1,1]^{2n};\mathbb{R})}\,.\] For a general function \(f\in C^{k}\left(\Omega_{n};\mathbb{C}\right)\) we can pick functions \(g_{1}^{\mathrm{Re}},...,g_{m}^{\mathrm{Re}},g_{1}^{\mathrm{Im}},...,g_{m}^{ \mathrm{Im}}\in C\left(\Omega_{1};\mathbb{R}\right)\) satisfying \[\left\|\mathrm{Re}(f(z))-\sum_{j=1}^{m}g_{j}^{\mathrm{Re}}\left( b_{j}^{T}z\right)\right\|_{L^{\infty}(\Omega_{n};\mathbb{R})} \leq c_{1}\cdot m^{-k/(2n-1)}\cdot\left\|\mathrm{Re}\left(f\circ \varphi_{n}\right)\right\|_{C^{k}([-1,1]^{2n};\mathbb{R})},\] \[\left\|\mathrm{Im}(f(z))-\sum_{j=1}^{m}g_{j}^{\mathrm{Im}}\left( b_{j}^{T}z\right)\right\|_{L^{\infty}(\Omega_{n};\mathbb{R})} \leq c_{1}\cdot m^{-k/(2n-1)}\cdot\left\|\mathrm{Im}\left(f\circ \varphi_{n}\right)\right\|_{C^{k}([-1,1]^{2n};\mathbb{R})}.\] Defining \(g_{j}:=g_{j}^{\mathrm{Re}}+i\cdot g_{j}^{\mathrm{Im}}\) yields \[\left\|f(z)-\sum_{j=1}^{m}g_{j}\left(b_{j}^{T}z\right)\right\|_{L^{\infty}( \Omega_{n};\mathbb{C})}\leq c_{1}\cdot\sqrt{2}\cdot m^{-k/(2n-1)}\cdot\|f\|_{C ^{k}(\Omega_{n};\mathbb{C})}\,,\] completing the proof. 

**Lemma 6.5**.: _Let \(\{u_{\ell}\}_{\ell=1}^{\infty}\) be an enumeration of the set of complex polynomials in \(z\) and \(\overline{z}\) with coefficients in \(\mathbb{Q}+i\mathbb{Q}\). Then there exists a function \(\phi\in C^{\infty}\left(\mathbb{C};\mathbb{C}\right)\) with the following properties:_ 1. _For every_ \(\ell\in\mathbb{N}\) _and_ \(z\in\Omega_{1}\) _one has_ \[\phi(z+3\ell)=u_{\ell}(z).\] 2. \(\phi\) _is admissible._ Proof.: Let \(\psi\in C^{\infty}\left(\mathbb{C};\mathbb{R}\right)\) with \(0\leq\psi\leq 1\) and \[\psi\big{|}_{\Omega_{1}}\equiv 1,\qquad\mathrm{supp}(\psi)\subseteq\widetilde{ \Omega},\] where \(\widetilde{\Omega}:=\left\{z\in\mathbb{C}:\ \left|\mathrm{Re}\left(z\right)\right|, \left|\mathrm{Im}\left(z\right)\right|<\frac{3}{2}\right\}\). We then define \[\phi:=\sum_{\ell=1}^{\infty}u_{\ell}(\bullet-3\ell)\cdot\psi(\bullet-3\ell)+f \cdot\psi,\] where \(f(z)=e^{\operatorname{Re}(z)}\). Note that \(\phi\) is smooth since it is a locally finite sum of smooth functions. 
Furthermore, \(\phi\) is admissible, since the calculation in the proof of Proposition 5.1 shows for \(z\) in the interior of \(\Omega_{1}\) and \(\rho:\mathbb{R}\to\mathbb{R},\ t\mapsto e^{t}\) that \[\left|\partial_{\operatorname{wirt}}^{m}\overline{\partial}_{\operatorname{wirt} }^{\ell}\phi(z)\right|=\left|\partial_{\operatorname{wirt}}^{m}\overline{ \partial}_{\operatorname{wirt}}^{\ell}f(z)\right|=\frac{1}{2^{m+\ell}}\left| \rho^{(m+\ell)}(\operatorname{Re}(z))\right|>0\] for every \(m,\ell\in\mathbb{N}_{0}\). Finally, property (1) follows directly by construction of \(\phi\) because \[(\widetilde{\Omega}+3\ell)\cap(\widetilde{\Omega}+3\ell^{\prime})=\varnothing\] for \(\ell\neq\ell^{\prime}\). For the special activation function constructed in Lemma 6.5 we can improve the result from Theorem 4.2. **Theorem 6.6**.: _Let \(k\in\mathbb{N}\) and \(n\in\mathbb{N}\). Then there is a constant \(c=c(n,k)>0\) such that for any \(m\in\mathbb{N}\) and \(f\in C^{k}\left(\Omega_{n};\mathbb{C}\right)\) there are \(\rho_{1},...,\rho_{m}\in\mathbb{C}^{n}\) and \(\ell_{1},...,\ell_{m}\in\mathbb{N}\), such that_ \[\left\|f(z)-\sum_{j=1}^{m}\phi\left(\rho_{j}^{T}z+3\ell_{j}\right)\right\|_{L^ {\infty}(\Omega_{n})}\leq c\cdot m^{-k/(2n-1)}\cdot\left\|f\right\|_{C^{k}( \Omega_{n};\mathbb{C})}.\] _Here, \(\phi\) is the function constructed in Lemma 6.5._ Proof.: We choose the constant \(c\) according to Proposition 6.4. Let \(m\in\mathbb{N}\) and \(f\in C^{k}\left(\Omega_{n};\mathbb{C}\right)\). Again, according to Proposition 6.4, we can choose \(\rho_{1},...,\rho_{m}\in\mathbb{C}^{n}\) with \(\left\|\rho_{j}\right\|=1/\sqrt{2n}\) and functions \(g_{1},...,g_{m}\in C\left(\Omega_{1};\mathbb{C}\right)\) with the property \[\left\|f-\sum_{j=1}^{m}g_{j}\left(\rho_{j}^{T}\cdot\bullet\right)\right\|_{L^ {\infty}(\Omega_{n})}\leq c\cdot m^{-k/(2n-1)}\cdot\left\|f\right\|_{C^{k}( \Omega_{n};\mathbb{C})}.\] Recall from Lemma 6.5 that \(\{u_{\ell}\}_{\ell=1}^{\infty}\) is an enumeration of the set of complex polynomials in \(z\) and \(\overline{z}\) with coefficients in \(\mathbb{Q}+i\mathbb{Q}\). Hence, using the complex version of the Stone-Weierstrass-Theorem (see for instance [13, Theorem 4.51]), pick \(\ell_{1},...,\ell_{m}\in\mathbb{N}\) such that \[\left\|g_{j}-u_{\ell_{j}}\right\|_{L^{\infty}(\Omega_{1};\mathbb{C})}\leq m^ {-1-k/(2n-1)}\cdot\left\|f\right\|_{C^{k}(\Omega_{n};\mathbb{C})} \tag{6.3}\] for every \(j\in\{1,...,m\}\). Since \(\phi\left(\bullet+3\ell\right)=u_{\ell}\) on \(\Omega_{1}\) and \(\rho_{j}^{T}z\in\Omega_{1}\) for \(j\in\{1,...,m\}\), \(\ell\in\mathbb{N}\) and \(z\in\Omega_{n}\) we estimate \[\left\|f(z)-\sum_{j=1}^{m}\phi\left(\rho_{j}^{T}z+3\ell_{j}\right) \right\|_{L^{\infty}(\Omega_{n};\mathbb{C})}\leq c\cdot m^{-k/(2n-1)}\cdot\left\|f\right\|_{C^{k}(\Omega_{n}; \mathbb{C})}+\sum_{j=1}^{m}\left\|g_{j}\left(\rho_{j}^{T}z\right)-u_{\ell_{j}}\left(\rho_{j}^{T}z \right)\right\|_{L^{\infty}(\Omega_{n};\mathbb{C})}\overset{(6.3)}{\leq}c\cdot m^{-k/(2n-1) }\cdot\left\|f\right\|_{C^{k}(\Omega_{n};\mathbb{C})}+m^{-k/(2n-1)}\cdot\left\| f\right\|_{C^{k}(\Omega_{n};\mathbb{C})}=(c+1)\cdot m^{-k/(2n-1)}\cdot\left\|f\right\|_{C^{k}(\Omega_{n}; \mathbb{C})}.\qed\] The parameters of the neural network in Theorem 6.6 do not continuously depend on the function \(f\), which is obvious since \(\ell_{1},...,\ell_{m}\) are natural numbers. Furthermore, if the parameters depended continuously on the function \(f\), this would be a contradiction to the optimality result in Theorem 6.2. 
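To put the two approximation rates side by side, consider the case \(n=1\) (a worked comparison; the exponents simply instantiate Theorem 4.2 and Theorem 6.6): \[\text{Theorem 4.2:}\quad\left\|f-g\right\|_{L^{\infty}(\Omega_{1})}\lesssim m^{-k/2}\cdot\left\|f\right\|_{C^{k}(\Omega_{1};\mathbb{C})},\qquad\text{Theorem 6.6:}\quad\left\|f-g\right\|_{L^{\infty}(\Omega_{1})}\lesssim m^{-k}\cdot\left\|f\right\|_{C^{k}(\Omega_{1};\mathbb{C})}.\] Thus, to guarantee an error of order \(\varepsilon\), the generic bound requires on the order of \(\varepsilon^{-2/k}\) neurons, whereas the special activation function from Lemma 6.5 only requires on the order of \(\varepsilon^{-1/k}\) neurons, at the price of a discontinuous parameter selection. 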
Last but not least, we show that there are certain admissible activation functions, for which one can not improve the result from Theorem 4.2, even if we do not assume a continuous weight selection. To this end, we first formulate a corresponding result in the real setting. The idea of the proof is based on [30, Theorem 4] but instead of the bound for the VC dimension of ReLU networks used in [30], we will employ a bound for the VC dimension stated in [3, Theorem 8.11]. For a detailed introduction of the concept of the VC dimension and related topics, see for instance [26, Chapter 6]. **Theorem 6.7**.: _Let \(n,k\in\mathbb{N}\) and_ \[\phi:\quad\mathbb{R}\to\mathbb{R},\quad\phi(x):=\frac{1}{1+e^{-x}}\] _be the sigmoid function. Then there is a constant \(c=c(n,k)>0\) with the following property: If \(\varepsilon\in(0,\frac{1}{2})\) and \(m\in\mathbb{N}\) are such that for every function \(f\in C^{k}\left([-1,1]^{n};\mathbb{R}\right)\) with \(\left\|f\right\|_{C^{k}\left([-1,1]^{n};\mathbb{R}\right)}\leq 1\) there exist coefficients \(\rho_{1},...,\rho_{m}\in\mathbb{R}^{n}\), \(\eta_{1},...,\eta_{m}\in\mathbb{R}\) and \(\sigma_{1},...,\sigma_{m}\in\mathbb{R}\) satisfying_ \[\left\|f(x)-\sum_{j=1}^{m}\sigma_{j}\cdot\phi\left(\rho_{j}^{T}x+\eta_{j} \right)\right\|_{L^{\infty}\left([-1,1]^{n};\mathbb{R}\right)}\leq\varepsilon,\] _then necessarily_ \[m\geq c\cdot\frac{\varepsilon^{-n/k}}{\ln\left(1/\varepsilon\right)}.\] Proof.: We first pick a function \(\psi\in C^{\infty}\left(\mathbb{R}^{n};\mathbb{R}\right)\) with the property that \(\psi(0)=1\) and \(\psi(x)=0\) for every \(x\in\mathbb{R}^{n}\) with \(\left|x\right|>\frac{1}{4}\). We then choose \[c_{1}=c_{1}(n,k):=\left(\left\|\psi\right\|_{C^{k}\left([-1,1]^{n};\mathbb{R} \right)}\right)^{-1}.\] Now, let \(\varepsilon\in(0,\frac{1}{2})\) and \(m\in\mathbb{N}\) be arbitrary with the property stated in the formulation of the theorem. If \(\varepsilon>\frac{c_{1}}{2}\cdot\frac{1}{6^{k}}\), then \(m\geq c\cdot\frac{\varepsilon^{-n/k}}{\ln(1/\varepsilon)}\) trivially holds (as long as \(c=c(n,k)\) is sufficiently small). Hence, we can assume that \(\varepsilon\leq\frac{c_{1}}{2}\cdot\frac{1}{6^{k}}\). Now, let \(N\) be the smallest integer with \(N\geq 2\), such that \[\frac{c_{1}}{2^{k+1}}\cdot N^{-k}\leq\varepsilon.\] Note that this implies \[N^{k}\geq\frac{c_{1}}{\varepsilon}\cdot\frac{1}{2^{k+1}}\geq\frac{c_{1}}{2^{k +1}}\cdot\frac{2}{c_{1}}\cdot 6^{k}=3^{k}\] and hence \(N\geq 3\), whence \(N-1\geq 2\). Therefore, since \(\frac{N}{2}\leq N-1\) because of \(N\geq 2\), it follows that \[\varepsilon<\frac{c_{1}}{2^{k+1}}\cdot(N-1)^{-k}\leq\frac{c_{1}}{2^{k+1}}2^{k }\cdot N^{-k}=\frac{c_{1}}{2}\cdot N^{-k}. \tag{6.4}\] Now, for every \(\alpha\in\{-N,...,N\}^{n}\) pick \(z_{\alpha}\in\{0,1\}\) arbitrary and let \(y_{\alpha}:=z_{\alpha}c_{1}N^{-k}\). Define the function \[f(x):=\sum_{\alpha\in\{-N,...,N\}^{n}}y_{\alpha}\psi\left(Nx-\alpha\right), \quad x\in\mathbb{R}^{n}.\] Clearly, \(f\in C^{\infty}(\mathbb{R}^{n};\mathbb{R})\). 
Furthermore, for any multi-index \(\mathbf{k}\in\mathbb{N}_{0}^{n}\) with \(\left|\mathbf{k}\right|\leq k\) we have \[\left\|\partial^{\mathbf{k}}f\right\|_{L^{\infty}\left([-1,1]^{n};\mathbb{R} \right)}\leq N^{\left|\mathbf{k}\right|}\cdot\max_{\alpha}\left|y_{\alpha} \right|\cdot\left\|\partial^{\mathbf{k}}\psi\right\|_{L^{\infty}\left([-1,1]^ {n};\mathbb{R}\right)}\leq N^{k}\cdot\max_{\alpha}\left|y_{\alpha}\right|\cdot \left\|\psi\right\|_{C^{k}\left([-1,1]^{n};\mathbb{R}\right)}\leq 1,\] so we conclude that \(\left\|f\right\|_{C^{k}\left([-1,1]^{n};\mathbb{R}\right)}\leq 1\). Additionally, for any fixed \(\beta\in\{-N,...,N\}^{n}\) we see \[f\left(\frac{\beta}{N}\right)=y_{\beta},\] which follows from the choice of \(\psi\). By assumption, we can choose suitable coefficients \(\rho_{1},...,\rho_{m}\in\mathbb{R}^{n}\), \(\eta_{1},...,\eta_{m}\in\mathbb{R}\) and \(\sigma_{1},...,\sigma_{m}\in\mathbb{R}\) such that \[\left\|f-g\right\|_{L^{\infty}\left([-1,1]^{n};\mathbb{R}\right)}\leq\varepsilon\] with \[g:=\sum_{j=1}^{m}\sigma_{j}\cdot\phi\left(\rho_{j}^{T}\cdot\bullet+\eta_{j}\right).\] Letting \[\widetilde{g}:=g(\bullet/N)=\sum_{j=1}^{m}\sigma_{j}\cdot\phi\left(\frac{\rho_{ j}^{T}}{N}\cdot\bullet+\eta_{j}\right) \tag{6.5}\] we see for every \(\alpha\in\{-N,...,N\}^{n}\) that \[\widetilde{g}(\alpha)=g\left(\frac{\alpha}{N}\right)\begin{cases}\geq y_{ \alpha}-\varepsilon=c_{1}N^{-k}-\varepsilon\overset{(6.4)}{>}\frac{c_{1}}{2}\cdot N^{-k},&\text{if }z_{\alpha}=1,\\ \leq y_{\alpha}+\varepsilon=\varepsilon\overset{(6.4)}{<}\frac{c_{1}}{2}\cdot N^{-k},&\text{if }z_{\alpha}=0.\end{cases}\] Hence, thresholding \(\widetilde{g}\) at the value \(\frac{c_{1}}{2}N^{-k}\) recovers the arbitrarily chosen pattern \((z_{\alpha})_{\alpha}\), so the class of functions of the form (6.5), i.e., of shallow sigmoid networks with \(m\) neurons in the hidden layer, shatters the \((2N+1)^{n}\geq N^{n}\) points \(\alpha\in\{-N,...,N\}^{n}\); its VC dimension is therefore at least \(N^{n}\). Combining this lower bound with the upper bound for the VC dimension of such networks provided by [3, Theorem 8.11] and using the relation between \(N\) and \(\varepsilon\) established above, a direct computation yields the claimed bound \(m\geq c\cdot\frac{\varepsilon^{-n/k}}{\ln\left(1/\varepsilon\right)}\) for a suitably chosen constant \(c=c(n,k)>0\). 
As a consequence of Theorem 6.7, we obtain a corresponding lower bound for complex-valued networks that use the complexified sigmoid function as activation function. 

**Corollary 6.8**.: _Let \(n,k\in\mathbb{N}\) and_ \[\phi:\quad\mathbb{C}\to\mathbb{C},\quad\phi(z):=\frac{1}{1+e^{-\operatorname{Re}(z)}}.\] _Then there is a constant \(c=c(n,k)>0\) with the following property: If \(\varepsilon\in(0,\frac{1}{2})\) and \(m\in\mathbb{N}\) are such that for every \(f\in C^{k}\left(\Omega_{n};\mathbb{C}\right)\) with \(\|f\|_{C^{k}(\Omega_{n};\mathbb{C})}\leq 1\) there exist coefficients \(\rho_{1},...,\rho_{m}\in\mathbb{C}^{n}\), \(\eta_{1},...,\eta_{m}\in\mathbb{C}\) and \(\sigma_{1},...,\sigma_{m}\in\mathbb{C}\) satisfying_ \[\left\|f(z)-\sum_{j=1}^{m}\sigma_{j}\cdot\phi\left(\rho_{j}^{T}z+\eta_{j}\right)\right\|_{L^{\infty}(\Omega_{n};\mathbb{C})}\leq\varepsilon,\] _then necessarily_ \[m\geq c\cdot\frac{\varepsilon^{-2n/k}}{\ln\left(1/\varepsilon\right)}.\] 

Proof.: We choose the constant \(c=c(2n,k)\) according to the previous Theorem 6.7 and let \(\varphi_{n}\) as in (1.2). Then, let \(\varepsilon\in(0,\frac{1}{2})\) and \(m\in\mathbb{N}\) with the properties assumed in the statement of the corollary. If we then take an arbitrary function \(f\in C^{k}\left([-1,1]^{2n};\mathbb{R}\right)\) with \(\left\|f\right\|_{C^{k}\left([-1,1]^{2n};\mathbb{R}\right)}\leq 1\), we deduce the existence of \(\rho_{1},...,\rho_{m}\in\mathbb{C}^{n}\), \(\eta_{1},...,\eta_{m}\in\mathbb{C}\) and \(\sigma_{1},...,\sigma_{m}\in\mathbb{C}\), such that \[\left\|f(x)-\operatorname{Re}\left(\sum_{j=1}^{m}\sigma_{j}\cdot \phi\left(\rho_{j}^{T}\cdot\varphi_{n}(x)+\eta_{j}\right)\right)\right\|_{L^{ \infty}\left([-1,1]^{2n};\mathbb{R}\right)}\leq \left\|(f\circ\varphi_{n}^{-1})(z)-\sum_{j=1}^{m}\sigma_{j}\cdot \phi\left(\rho_{j}^{T}z+\eta_{j}\right)\right\|_{L^{\infty}\left(\Omega_{n}; \mathbb{C}\right)}\leq\varepsilon.\] In the next step, we want to show that \[x\mapsto\operatorname{Re}\left(\sum_{j=1}^{m}\sigma_{j}\cdot\phi\left(\rho_{j} ^{T}\cdot\varphi_{n}(x)+\eta_{j}\right)\right)\] is a real-valued shallow neural network with \(m\) neurons in the hidden layer and the real sigmoid function as activation function. Then the claim follows using Theorem 6.7. For every \(j\in\{1,...,m\}\) we pick a matrix \(\widetilde{\rho_{j}}\in\mathbb{R}^{2n\times 2}\) with the property that one has \[\widetilde{\rho_{j}}^{T}\cdot\varphi_{n}^{-1}(z)=\varphi_{1}^{-1}\left(\rho_{ j}^{T}\cdot z\right)\] for every \(z\in\mathbb{C}^{n}\). This is possible, since this is equivalent to \[\widetilde{\rho_{j}}^{T}v=\varphi_{1}^{-1}(\rho_{j}^{T}\varphi_{n}(v))\] for all \(v\in\mathbb{R}^{2n}\), where the right-hand side is an \(\mathbb{R}\)-linear map \(\mathbb{R}^{2n}\to\mathbb{R}^{2}\). Denoting the first column of \(\widetilde{\rho_{j}}\) with \(\widehat{\rho_{j}}\), we get \[\widehat{\rho_{j}}^{T}\cdot\varphi_{n}^{-1}(z)=\operatorname{Re}\left(\rho_{ j}^{T}\cdot z\right).\] Writing \(\gamma\) for the classical real-valued sigmoid function (i.e. 
\(\gamma(x)=\frac{1}{1+e^{-x}}\)), we deduce \[\operatorname{Re}\left(\sum_{j=1}^{m}\sigma_{j}\cdot\phi\left( \rho_{j}^{T}\cdot\varphi_{n}(x)+\eta_{j}\right)\right) =\operatorname{Re}\left(\sum_{j=1}^{m}\sigma_{j}\cdot\gamma\left( \operatorname{Re}\left(\rho_{j}^{T}\cdot\varphi_{n}(x)+\eta_{j}\right)\right)\right)\] \[=\operatorname{Re}\left(\sum_{j=1}^{m}\sigma_{j}\cdot\gamma\left( \widehat{\rho_{j}}^{T}x+\operatorname{Re}\left(\eta_{j}\right)\right)\right)\] \[=\sum_{j=1}^{m}\operatorname{Re}\left(\sigma_{j}\right)\cdot \gamma\left(\widehat{\rho_{j}}^{T}x+\operatorname{Re}\left(\eta_{j}\right) \right),\] where in the last step we used that \(\gamma\) is real-valued. As noted above, this completes the proof. As a further corollary, we can formulate the result in the context of our approximation order derived in Theorem 4.2. **Corollary 6.9**.: _Let \(n\in\mathbb{N}\), \(k\in\mathbb{N}\) and_ \[\phi:\quad\mathbb{C}\to\mathbb{C},\quad\phi(z):=\frac{1}{1+e^{-\operatorname{ Re}(z)}}.\] _Then there exists a constant \(c=c(n,k)>0\) with the following property: For any \(m\in\mathbb{N}^{\geq 2}\) there exists a function \(f\in C^{k}\left(\Omega_{n};\mathbb{C}\right)\) with \(\left\|f\right\|_{C^{k}\left(\Omega_{n};\mathbb{C}\right)}\leq 1\) such that for every choice of coefficients \(\rho_{1},...,\rho_{m}\in\mathbb{C}^{n}\), \(\eta_{1},...,\eta_{m}\in\mathbb{C}\) and \(\sigma_{1},...,\sigma_{m}\in\mathbb{C}\), we have_ \[\left\|f(z)-\sum_{j=1}^{m}\sigma_{j}\cdot\phi\left(\rho_{j}^{T}z+\eta_{j} \right)\right\|_{L^{\infty}\left(\Omega_{n};\mathbb{C}\right)}\geq c\cdot\left( m\cdot\ln(m)\right)^{-k/2n}.\] Proof.: Choose \(c=c(n,k)>0\) to be exactly determined later, where we may assume \[c\cdot(2\cdot\ln(2))^{-k/2n}<\frac{1}{2}\] and hence \(c\cdot(m\cdot\ln(m))^{-k/2n}<\frac{1}{2}\) for all \(m\in\mathbb{N}^{\geq 2}\). Assume towards a contradiction that for every \(f\in C^{k}\left(\Omega_{n};\mathbb{C}\right)\) with \(\|f\|_{C^{k}(\Omega_{n};\mathbb{C})}\leq 1\) there are coefficients \(\rho_{1},...,\rho_{m}\in\mathbb{C}^{n},\sigma_{1},...,\sigma_{m}\in\mathbb{C}\) and \(\eta_{1},...,\eta_{m}\in\mathbb{C}\) such that \[\left\|f(z)-\sum_{j=1}^{m}\sigma_{j}\cdot\phi\left(\rho_{j}^{T}z+\eta_{j} \right)\right\|_{L^{\infty}\left(\Omega_{n};\mathbb{C}\right)}<c\cdot\left(m \cdot\ln(m)\right)^{-k/2n}.\] Applying Corollary 6.8 we then get \[m\geq c_{1}\cdot\frac{\varepsilon^{-2n/k}}{\ln\left(1/\varepsilon\right)}\] with \(\varepsilon=c\cdot\left(m\cdot\ln m\right)^{-k/2n}\in\left(0,\frac{1}{2}\right)\) and \(c_{1}=c_{1}(n,k)>0\). Let \(\alpha:=2n/k\) and \(c_{2}=c_{2}(\alpha)=c_{2}(n,k)>0\) such that \(\ln x\leq c_{2}x^{\alpha/2}\) for every \(x\geq 1\). Therefore we observe \[m\geq c_{1}\cdot\frac{\varepsilon^{-\alpha}}{\ln\left(1/\varepsilon\right)} \geq\frac{c_{1}}{c_{2}}\varepsilon^{-\alpha/2},\] which implies \[\ln(m)\geq\ln\left(\frac{c_{1}}{c_{2}}\right)+\frac{\alpha}{2}\cdot\ln(1/ \varepsilon)\geq c_{3}\cdot\ln(1/\varepsilon)\] for a suitable constant \(c_{3}=c_{3}(n,k)>0\); here we used that \(\varepsilon<\frac{1}{2}\), so that \(\ln\left(1/\varepsilon\right)\geq\ln(2)\geq 0\). Overall we then get \[m\geq c_{1}\cdot\frac{\varepsilon^{-\alpha}}{\ln(1/\varepsilon)}=c_{1}\cdot c ^{-\alpha}\cdot\frac{m\cdot\ln(m)}{\ln(1/\varepsilon)}\geq c_{1}\cdot c_{3} \cdot c^{-\alpha}\cdot\frac{m\cdot\ln(m)}{\ln(m)}=c_{1}\cdot c_{3}\cdot c^{- \alpha}\cdot m.\] Thus, choosing \(c>0\) in a way such that \(c^{\alpha}<c_{1}\cdot c_{3}\) yields the desired contradiction. 
Since \(\alpha,c_{1}\) and \(c_{3}\) all only depend on \(n\) and \(k\), this is possible for a suitable \(c=c(n,k)>0\). The previous corollary shows that, using the activation function \[\phi:\quad\mathbb{C}\to\mathbb{C},\quad\phi(z):=\frac{1}{1+e^{-\operatorname{ Re}(z)}},\] we cannot expect an order of approximation which is higher than \(m^{-k/(2n)}\) (up to a log factor), even if we allow discontinuous weight selection. Note also that \(\phi\) is admissible because of Proposition 5.1. In particular, we see that the approximation order of \(m^{-k/(2n-1)}\) derived for the special activation function in Theorem 6.6 cannot be achieved for arbitrary admissible activation functions. ## Appendix A Divided Differences Divided differences are well-known in numerical mathematics as they are for example used to calculate the coefficients of an interpolation polynomial in its Newton representation. In our case, we are interested in divided differences, since they can be used to obtain a generalization of the classical mean-value theorem for differentiable functions: Given an interval \(I\subseteq\mathbb{R}\) and \(n+1\) pairwise distinct data points \(x_{0},...,x_{n}\in I\) as well as an \(n\)-times differentiable real-valued function \(f:I\to\mathbb{R}\), there exists \(\xi\in\left(\min\left\{x_{0},...,x_{n}\right\},\max\left\{x_{0},...,x_{n} \right\}\right)\), such that \[f\left[x_{0},...,x_{n}\right]=\frac{f^{(n)}(\xi)}{n!},\] where the left-hand side is a divided difference of \(f\) (defined below). The classical mean-value theorem is obtained in the case \(n=1\). Our goal in this section is to generalize this result to a multivariate setting by considering divided differences in multiple variables. Such a generalization is probably well-known, but since we could not locate a convenient reference and to make the paper more self-contained, we provide a proof. Let us first define divided differences formally. Given \(n+1\) data points \(\left(x_{0},y_{0}\right),...,\left(x_{n},y_{n}\right)\in\mathbb{R}\times\mathbb{R}\) with pairwise distinct \(x_{k}\), we define the associated divided differences recursively via \[\left[y_{k}\right] :=y_{k},\ k\in\{0,...,n\},\] \[\left[y_{k},...,y_{k+j}\right] :=\frac{\left[y_{k+1},...,y_{k+j}\right]-\left[y_{k},...,y_{k+j-1} \right]}{x_{k+j}-x_{k}},\ j\in\{1,...,n\},\ k\in\{0,...,n-j\}.\] If the data points are defined by a function \(f\) (i.e. \(y_{k}=f\left(x_{k}\right)\) for all \(k\in\{0,...,n\}\)), we write \[\left[x_{k},...,x_{k+j}\right]f:=\left[y_{k},...,y_{k+j}\right].\] We first consider an alternative representation of divided differences, the so-called _Hermite-Genocchi-Formula_. To state it, we introduce the notation \(\Sigma^{k}\) for a certain \(k\)-dimensional simplex. **Definition A.1**.: Let \(s\in\mathbb{N}\). Then we define \[\Sigma^{s}:=\left\{x\in\mathbb{R}^{s}:\ x_{\ell}\geq 0\ \text{for all }\ell\text{ and}\sum_{\ell=1}^{s}x_{\ell}\leq 1\right\}.\] The identity \(\lambda^{s}\left(\Sigma^{s}\right)=\frac{1}{s!}\) holds true. A proof for the fact that the volume of \(\Sigma^{s}\) is indeed \(\frac{1}{s!}\) can be found for example in [27]. We can now consider the alternative representation of divided differences. 
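Before turning to that representation, here is a minimal Python sketch (ours, not from the paper) of the recursive definition above; it also verifies the mean-value property \(f\left[x_{0},...,x_{n}\right]=f^{(n)}(\xi)/n!\) numerically for \(f=\exp\), where the node set and tolerance are arbitrary choices:

```python
import math

def divided_difference(xs, ys):
    """Divided difference [y_0, ..., y_n], computed via the recursion above."""
    if len(xs) == 1:
        return ys[0]
    left = divided_difference(xs[:-1], ys[:-1])    # [y_0, ..., y_{n-1}]
    right = divided_difference(xs[1:], ys[1:])     # [y_1, ..., y_n]
    return (right - left) / (xs[-1] - xs[0])

# Mean-value property with f = exp (so f^(n) = exp): [x_0,...,x_n]f = exp(xi)/n!
xs = [0.0, 0.1, 0.25, 0.4]
dd = divided_difference(xs, [math.exp(x) for x in xs])
xi = math.log(math.factorial(len(xs) - 1) * dd)    # solve exp(xi)/n! = dd
assert min(xs) < xi < max(xs)
```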
**Lemma A.2** (Hermite-Genocchi-Formula).: _For real numbers \(a,b\in\mathbb{R}\), a function \(f\in C^{k}([a,b];\mathbb{R})\) and pairwise distinct \(x_{0},...,x_{k}\in[a,b]\), the divided difference of \(f\) at the data points \(x_{0},...,x_{k}\) is given as_ \[\left[x_{0},...,x_{k}\right]f=\int_{\Sigma^{k}}f^{(k)}\left(x_{0}+\sum_{\ell=1}^{k}s_{\ell}\left(x_{\ell}-x_{0}\right)\right)ds.\] (A.1) Proof.: See [5, Theorem 3.3]. We need the following easy generalization of the mean-value theorem for integrals. **Lemma A.3**.: _Let \(D\subseteq\mathbb{R}^{s}\) be a connected and compact set with positive Lebesgue measure and furthermore \(f:D\rightarrow\mathbb{R}\) a continuous function. Then there exists \(\xi\in D\) such that_ \[f(\xi)=\frac{1}{\lambda(D)}\cdot\int_{D}f(x)dx.\] Proof.: Since \(D\) is compact and \(f\) continuous, there exist \(x_{\min}\in D\) and \(x_{\max}\in D\) satisfying \[f\left(x_{\min}\right)\leq f(x)\leq f\left(x_{\max}\right)\] for all \(x\in D\). Thus, one gets \[f\left(x_{\min}\right)\leq\frac{1}{\lambda(D)}\int_{D}f(x)dx\leq f\left(x_{\max}\right),\] so the claim follows using the fact that \(f(D)\subseteq\mathbb{R}\) is connected, i.e., an interval. We also get a convenient representation of divided differences if the data points are equidistant. **Lemma A.4**.: _Let \(f:\mathbb{R}\rightarrow\mathbb{R}\), \(x_{0}\in\mathbb{R}\) and \(h>0\). We consider the case of equidistant data points, meaning \(x_{j}:=x_{0}+jh\) for all \(j=1,...,k\). In this case, we have the formula_ \[\left[x_{0},...,x_{k}\right]f=\frac{1}{k!h^{k}}\cdot\sum_{r=0}^{k}(-1)^{k-r}\binom{k}{r}f\left(x_{r}\right).\] (A.2) Proof.: We prove the result via induction over the number \(j\) of considered data points, meaning the following: For all \(j\in\{0,...,k\}\) we have \[\left[x_{\ell},...,x_{\ell+j}\right]f=\frac{1}{j!h^{j}}\cdot\sum_{r=0}^{j}(-1)^{j-r}\binom{j}{r}f\left(x_{\ell+r}\right)\] for all \(\ell\in\{0,...,k\}\) such that \(\ell+j\leq k\). The case \(j=0\) is trivial. Therefore, we assume the claim to be true for a fixed \(j\in\{0,...,k-1\}\) and choose an arbitrary \(\ell\in\{0,...,k\}\) such that \(\ell+j+1\leq k\). We then get \[\left[x_{\ell},...,x_{\ell+j+1}\right]f=\frac{\left[x_{\ell+1},...,x_{\ell+j+1}\right]f-\left[x_{\ell},...,x_{\ell+j}\right]f}{x_{\ell+j+1}-x_{\ell}}=\frac{1}{j!h^{j}}\cdot\frac{\sum_{r=0}^{j}(-1)^{j-r}\binom{j}{r}\left(f\left(x_{\ell+r+1}\right)-f\left(x_{\ell+r}\right)\right)}{(j+1)h}=\frac{1}{(j+1)!h^{j+1}}\sum_{r=0}^{j}(-1)^{j-r}\binom{j}{r}\left(f\left(x_{\ell+r+1}\right)-f\left(x_{\ell+r}\right)\right).\] Using an index shift we deduce \[\sum_{r=0}^{j}(-1)^{j-r}\binom{j}{r}f\left(x_{\ell+r+1}\right)-\sum_{r=0}^{j}(-1)^{j-r}\binom{j}{r}f\left(x_{\ell+r}\right)=\sum_{r=1}^{j+1}(-1)^{j+1-r}\binom{j}{r-1}f\left(x_{\ell+r}\right)+\sum_{r=0}^{j}(-1)^{j+1-r}\binom{j}{r}f\left(x_{\ell+r}\right)=(-1)^{j+1}f\left(x_{\ell}\right)+\sum_{r=1}^{j}\left((-1)^{j+1-r}f\left(x_{\ell+r}\right)\left[\binom{j}{r-1}+\binom{j}{r}\right]\right)+f\left(x_{\ell+j+1}\right)=\sum_{r=0}^{j+1}(-1)^{j+1-r}\binom{j+1}{r}f\left(x_{\ell+r}\right),\] which yields the claim. The final result for divided differences is stated as follows: **Theorem A.5**.: _Let \(f:\mathbb{R}^{s}\to\mathbb{R}\) and \(k\in\mathbb{N}_{0},r>0\), such that \(f\big{|}_{(-r,r)^{s}}\in C^{k}\left((-r,r)^{s};\mathbb{R}\right)\). 
For \(\boldsymbol{p}\in\mathbb{N}_{0}^{s}\) with \(|\boldsymbol{p}|\leq k\) and \(h>0\) let_ \[f_{\boldsymbol{p},h}:=(2h)^{-|\boldsymbol{p}|}\sum_{0\leq\boldsymbol{r}\leq\boldsymbol{p}}(-1)^{|\boldsymbol{p}|-|\boldsymbol{r}|}\binom{\boldsymbol{p}}{\boldsymbol{r}}f\left(h(2\boldsymbol{r}-\boldsymbol{p})\right).\] _Let \(m:=\max_{j}\,\boldsymbol{p}_{j}\). Then, for \(0<h<\frac{r}{\max\{1,m\}}\) there exists \(\xi\in h[-m,m]^{s}\) satisfying_ \[f_{\boldsymbol{p},h}=\partial^{\boldsymbol{p}}f(\xi).\] Proof.: We may assume \(m>0\), since \(\boldsymbol{p}=0\) implies \(f_{\boldsymbol{p},h}=f(0)\) and hence, the claim follows with \(\xi=0\). We prove via induction over \(s\in\mathbb{N}\) that the formula \[f_{\boldsymbol{p},h}=\boldsymbol{p}!\int_{\Sigma^{\boldsymbol{p}_{s}}}\int_{\Sigma^{\boldsymbol{p}_{s-1}}}\cdots\int_{\Sigma^{\boldsymbol{p}_{1}}}\partial^{\boldsymbol{p}}f\left(-h\boldsymbol{p}_{1}+2h\sum_{\ell=1}^{\boldsymbol{p}_{1}}\ell\sigma_{\ell}^{(1)},...,-h\boldsymbol{p}_{s}+2h\sum_{\ell=1}^{\boldsymbol{p}_{s}}\ell\sigma_{\ell}^{(s)}\right)d\sigma^{(1)}\cdots d\sigma^{(s)}\] (A.3) holds for all \(\boldsymbol{p}\in\mathbb{N}_{0}^{s}\) with \(1\leq|\boldsymbol{p}|\leq k\) and all \(0<h<\frac{r}{m}\). The case \(s=1\) is exactly the Hermite-Genocchi-Formula (A.1) combined with (A.2) applied to the data points \(-hp,-hp+2h,...,hp-2h,hp\). By induction, assume that the claim holds for some \(s\in\mathbb{N}\). For a fixed point \(y\in(-r,r)\), let \[f_{y}:\quad(-r,r)^{s}\to\mathbb{R},\quad x\mapsto f(x,y).\] For \(\mathbf{p}\in\mathbb{N}_{0}^{s+1}\) with \(|\mathbf{p}|\leq k\) and \(\mathbf{p}^{\prime}:=(p_{1},...,p_{s})\) we define \[\Gamma:\quad(-r,r)\rightarrow\mathbb{R},\quad y\mapsto(f_{y})_{\mathbf{p}^{\prime},h}=(2h)^{-|\mathbf{p}^{\prime}|}\sum_{0\leq\mathbf{r}^{\prime}\leq\mathbf{p}^{\prime}}(-1)^{|\mathbf{p}^{\prime}|-|\mathbf{r}^{\prime}|}\binom{\mathbf{p}^{\prime}}{\mathbf{r}^{\prime}}f\left(h(2\mathbf{r}^{\prime}-\mathbf{p}^{\prime}),y\right).\] Using the induction hypothesis, we get \[\Gamma(y)=\mathbf{p}^{\prime}!\int\limits_{\Sigma^{\mathbf{p}_{s}}}\int\limits_{\Sigma^{\mathbf{p}_{s-1}}}\cdots\int\limits_{\Sigma^{\mathbf{p}_{1}}}\partial^{(\mathbf{p}^{\prime},0)}f\left(-h\mathbf{p}_{1}+2h\sum_{i=1}^{\mathbf{p}_{1}}i\sigma_{i}^{(1)},...,-h\mathbf{p}_{s}+2h\sum_{i=1}^{\mathbf{p}_{s}}i\sigma_{i}^{(s)},y\right)d\sigma^{(1)}\cdots d\sigma^{(s)}\] for all \(y\in(-r,r)\). 
Furthermore, we calculate \[\mathbf{p}_{s+1}!\cdot[-h\cdot\mathbf{p}_{s+1},-h\cdot\mathbf{p}_{s+1}+2h,...,h\cdot\mathbf{p}_{s+1}]\Gamma\overset{\text{(A.2)}}{=}(2h)^{-\mathbf{p}_{s+1}}\sum_{r^{\prime}=0}^{\mathbf{p}_{s+1}}(-1)^{\mathbf{p}_{s+1}-r^{\prime}}\binom{\mathbf{p}_{s+1}}{r^{\prime}}\Gamma\left(h\left(2r^{\prime}-\mathbf{p}_{s+1}\right)\right)=(2h)^{-|\mathbf{p}|}\sum_{0\leq\mathbf{r}\leq\mathbf{p}}(-1)^{|\mathbf{p}|-|\mathbf{r}|}\binom{\mathbf{p}}{\mathbf{r}}f\left(h(2\mathbf{r}-\mathbf{p})\right)=f_{\mathbf{p},h}.\] On the other hand, we get \[[-h\cdot\mathbf{p}_{s+1},-h\cdot\mathbf{p}_{s+1}+2h,...,h\cdot\mathbf{p}_{s+1}]\Gamma\overset{\text{(A.1)}}{=}\int_{\Sigma^{\mathbf{p}_{s+1}}}\Gamma^{(\mathbf{p}_{s+1})}\left(-h\mathbf{p}_{s+1}+2h\sum_{\ell=1}^{\mathbf{p}_{s+1}}\ell t_{\ell}\right)dt=\mathbf{p}^{\prime}!\int\limits_{\Sigma^{\mathbf{p}_{s+1}}}\cdots\int\limits_{\Sigma^{\mathbf{p}_{1}}}\partial^{\mathbf{p}}f\left(-h\mathbf{p}_{1}+2h\sum_{\ell=1}^{\mathbf{p}_{1}}\ell\sigma_{\ell}^{(1)},...,-h\mathbf{p}_{s+1}+2h\sum_{\ell=1}^{\mathbf{p}_{s+1}}\ell\sigma_{\ell}^{(s+1)}\right)d\sigma^{(1)}\cdots d\sigma^{(s+1)}.\] Changing the order of integration and derivative is possible, since we integrate on compact sets and only consider continuously differentiable functions. We have thus proven (A.3) using the principle of induction. The claim of the theorem then follows directly using the mean-value theorem for integrals and by the fact that all the simplices \(\Sigma^{\mathbf{p}_{\ell}}\) are compact and connected (in fact convex). ## Appendix B Prerequisites from Fourier Analysis This section is dedicated to reviewing some notations and results from Fourier Analysis. In the end, a quantitative result for the approximation of \(C^{k}\left([-1,1]^{s};\mathbb{R}\right)\)-functions using linear combinations of multivariate Chebyshev polynomials is derived. **Definition B.1**.: Let \(s\in\mathbb{N}\) and \(k\in\mathbb{N}_{0}\). We define \[C^{k}_{2\pi}\left(\mathbb{R}^{s};\mathbb{C}\right):=\left\{f\in C^{k}\left(\mathbb{R}^{s};\mathbb{C}\right):\ \forall\mathbf{p}\in\mathbb{Z}^{s}\ \forall x\in\mathbb{R}^{s}:\ f(x+2\pi\mathbf{p})=f(x)\right\}.\] For a function \(f\in C^{k}_{2\pi}\left(\mathbb{R}^{s};\mathbb{C}\right)\) we write \[\left\|f\right\|_{C^{k}\left([-\pi,\pi]^{s};\mathbb{C}\right)}:=\max_{|\mathbf{k}|\leq k}\left\|\partial^{\mathbf{k}}f\right\|_{L^{\infty}\left([-\pi,\pi]^{s};\mathbb{C}\right)}\text{ and }\left\|f\right\|_{L^{p}\left([-\pi,\pi]^{s};\mathbb{C}\right)}:=\left(\frac{1}{(2\pi)^{s}}\cdot\int_{[-\pi,\pi]^{s}}\left|f(x)\right|^{p}dx\right)^{1/p}\text{ for }p\in[1,\infty).\] **Definition B.2**.: For any \(s\in\mathbb{N}\) and \(\mathbf{k}\in\mathbb{Z}^{s}\) we write \[e_{\mathbf{k}}:\quad\mathbb{R}^{s}\to\mathbb{C},\quad e_{\mathbf{k}}(x)=e^{i\langle\mathbf{k},x\rangle}.\] For any \(f\in C_{2\pi}\left(\mathbb{R}^{s};\mathbb{C}\right)\) we define the \(\mathbf{k}\)-th Fourier coefficient of \(f\) to be \[\widehat{f}(\mathbf{k}):=\frac{1}{(2\pi)^{s}}\int_{[-\pi,\pi]^{s}}f(x)\overline{e_{\mathbf{k}}(x)}dx.\] **Definition B.3**.: For two functions \(f,g\in C_{2\pi}\left(\mathbb{R}^{s};\mathbb{C}\right)\) we define their convolution as \[f*g:\quad\mathbb{R}^{s}\to\mathbb{C},\quad(f*g)(x):=\frac{1}{(2\pi)^{s}}\int_{[-\pi,\pi]^{s}}f(t)g(x-t)dt.\] **Definition B.4**.: In the following we define several so-called kernels. Let \(m\in\mathbb{N}_{0}\) be arbitrary. 1. 
The \(m\)-th one-dimensional _Dirichlet-kernel_ is defined as \[D_{m}:=\sum_{h=-m}^{m}e_{h}.\] 2. The \(m\)-th one-dimensional _Fejer-kernel_ is defined as \[F_{m}:=\frac{1}{m}\sum_{h=0}^{m-1}D_{h}.\] 3. The \(m\)-th one-dimensional _de-la-Vallee-Poussin-kernel_ is defined as \[V_{m}:=(1+e_{m}+e_{-m})\cdot F_{m}.\] 4. Let \(s\in\mathbb{N}\). We extend the above definitions to dimension \(s\) by letting \[D_{m}^{s}\left(x_{1},...,x_{s}\right) :=\prod_{p=1}^{s}D_{m}\left(x_{p}\right),\] \[F_{m}^{s}\left(x_{1},...,x_{s}\right) :=\prod_{p=1}^{s}F_{m}\left(x_{p}\right),\] \[V_{m}^{s}\left(x_{1},...,x_{s}\right) :=\prod_{p=1}^{s}V_{m}\left(x_{p}\right).\] **Proposition B.5**.: _Let \(m,s\in\mathbb{N}\). Then one has \(\|V_{m}^{s}\|_{L^{1}\left([-\pi,\pi]^{s};\mathbb{C}\right)}\leq 3^{s}\)._ Proof.: From [23, Exercise 1.3 and Lemma 1.4] it follows \(\|F_{m}\|_{L^{1}\left([-\pi,\pi];\mathbb{C}\right)}=1\) and hence using the triangle inequality \(\|V_{m}\|_{L^{1}\left([-\pi,\pi];\mathbb{C}\right)}\leq 3\). The claim then follows using Tonelli's theorem. **Definition B.6**.: For any \(s\in\mathbb{N},m\in\mathbb{N}_{0}\) we call a function of the form \[\mathbb{R}^{s}\to\mathbb{C},\quad x\mapsto\sum_{-m\leq\mathbf{k}\leq m}a_{ \mathbf{k}}e^{i\langle\mathbf{k},x\rangle}\] with coefficients \(a_{\mathbf{k}}\in\mathbb{C}\) a _trigonometric polynomial of coordinatewise degree at most \(m\)_ and denote the space of all those functions with \(\mathbb{H}_{m}^{s}\). Here, we again consider the sum over all \(\mathbf{k}\in\mathbb{Z}^{s}\) with \(-m\leq\mathbf{k}_{j}\leq m\) for all \(j\in\{1,...,s\}\). We then write \[E_{m}^{s}(f):=\min_{T\in\mathbb{H}_{m}^{s}}\|f-T\|_{L^{\infty}\left(\mathbb{R }^{s}\right)}\] for any function \(f\in C_{2\pi}\left(\mathbb{R}^{s};\mathbb{C}\right)\). **Proposition B.7**.: _Let \(s,m\in\mathbb{N}\) and \(k\in\mathbb{N}_{0}\). The map_ \[v_{m}:\quad C_{2\pi}\left(\mathbb{R}^{s};\mathbb{C}\right)\to\mathbb{H}_{2m-1}^{ s},\quad f\mapsto f*V_{m}^{s}\] _satisfies_ \[v_{m}(T)=T\] (B.1) _for all \(T\in\mathbb{H}_{m}^{s}\). Furthermore, there is a constant \(c=c(s)>0\) (independent of \(m\)), such that_ \[\left\|v_{m}(f)\right\|_{C^{k}(\mathbb{R}^{s};\mathbb{C})}\leq c \cdot\left\|f\right\|_{C^{k}(\mathbb{R}^{s};\mathbb{C})}\ \forall f\in C_{2\pi}^{k}\left(\mathbb{R}^{s};\mathbb{C}\right),\] \[\left\|v_{m}(f)\right\|_{L^{\infty}(\mathbb{R}^{s};\mathbb{C})} \leq c\cdot\left\|f\right\|_{L^{\infty}(\mathbb{R}^{s};\mathbb{C})}\ \forall f\in C_{2\pi}\left(\mathbb{R}^{s};\mathbb{C}\right).\] (B.2) Proof.: The map is well-defined since \(V_{m}\) is a trigonometric polynomial of coordinatewise degree at most \(2m-1\). The operator is bounded with norm at most \(c=3^{s}\) as follows from Young's inequality [23, Lemma 1.1 (ii)], Proposition B.5 and the fact that one has for all \(\mathbf{k}\in\mathbb{N}_{0}^{s}\) with \(\left|\mathbf{k}\right|\leq k\) the identity \[\partial^{\mathbf{k}}\left(f*V_{m}^{s}\right)=\left(\partial^{\mathbf{k}}f \right)*V_{m}^{s}\quad\text{for }f\in C_{2\pi}^{k}(\mathbb{R}^{s};\mathbb{C}).\] It remains to show that \(v_{m}\) is the identity on \(\mathbb{H}_{m}^{s}\). We first prove that \[e_{k}*V_{m}=e_{k}\] (B.3) holds for all \(k\in\mathbb{Z}\) with \(\left|k\right|\leq m\). 
First note that \[e_{k}*V_{m}=e_{k}*F_{m}+e_{k}*\left(e_{m}\cdot F_{m}\right)+e_{k}*\left(e_{-m}\cdot F_{m}\right).\] We then compute \[e_{k}*F_{m}=\frac{1}{m}\sum_{\ell=0}^{m-1}D_{\ell}*e_{k}=\frac{1}{m}\sum_{\ell=0}^{m-1}\sum_{h=-\ell}^{\ell}\underbrace{e_{h}*e_{k}}_{=\delta_{k,h}\cdot e_{k}}=\frac{1}{m}\sum_{\ell=\left|k\right|}^{m-1}e_{k}=\frac{m-\left|k\right|}{m}\cdot e_{k}.\] Similarly, we get \[e_{k}*\left(e_{m}\cdot F_{m}\right)=\frac{1}{m}\sum_{\ell=0}^{m-1}\left(e_{m}D_{\ell}\right)*e_{k}=\frac{1}{m}\sum_{\ell=0}^{m-1}\sum_{h=-\ell}^{\ell}\underbrace{e_{h+m}*e_{k}}_{=\delta_{k,h+m}\cdot e_{k}}=\frac{1}{m}\sum_{\begin{subarray}{c}0\leq\ell\leq m-1\\ \ell\geq m-k\end{subarray}}e_{k}=\frac{\max\{k,0\}}{m}\cdot e_{k},\] and analogously \(e_{k}*\left(e_{-m}\cdot F_{m}\right)=\frac{\max\{-k,0\}}{m}\cdot e_{k}\). Adding the three terms and using \(\frac{m-|k|}{m}+\frac{\max\{k,0\}}{m}+\frac{\max\{-k,0\}}{m}=1\) proves (B.3). Since every \(T\in\mathbb{H}_{m}^{s}\) is a linear combination of the functions \(e_{\mathbf{k}}\) with \(-m\leq\mathbf{k}\leq m\), and since these factor as \(e_{\mathbf{k}}(x)=\prod_{p=1}^{s}e_{\mathbf{k}_{p}}(x_{p})\), the identity (B.1) follows from (B.3) together with the product structure of \(V_{m}^{s}\) and Fubini's theorem. **Proposition B.8**.: _Let \(s\in\mathbb{N}\), \(k\in\mathbb{N}_{0}\). Then there is a constant \(c=c(s,k)>0\), such that_ \[E_{m}^{s}(f)\leq\frac{c}{m^{k}}\cdot\|f\|_{C^{k}\left([-\pi,\pi]^{s};\mathbb{R}\right)}\] _for all \(m\in\mathbb{N}\) and \(f\in C_{2\pi}^{k}\left(\mathbb{R}^{s};\mathbb{C}\right)\)._ Proof.: We apply [19, p. 87, Theorem 6] with \(n_{i}=m\) and \(p_{i}=k\) which yields the existence of a constant \(c_{1}=c_{1}(s,k)>0\), such that \[E_{m}^{s}(f)\leq c_{1}\cdot\sum_{\ell=1}^{s}\frac{1}{m^{k}}\cdot\omega_{\ell}\left(\frac{1}{m}\right)\] for all \(m\in\mathbb{N}\) and \(f\in C_{2\pi}^{k}(\mathbb{R}^{s};\mathbb{R})\), where \(\omega_{\ell}\) denotes the modulus of continuity of \(\frac{\partial^{k}f}{\partial x_{\ell}^{k}}\); here we have the trivial bound \[\omega_{\ell}\left(\frac{1}{m}\right)\leq 2\cdot\|f\|_{C^{k}\left([-\pi,\pi]^{s};\mathbb{R}\right)}\,.\] Hence, we get \[E_{m}^{s}(f)\leq c_{1}\cdot s\cdot 2\cdot\|f\|_{C^{k}\left([-\pi,\pi]^{s};\mathbb{R}\right)}\,\frac{1}{m^{k}},\] so the claim follows by choosing \(c:=2s\cdot c_{1}\). **Theorem B.9**.: _Let \(s\in\mathbb{N}\). Then there is a constant \(c=c(s)>0\), such that the operator \(v_{m}\) from Proposition B.7 satisfies_ \[\left\|f-v_{m}(f)\right\|_{L^{\infty}\left(\mathbb{R}^{s}\right)}\leq c\cdot E_{m}^{s}(f)\] _for any \(m\in\mathbb{N}\) and \(f\in C_{2\pi}\left(\mathbb{R}^{s};\mathbb{C}\right)\)._ Proof.: For any \(T\in\mathbb{H}_{m}^{s}\) one has \[\left\|f-v_{m}(f)\right\|_{L^{\infty}\left(\mathbb{R}^{s}\right)}\overset{\text{(B.1)}}{\leq}\left\|f-T\right\|_{L^{\infty}\left(\mathbb{R}^{s}\right)}+\left\|v_{m}(T)-v_{m}(f)\right\|_{L^{\infty}\left(\mathbb{R}^{s}\right)}\overset{\text{(B.2)}}{\leq}\left(c+1\right)\left\|f-T\right\|_{L^{\infty}\left(\mathbb{R}^{s}\right)}.\qed\] By combining Proposition B.8 and Theorem B.9, we get the following bound. **Corollary B.10**.: _Let \(s\in\mathbb{N},k\in\mathbb{N}_{0}\)._ 
Then there is a constant \(c=c(s,k)>0\), such that_ \[\left\|f-v_{m}(f)\right\|_{L^{\infty}\left(\mathbb{R}^{s}\right)}\leq\frac{c} {m^{k}}\cdot\|f\|_{C^{k}\left(\mathbb{R}^{s};\mathbb{R}\right)}\] _for every \(m\in\mathbb{N}\) and \(f\in C_{2\pi}^{k}\left(\mathbb{R}^{s};\mathbb{R}\right)\)._ **Lemma B.11**.: _Let \(k\in\mathbb{N}_{0}\) and \(s\in\mathbb{N}\). For any function \(f\in C^{k}\left([-1,1]^{s};\mathbb{C}\right)\) we define the corresponding periodic function via_ \[f^{*}:\quad\mathbb{R}^{s}\to\mathbb{C},\quad f^{*}\left(x_{1},...,x_{s}\right) =f(\cos\left(x_{1}\right),...,\cos\left(x_{s}\right))\] _and note \(f^{*}\in C_{2\pi}^{k}\left(\mathbb{R}^{s};\mathbb{C}\right)\). The map_ \[C^{k}\left([-1,1]^{s};\mathbb{C}\right)\to C_{2\pi}^{k}\left(\mathbb{R}^{s}; \mathbb{C}\right),\quad f\mapsto f^{*}\] _is a continuous linear operator with respect to the \(C^{k}\)-norms on \(C^{k}\left([-1,1]^{s};\mathbb{C}\right)\) and \(C_{2\pi}^{k}\left(\mathbb{R}^{s};\mathbb{C}\right)\)._ Proof.: The map is well-defined since \(\cos\) is a smooth function and \(2\pi\)-periodic. The linearity of the operator follows directly so it remains to show its continuity. The goal is to apply the closed graph theorem [13, Theorem 5.12]. By definition, we have the equality \(\|f\|_{L^{\infty}\left([-1,1]^{s};\mathbb{C}\right)}=\|f^{*}\|_{L^{\infty}\left( [-\pi,\pi]^{s};\mathbb{C}\right)}\). Let then \(\left(f_{n}\right)_{n\in\mathbb{N}}\) be a sequence of functions \(f_{n}\in C^{k}\left([-1,1]^{s};\mathbb{C}\right)\) and \(g^{*}\in C_{2\pi}^{k}\left(\mathbb{R}^{s};\mathbb{C}\right)\) such that \(f_{n}\to f\) in \(C^{k}\left([-1,1]^{s};\mathbb{C}\right)\) and \(f_{n}^{*}\to g^{*}\) in \(C_{2\pi}^{k}\left(\mathbb{R}^{s};\mathbb{C}\right)\). We then have \[\|f^{*}-g^{*}\|_{L^{\infty}\left([-\pi,\pi]^{s}\right)} \leq\|f^{*}-f_{n}^{*}\|_{L^{\infty}\left([-\pi,\pi]^{s}\right)}+\| f_{n}^{*}-g^{*}\|_{L^{\infty}\left([-\pi,\pi]^{s}\right)}\] \[\leq\|f-f_{n}\|_{L^{\infty}\left([-1,1]^{s};\mathbb{C}\right)}+\| f_{n}^{*}-g^{*}\|_{L^{\infty}\left([-\pi,\pi]^{s}\right)}\] \[\leq\|f-f_{n}\|_{C^{k}\left([-1,1]^{s};\mathbb{C}\right)}+\|f_{n}^ {*}-g^{*}\|_{C^{k}\left(\mathbb{R}^{s};\mathbb{C}\right)}\to 0\,\,(n\to\infty).\] It follows \(f^{*}=g^{*}\) and the closed graph theorem yields the desired continuity. The following identity is a generalization of the well-known product-to-sum formula for \(\cos\). **Lemma B.12**.: _Let \(s\in\mathbb{N}\). Then it holds for any \(x\in\mathbb{R}^{s}\) that_ \[\prod_{j=1}^{s}\cos(x_{j})=\frac{1}{2^{s}}\sum_{\sigma\in\{-1,1\}^{s}}\cos( \langle\sigma,x\rangle).\] Proof.: This is an inductive generalization of the product-to-sum formula \[2\cos(x)\cos(y)=\cos(x-y)+\cos(x+y)\] (B.4) for \(x,y\in\mathbb{R}\), which can be found for instance in [1, p. 4.3.32]. The case \(s=1\) follows since \(\cos\) is an even function. Assume that the claim holds for a fixed \(s\in\mathbb{N}\) and take \(x\in\mathbb{R}^{s+1}\). Writing \(x^{\prime}=(x_{1},...,x_{s})\) we derive \[\prod_{j=1}^{s+1}\cos(x_{j}) =\left(\frac{1}{2^{s}}\sum_{\sigma\in\{-1,1\}^{s}}\cos(\langle \sigma,x^{\prime}\rangle)\right)\cdot\cos(x_{s+1})\] \[=\frac{1}{2^{s}}\sum_{\sigma\in\{-1,1\}^{s}}\cos(\langle\sigma,x ^{\prime}\rangle)\cos(x_{s+1})\] \[\stackrel{{\eqref{eq:1}}}{{=}}\frac{1}{2^{s+1}} \sum_{\sigma\in\{-1,1\}^{s}}\left[\cos(\langle\sigma,x^{\prime}\rangle+x_{s+1 })+\cos(\langle\sigma,x^{\prime}\rangle-x_{s+1})\right]\] \[=\frac{1}{2^{s+1}}\sum_{\sigma\in\{-1,1\}^{s+1}}\cos(\langle \sigma,x\rangle),\] as was to be shown. 
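As a quick numerical sanity check of Lemma B.12 (a Python sketch of ours, not part of the paper; the dimension, sample count and tolerance are arbitrary choices), one can compare both sides of the identity at random points:

```python
import itertools, math, random

def cos_product(x):
    """Left-hand side: prod_j cos(x_j)."""
    out = 1.0
    for t in x:
        out *= math.cos(t)
    return out

def signed_cos_average(x):
    """Right-hand side: 2^{-s} * sum over sigma in {-1,1}^s of cos(<sigma, x>)."""
    s = len(x)
    total = sum(math.cos(sum(sg * t for sg, t in zip(sigma, x)))
                for sigma in itertools.product((-1, 1), repeat=s))
    return total / 2 ** s

random.seed(0)
for _ in range(100):
    x = [random.uniform(-math.pi, math.pi) for _ in range(4)]
    assert abs(cos_product(x) - signed_cos_average(x)) < 1e-12
```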
**Proposition B.13**.: _For any \(f\in C^{k}\left([-1,1]^{s};\mathbb{C}\right)\) and \(m\in\mathbb{N}\) the de-la-Vallee-Poussin operator given as \(f\mapsto v_{m}\left(f^{*}\right)\) has a representation_ \[v_{m}\left(f^{*}\right)(x_{1},...,x_{s})=\sum_{\begin{subarray}{c}\mathbf{k}\in\mathbb{N}_{0}^{s}\\ \mathbf{k}\leq 2m-1\end{subarray}}\mathcal{V}_{\mathbf{k}}^{m}(f)\prod_{j=1}^{s}\cos\left(\mathbf{k}_{j}x_{j}\right)\] _for continuous linear functionals_ \[\mathcal{V}_{\mathbf{k}}^{m}:\ C^{k}\left([-1,1]^{s};\mathbb{C}\right)\to\mathbb{C}.\] _Furthermore, if \(f\in C^{k}([-1,1]^{s};\mathbb{R})\) we have \(\mathcal{V}_{\mathbf{k}}^{m}(f)\in\mathbb{R}\) for every \(\mathbf{k}\in\mathbb{N}_{0}^{s}\) with \(\mathbf{k}\leq 2m-1\)._ Proof.: First of all, it is easy to see that \(v_{m}\left(f^{*}\right)\) is even in every variable, which follows directly from the fact that \(f^{*}\) and \(V_{m}^{s}\) are both even in each variable. Furthermore, if we write \[V_{m}^{s}=\sum_{\begin{subarray}{c}\mathbf{k}\in\mathbb{Z}^{s}\\ -(2m-1)\leq\mathbf{k}\leq 2m-1\end{subarray}}a_{\mathbf{k}}^{m}e_{\mathbf{k}}\] with appropriately chosen coefficients \(a_{\mathbf{k}}^{m}\in\mathbb{R}\), we easily see \[v_{m}\left(f^{*}\right)=\sum_{\begin{subarray}{c}\mathbf{k}\in\mathbb{Z}^{s}\\ -(2m-1)\leq\mathbf{k}\leq 2m-1\end{subarray}}a_{\mathbf{k}}^{m}\widehat{f^{*}}(\mathbf{k})e_{\mathbf{k}}.\] Using Euler's identity and the fact that \(v_{m}\left(f^{*}\right)\) is an even function, we get the representation \[v_{m}\left(f^{*}\right)(x)=\sum_{\begin{subarray}{c}\mathbf{k}\in\mathbb{Z}^{s}\\ -(2m-1)\leq\mathbf{k}\leq 2m-1\end{subarray}}a_{\mathbf{k}}^{m}\widehat{f^{*}}(\mathbf{k})\cos(\langle\mathbf{k},x\rangle)\] for any \(x\in\mathbb{R}^{s}\). Using \(\cdot\) to denote the componentwise product of two vectors of the same size, we see since \(v_{m}\left(f^{*}\right)\) is even in every variable that \[v_{m}\left(f^{*}\right)\left(x\right)=\frac{1}{2^{s}}\cdot\sum_{\sigma\in\{-1,1\}^{s}}v_{m}\left(f^{*}\right)\left(\sigma\cdot x\right)=\sum_{\begin{subarray}{c}\mathbf{k}\in\mathbb{Z}^{s}\\ -(2m-1)\leq\mathbf{k}\leq 2m-1\end{subarray}}\left(a_{\mathbf{k}}^{m}\widehat{f^{*}}(\mathbf{k})\frac{1}{2^{s}}\sum_{\sigma\in\{-1,1\}^{s}}\cos(\left\langle\mathbf{k},\sigma\cdot x\right\rangle)\right)\stackrel{{\text{Lemma B.12}}}{{=}}\sum_{\begin{subarray}{c}\mathbf{k}\in\mathbb{Z}^{s}\\ -(2m-1)\leq\mathbf{k}\leq 2m-1\end{subarray}}a_{\mathbf{k}}^{m}\widehat{f^{*}}(\mathbf{k})\prod_{j=1}^{s}\cos\left(\mathbf{k}_{j}x_{j}\right)=\sum_{\begin{subarray}{c}\mathbf{k}\in\mathbb{N}_{0}^{s}\\ \mathbf{k}\leq 2m-1\end{subarray}}2^{\left\|\mathbf{k}\right\|_{0}}a_{\mathbf{k}}^{m}\widehat{f^{*}}(\mathbf{k})\prod_{j=1}^{s}\cos\left(\mathbf{k}_{j}x_{j}\right)\] with \[\left\|\mathbf{k}\right\|_{0}:=\#\big{\{}j\in\{1,...,s\}:\ \mathbf{k}_{j}\neq 0\big{\}}.\] In the last step we again used that \(\cos\) is an even function and that \[\widehat{f^{*}}(\mathbf{k})=\widehat{f^{*}}(\sigma\cdot\mathbf{k})\] for all \(\sigma\in\{-1,1\}^{s}\), which also follows easily since \(f^{*}\) is even in every component. Letting \[\mathcal{V}_{\mathbf{k}}^{m}(f):=2^{\left\|\mathbf{k}\right\|_{0}}a_{\mathbf{k}}^{m}\widehat{f^{*}}(\mathbf{k})\] we have the desired form. The fact that \(\mathcal{V}_{\mathbf{k}}^{m}\) is a continuous linear functional on \(C^{k}\left([-1,1]^{s};\mathbb{C}\right)\) follows directly since \(f\mapsto\widehat{f^{*}}(\mathbf{k})\) is a continuous linear functional for every \(\mathbf{k}\). 
If \(f\) is real-valued, so is \(\widehat{f^{*}}(\mathbf{k})\) for every \(\mathbf{k}\in\mathbb{N}_{0}^{s}\) with \(\mathbf{k}\leq 2m-1\), since \(f^{*}\) is real-valued and even in every component. **Lemma B.14**.: _Let \(s\in\mathbb{N}\). There is a constant \(c=c(s)>0\), such that the inequality_ \[\sum_{\begin{subarray}{c}\mathbf{k}\in\mathbb{N}_{0}^{s}\\ \mathbf{k}\leq 2m-1\end{subarray}}\left|\mathcal{V}_{\mathbf{k}}^{m}(f)\right|\leq c\cdot m^{s/2}\cdot\left\|f\right\|_{L^{\infty}\left([-1,1]^{s};\mathbb{C}\right)}\] _holds for any \(m\in\mathbb{N}\) and \(f\in C\left([-1,1]^{s};\mathbb{C}\right)\), where \(\mathcal{V}_{\mathbf{k}}^{m}\) is as in Proposition B.13._ Proof.: Let \(f\in C\left([-1,1]^{s};\mathbb{C}\right)\) and \(m\in\mathbb{N}\). For any multi-index \(\boldsymbol{\ell}\in\mathbb{N}_{0}^{s}\) one has \[\widehat{v_{m}\left(f^{*}\right)}(\boldsymbol{\ell})=\sum_{\begin{subarray}{c}\mathbf{k}\in\mathbb{N}_{0}^{s}\\ \mathbf{k}\leq 2m-1\end{subarray}}\mathcal{V}_{\mathbf{k}}^{m}(f)\widehat{g_{\mathbf{k}}}(\boldsymbol{\ell}),\] with \[g_{\mathbf{k}}:\quad\mathbb{R}^{s}\rightarrow\mathbb{R},\quad\left(x_{1},...,x_{s}\right)\mapsto\prod_{j=1}^{s}\cos\left(\mathbf{k}_{j}x_{j}\right).\] Now, a calculation using Fubini's theorem and using \(g_{k}=\frac{1}{2}\left(e_{k}+e_{-k}\right)\) for any number \(k\in\mathbb{N}_{0}\) shows \[\widehat{g_{\mathbf{k}}}(\boldsymbol{\ell})=\begin{cases}\frac{1}{2^{\left\|\mathbf{k}\right\|_{0}}},&\mathbf{k}=\boldsymbol{\ell},\\ 0,&\text{otherwise}\end{cases}\quad\text{ for }\mathbf{k},\boldsymbol{\ell}\in\mathbb{N}_{0}^{s}.\] Therefore, we have the bound \(|\mathcal{V}_{\boldsymbol{\ell}}^{m}(f)|\leq 2^{s}\cdot\left|\widehat{v_{m}\left(f^{*}\right)}(\boldsymbol{\ell})\right|\) for \(\boldsymbol{\ell}\in\mathbb{N}_{0}^{s}\) with \(|\boldsymbol{\ell}|\leq 2m-1\). Using Cauchy-Schwarz' and Parseval's inequality one sees \[\sum_{\begin{subarray}{c}\mathbf{k}\in\mathbb{N}_{0}^{s}\\ \mathbf{k}\leq 2m-1\end{subarray}}|\mathcal{V}_{\mathbf{k}}^{m}(f)|\leq 2^{s}\cdot\sum_{\begin{subarray}{c}\mathbf{k}\in\mathbb{N}_{0}^{s}\\ \mathbf{k}\leq 2m-1\end{subarray}}\left|\widehat{v_{m}\left(f^{*}\right)}(\mathbf{k})\right|\overset{\mathrm{CS}}{\leq}2^{s}\cdot(2m)^{s/2}\cdot\left(\sum_{\begin{subarray}{c}\mathbf{k}\in\mathbb{N}_{0}^{s}\\ \mathbf{k}\leq 2m-1\end{subarray}}\left|\widehat{v_{m}\left(f^{*}\right)}(\mathbf{k})\right|^{2}\right)^{1/2}\overset{\mathrm{Parseval}}{\leq}2^{s}\cdot 2^{s/2}\cdot m^{s/2}\cdot\|v_{m}\left(f^{*}\right)\|_{L^{2}\left(\left[-\pi,\pi\right]^{s};\mathbb{C}\right)}\leq\underbrace{2^{s}\cdot 2^{s/2}}_{=:c_{1}(s)}\cdot m^{s/2}\cdot\|v_{m}\left(f^{*}\right)\|_{L^{\infty}\left(\left[-\pi,\pi\right]^{s};\mathbb{C}\right)}\,.\] Using Proposition B.7 we get \[\sum_{\begin{subarray}{c}\mathbf{k}\in\mathbb{N}_{0}^{s}\\ \mathbf{k}\leq 2m-1\end{subarray}}|\mathcal{V}_{\mathbf{k}}^{m}(f)|\leq c_{1}(s)\cdot c_{2}(s)\cdot m^{s/2}\cdot\|f^{*}\|_{L^{\infty}\left(\left[-\pi,\pi\right]^{s}\right)}=c(s)\cdot m^{s/2}\cdot\|f\|_{L^{\infty}\left(\left[-1,1\right]^{s};\mathbb{C}\right)}\,,\] as claimed. For any natural number \(\ell\in\mathbb{N}_{0}\), we denote by \(T_{\ell}\) the \(\ell\)-th _Chebyshev polynomial_, satisfying \[T_{\ell}\left(\cos(x)\right)=\cos(\ell x),\quad x\in\mathbb{R}.\] For a multi-index \(\mathbf{k}\in\mathbb{N}_{0}^{s}\), we define \[T_{\mathbf{k}}(x):=\prod_{j=1}^{s}T_{\mathbf{k}_{j}}\left(x_{j}\right),\quad x\in[-1,1]^{s}.\] We then get the following approximation result. 
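Before stating it, a quick numerical check (a Python sketch of ours, not part of the paper) of the defining identity \(T_{\ell}(\cos(x))=\cos(\ell x)\), with \(T_{\ell}\) generated by the standard three-term recurrence; the degree range and tolerance below are arbitrary choices:

```python
import math, random

def chebyshev_T(ell, x):
    """T_ell(x) via T_0 = 1, T_1 = x, T_{n+1}(x) = 2x*T_n(x) - T_{n-1}(x)."""
    if ell == 0:
        return 1.0
    t_prev, t_cur = 1.0, x
    for _ in range(ell - 1):
        t_prev, t_cur = t_cur, 2 * x * t_cur - t_prev
    return t_cur

random.seed(1)
for _ in range(200):
    ell = random.randrange(0, 12)
    theta = random.uniform(-math.pi, math.pi)
    # defining identity used in the text: T_ell(cos(theta)) = cos(ell * theta)
    assert abs(chebyshev_T(ell, math.cos(theta)) - math.cos(ell * theta)) < 1e-9
```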
**Theorem B.15**.: _Let \(s,m\in\mathbb{N}\), \(k\in\mathbb{N}_{0}\). Then there is a constant \(c=c(s,k)>0\) with the following property: For any \(f\in C^{k}\left([-1,1]^{s};\mathbb{R}\right)\) the polynomial \(P\) defined as_ \[P(x):=\sum_{\begin{subarray}{c}\mathbf{k}\in\mathbb{N}_{0}^{s}\\ \mathbf{k}\leq 2m-1\end{subarray}}\mathcal{V}_{\mathbf{k}}^{m}(f)\cdot T_{\mathbf{k}}(x)\] _with \(\mathcal{V}_{\mathbf{k}}^{m}\) as in Proposition B.13 satisfies_ \[\left\|f-P\right\|_{L^{\infty}\left([-1,1]^{s};\mathbb{R}\right)}\leq\frac{c}{m^{k}}\cdot\left\|f\right\|_{C^{k}\left([-1,1]^{s};\mathbb{R}\right)}.\] _Here, the maps_ \[C\left([-1,1]^{s};\mathbb{R}\right)\to\mathbb{R},\quad f\mapsto\mathcal{V}_{\mathbf{k}}^{m}(f)\] _are continuous and linear functionals with respect to the \(L^{\infty}\)-norm. Furthermore, there is a constant \(\widetilde{c}=\widetilde{c}(s)>0\), such that the inequality_ \[\sum_{\begin{subarray}{c}\mathbf{k}\in\mathbb{N}_{0}^{s}\\ \mathbf{k}\leq 2m-1\end{subarray}}|\mathcal{V}_{\mathbf{k}}^{m}(f)|\leq\widetilde{c}\cdot m^{s/2}\cdot\left\|f\right\|_{L^{\infty}\left([-1,1]^{s}\right)}\] _holds for any \(f\in C\left([-1,1]^{s};\mathbb{R}\right)\)._ Proof.: We choose the constant \(c_{0}=c_{0}(s,k)\) according to Corollary B.10. Let \(f\in C^{k}\left([-1,1]^{s};\mathbb{R}\right)\) be arbitrary. Then we define the corresponding function \(f^{*}\in C_{2\pi}^{k}\left(\mathbb{R}^{s};\mathbb{R}\right)\) as above. Let \(P\) be defined as in the statement of the theorem. Then the property \[P^{*}(x)=v_{m}\left(f^{*}\right)(x)\] is satisfied, where \(P^{*}\) is the corresponding function to \(P\) defined similarly to \(f^{*}\). Overall we get the bound \[\left\|f-P\right\|_{L^{\infty}([-1,1]^{s};\mathbb{R})}=\left\|f^{*}-P^{*}\right\|_{L^{\infty}([-\pi,\pi]^{s};\mathbb{R})}\stackrel{{\text{Cor. B.10}}}{{\leq}}\frac{c_{0}}{m^{k}}\cdot\left\|f^{*}\right\|_{C^{k}(\mathbb{R}^{s};\mathbb{R})}.\] The first claim then follows using the continuity of the map \(f\mapsto f^{*}\) as proven in Lemma B.11. The second part of the theorem has already been proven in Lemma B.14. ## Appendix C Prerequisites for the optimality result In this section we prove a very general lower bound for the approximation of functions in \(C^{k}([-1,1]^{s};\mathbb{R})\) using a subset of \(C([-1,1]^{s};\mathbb{R})\) that can be parametrized using a certain number of parameters. Precisely, we prove a lower bound of \(m^{-k/s}\), where \(m\) is the number of parameters that are used, provided that the selection of the parameters, i.e., the map that maps a function in \(C^{k}([-1,1]^{s};\mathbb{R})\) to the parameters of the approximating function, is continuous with respect to some norm on \(C^{k}([-1,1]^{s};\mathbb{R})\). The proofs are in fact almost identical to what is done in [10]. However, we decided to include a detailed proof in this paper, since [10] considers Sobolev functions and not \(C^{k}\)-functions and since the continuity assumption in [10] is not completely clear. **Proposition C.1** ([10, Theorem 3.1]).: _Let \((X,\|\cdot\|_{X})\) be a normed space, \(\varnothing\neq K\subseteq X\) a subset and \(V\subseteq X\) a linear, not necessarily closed subspace of \(X\) containing \(K\). Let \(m\in\mathbb{N}\), \(\overline{a}:K\to\mathbb{R}^{m}\) be a map which is continuous with respect to some norm \(\|\cdot\|_{V}\) on \(V\) and \(M_{m}:\mathbb{R}^{m}\to X\) some arbitrary map. 
Let_ \[b_{m}(K)_{X}:=\sup_{X_{m+1}}\sup\left\{\varrho\geq 0:\ U_{\varrho}(X_{m+1})\subseteq K\right\},\] _where the first supremum is taken over all \((m+1)\)-dimensional linear subspaces \(X_{m+1}\) of \(X\) and_ \[U_{\varrho}(X_{m+1}):=\{y\in X_{m+1}:\ \|y\|_{X}\leq\varrho\}.\] _Then it holds_ \[\sup_{x\in K}\|x-M_{m}(\overline{a}(x))\|_{X}\geq b_{m}(K)_{X}.\] Proof.: The claim is trivial if \(b_{m}(K)_{X}=0\). Thus, assume \(b_{m}(K)_{X}>0\). Let \(0<\varrho\leq b_{m}(K)_{X}\) be any number such that there exists an \((m+1)\)-dimensional subspace \(X_{m+1}\) of \(X\) with \(U_{\varrho}(X_{m+1})\subseteq K\). It follows \(U_{\varrho}(X_{m+1})\subseteq V\), hence \(X_{m+1}\subseteq V\), so \(\|\cdot\|_{V}\) defines a norm on \(X_{m+1}\). Thus, the restriction of \(\overline{a}\) to \(\partial U_{\varrho}(X_{m+1})\) is a continuous mapping to \(\mathbb{R}^{m}\) with respect to \(\|\cdot\|_{V}\). Since all norms are equivalent on the finite-dimensional space \(X_{m+1}\), the Borsuk-Ulam-Theorem [9, Corollary 4.2] yields the existence of a point \(x_{0}\in\partial U_{\varrho}(X_{m+1})\) with \(\overline{a}(x_{0})=\overline{a}(-x_{0})\), and hence also \(M_{m}(\overline{a}(x_{0}))=M_{m}(\overline{a}(-x_{0}))\). We then see \[2\varrho=2\|x_{0}\|_{X}=\left\|\left(x_{0}-M_{m}(\overline{a}(x_{0}))\right)-\left(-x_{0}-M_{m}(\overline{a}(-x_{0}))\right)\right\|_{X}\leq\|x_{0}-M_{m}(\overline{a}(x_{0}))\|_{X}+\|-x_{0}-M_{m}(\overline{a}(-x_{0}))\|_{X},\] and hence, at least one of the two summands on the right has to be larger than or equal to \(\varrho\). Using this very general result we can deduce our lower bound in the context of \(C^{k}\)-spaces. **Theorem C.2**.: _Let \(s,k\in\mathbb{N}\). Then there exists a constant \(c=c(s,k)>0\) with the following property: For any \(m\in\mathbb{N}\) and any map \(\overline{a}:C^{k}([-1,1]^{s};\mathbb{R})\to\mathbb{R}^{m}\) that is continuous with respect to some norm on \(C^{k}([-1,1]^{s};\mathbb{R})\) and any map \(M_{m}:\mathbb{R}^{m}\to C([-1,1]^{s};\mathbb{R})\) we have_ \[\sup_{\begin{subarray}{c}f\in C^{k}([-1,1]^{s};\mathbb{R})\\ \|f\|_{C^{k}([-1,1]^{s};\mathbb{R})}\leq 1\end{subarray}}\|f-M_{m}(\overline{a}(f))\|_{L^{\infty}([-1,1]^{s};\mathbb{R})}\geq c\cdot m^{-k/s}.\] Proof.: The idea is to apply Proposition C.1 to the spaces \(X:=C([-1,1]^{s};\mathbb{R})\), \(V:=C^{k}([-1,1]^{s};\mathbb{R})\) and the set \(K:=\{f\in C^{k}([-1,1]^{s};\mathbb{R}):\ \|f\|_{C^{k}([-1,1]^{s};\mathbb{R})}\leq 1\}\). Assume in the beginning that \(m=n^{s}\) with an integer \(n>1\). Pick \(\phi\in C^{\infty}(\mathbb{R}^{s})\) with \(\phi\equiv 1\) on \([-3/4,3/4]^{s}\) and \(\phi\equiv 0\) outside of \([-1,1]^{s}\). Fix \(c_{0}=c_{0}(s,k)>0\) with \[1\leq\|\phi\|_{C^{k}([-1,1]^{s};\mathbb{R})}\leq c_{0}.\] Let \(Q_{1},...,Q_{m}\) be the partition (disjoint up to null-sets) of \([-1,1]^{s}\) into closed cubes of sidelength \(2/n\). 
For every \(j\in\{1,...,m\}\) write \(Q_{j}=\prod_{\ell=1}^{s}[a_{\ell}-1/n,a_{\ell}+1/n]\) with some vector \(a=(a_{1},...,a_{s})\in[-1,1]^{s}\) and let \[\phi_{j}(x):=\phi(nx-na)\text{ for }x\in\mathbb{R}^{s}.\] By choice of \(\phi\), the maps \(\phi_{j}\) are supported on a proper subset of \(Q_{j}\) for every \(j\in\{1,...,m\}\), and the chain rule shows \[\partial^{\mathbf{k}}\phi_{j}(x)=n^{|\mathbf{k}|}\cdot\partial^{\mathbf{k}}\phi(nx-na)\text{ for every }\mathbf{k}\in\mathbb{N}_{0}^{s}\text{ and }x\in\mathbb{R}^{s},\] and hence in particular \[\|\partial^{\mathbf{k}}\phi_{j}\|_{L^{\infty}([-1,1]^{s};\mathbb{R})}\leq n^{|\mathbf{k}|}\cdot c_{0}\text{ for every }\mathbf{k}\in\mathbb{N}_{0}^{s}\text{ with }|\mathbf{k}|\leq k.\] (C.1) Let \(X_{m}:=\operatorname{span}\{\phi_{1},...,\phi_{m}\}\) and let \(S\in U_{1}(X_{m})\) be an element of the unit ball of \(X_{m}\). Then we can write \(S\) in the form \(S=\sum_{j=1}^{m}c_{j}\phi_{j}\) with real numbers \(c_{1},...,c_{m}\in\mathbb{R}\). Suppose there is \(j^{*}\in\{1,...,m\}\) with \(|c_{j^{*}}|>1\). Then we have \[\|S\|_{L^{\infty}([-1,1]^{s};\mathbb{R})}\geq\underbrace{|c_{j^{*}}|}_{>1}\cdot\underbrace{\|\phi_{j^{*}}\|_{L^{\infty}([-1,1]^{s};\mathbb{R})}}_{=1}>1,\] since the functions \(\phi_{j}\) have disjoint support. This is a contradiction to \(S\in U_{1}(X_{m})\) and we can thus infer that \(\max\limits_{j}|c_{j}|\leq 1\). Furthermore, we see again because the functions \(\phi_{j}\) have disjoint support that \[\|\partial^{\mathbf{k}}S\|_{L^{\infty}([-1,1]^{s};\mathbb{R})}=\max\limits_{j}\ |c_{j}|\cdot\|\partial^{\mathbf{k}}\phi_{j}\|_{L^{\infty}([-1,1]^{s};\mathbb{R})}\stackrel{{\text{(C.1)}}}{{\leq}}n^{|\mathbf{k}|}\cdot c_{0}\leq c_{0}\cdot n^{k}=c_{0}\cdot m^{k/s}\] for every \(\mathbf{k}\in\mathbb{N}_{0}^{s}\) with \(|\mathbf{k}|\leq k\) and hence \[\|S\|_{C^{k}([-1,1]^{s};\mathbb{R})}\leq c_{0}\cdot m^{k/s}.\] Thus, letting \(\varrho:=c_{0}^{-1}\cdot m^{-k/s}\) yields \(U_{\varrho}(X_{m})\subseteq K\), so we see by Proposition C.1 that \[\sup\limits_{f\in K}\|f-M_{m-1}(\overline{a}(f))\|_{L^{\infty}([-1,1]^{s};\mathbb{R})}\geq\varrho=c_{1}\cdot m^{-k/s}\] with \(c_{1}=c_{0}^{-1}\) for every map \(\overline{a}:X\to\mathbb{R}^{m-1}\) which is continuous with respect to some norm on \(V\) and any map \(M_{m-1}:\mathbb{R}^{m-1}\to X\). Using the inequality \(m\leq 2(m-1)\) (note \(m>1\)) we get \[\sup\limits_{f\in K}\|f-M_{m-1}(\overline{a}(f))\|_{L^{\infty}([-1,1]^{s};\mathbb{R})}\geq c_{1}\cdot m^{-k/s}\geq c_{1}\cdot(2(m-1))^{-k/s}\geq c_{2}\cdot(m-1)^{-k/s}\] with \(c_{2}=c_{1}\cdot 2^{-k/s}\). Hence, the claim has been shown for all numbers \(m\) of the form \(n^{s}-1\) with an integer \(n>1\). In the end, let \(m\in\mathbb{N}\) be arbitrary and pick \(n\in\mathbb{N}\) with \(n^{s}\leq m<(n+1)^{s}\). For given maps \(\overline{a}:V\to\mathbb{R}^{m}\) and \(M_{m}:\mathbb{R}^{m}\to X\) with \(\overline{a}\) continuous with respect to some norm on \(V\), let \[\widetilde{a}:\quad V\to\mathbb{R}^{(n+1)^{s}-1},\quad f\mapsto(\overline{a}(f),0)\quad\text{and}\quad M_{(n+1)^{s}-1}:\quad\mathbb{R}^{(n+1)^{s}-1}\to X,\quad(x,y)\mapsto M_{m}(x),\] where \(x\in\mathbb{R}^{m},\ y\in\mathbb{R}^{(n+1)^{s}-1-m}\). Then we get \[\sup\limits_{f\in K}\|f-M_{m}(\overline{a}(f))\|_{L^{\infty}([-1,1]^{s};\mathbb{R})}=\sup\limits_{f\in K}\|f-M_{(n+1)^{s}-1}(\widetilde{a}(f))\|_{L^{\infty}([-1,1]^{s};\mathbb{R})}\geq c_{2}\cdot((n+1)^{s}-1)^{-k/s}\geq c_{2}\cdot(2^{s}n^{s})^{-k/s}\geq c_{3}\cdot m^{-k/s}\] with \(c_{3}=c_{2}\cdot 2^{-k}\). Here we used the bound \((n+1)^{s}-1\leq(2n)^{s}\). This proves the full claim. 
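The packing construction in the proof above is easy to illustrate numerically. The following sketch (ours; the smooth glue \(h\) is one standard choice of bump construction, not taken from the paper) builds the rescaled bumps \(\phi_{j}\) for \(s=1\) and confirms the two properties the argument rests on, namely disjoint supports and unit sup-norm:

```python
import numpy as np

def h(t):
    """h(t) = exp(-1/t) for t > 0 and 0 otherwise (standard C^infinity glue)."""
    out = np.zeros_like(t)
    out[t > 0] = np.exp(-1.0 / t[t > 0])
    return out

def bump(x):
    """Smooth bump equal to 1 on [-3/4, 3/4] and 0 outside (-1, 1)."""
    t = (1.0 - np.abs(x)) / 0.25        # t = 1 at |x| = 3/4, t = 0 at |x| = 1
    return h(t) / (h(t) + h(1.0 - t))

n = 4
grid = np.linspace(-1.0, 1.0, 8001)
centers = -1.0 + (2.0 * np.arange(n) + 1.0) / n   # midpoints of the cubes Q_j
phis = np.stack([bump(n * grid - n * a) for a in centers])

# disjoint supports: at every grid point at most one phi_j is nonzero ...
assert np.max(np.sum(phis > 0, axis=0)) <= 1
# ... and each phi_j has sup-norm 1, so every +/-1 combination S has
# ||S||_inf = 1 while its derivatives scale like n^k, as used in (C.1)
assert np.allclose(phis.max(axis=1), 1.0)
```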
## Appendix D Approximation using Ridge Functions In this section we prove for \(s\in\mathbb{N}^{\geq 2}\) that every function in \(C^{k}([-1,1]^{s};\mathbb{R})\) can be uniformly approximated with an error of the order \(m^{-k/(s-1)}\) using a linear combination of \(m\) so-called _ridge functions_. In fact, we only consider ridge _polynomials_, meaning functions of the form \[\mathbb{R}^{s}\to\mathbb{R},\quad x\mapsto p(a^{T}x)\] for a fixed vector \(a\in\mathbb{R}^{s}\) and a polynomial \(p:\mathbb{R}\to\mathbb{R}\). Note that this result has already been obtained in a slightly different form in [20, Theorem 1.1]; namely, it is shown there that the rate of approximation \(m^{-k/(s-1)}\) can be achieved by functions of the form \(\sum_{j=1}^{m}f_{j}(a_{j}^{T}x)\) with \(a_{j}\in\mathbb{R}^{s}\) and univariate \(f_{j}\in L^{1}_{\text{loc}}(\mathbb{R})\). We will need the fact that the \(f_{j}\) can actually be chosen as polynomials and that the vectors \(a_{1},...,a_{m}\) can be chosen independently from the particular function \(f\). This is shown in the proof of [20], but not stated explicitly. For this reason, and in order to clarify the proof itself and to make the paper more self-contained, we decided to present the proof in this appendix. **Lemma D.1**.: _Let \(m,s\in\mathbb{N}\). Then we denote_ \[P_{m}^{s}:=\left\{\mathbb{R}^{s}\to\mathbb{R},\quad x\mapsto\sum_{\begin{subarray}{c}\mathbf{k}\in\mathbb{N}_{0}^{s}\\ |\mathbf{k}|\leq m\end{subarray}}a_{\mathbf{k}}x^{\mathbf{k}}:\ a_{\mathbf{k}}\in\mathbb{R}\right\}\] _as the set of real polynomials of degree at most \(m\). The subset of homogeneous polynomials of degree \(m\) is defined as_ \[H_{m}^{s}:=\left\{\mathbb{R}^{s}\to\mathbb{R},\quad x\mapsto\sum_{\begin{subarray}{c}\mathbf{k}\in\mathbb{N}_{0}^{s}\\ |\mathbf{k}|=m\end{subarray}}a_{\mathbf{k}}x^{\mathbf{k}}:\ a_{\mathbf{k}}\in\mathbb{R}\right\}.\] _There is a constant \(c=c(s)>0\) satisfying_ \[\dim(H_{m}^{s})\leq c\cdot m^{s-1}\quad\forall m\in\mathbb{N}.\] Proof.: It is immediate that the set \[\left\{\mathbb{R}^{s}\to\mathbb{R},\ x\mapsto x^{\mathbf{k}}:\ \mathbf{k}\in\mathbb{N}_{0}^{s},\ |\mathbf{k}|=m\right\}\] forms a basis of \(H_{m}^{s}\), hence \[\dim(H_{m}^{s})=\#\left\{\mathbf{k}\in\mathbb{N}_{0}^{s}:\ |\mathbf{k}|=m\right\}.\] This quantity clearly equals the number of possibilities for drawing \(m\) times from a set with \(s\) elements with replacement. Hence, we see \[\dim(H_{m}^{s})=\binom{s+m-1}{m},\] see for instance [7, Identity 143]. A further estimation shows \[\binom{s+m-1}{m}=\prod_{j=1}^{s-1}\frac{m+j}{j}=\prod_{j=1}^{s-1}\left(1+\frac{m}{j}\right)\leq(1+m)^{s-1}\leq 2^{s-1}\cdot m^{s-1}.\] Hence, the claim follows with \(c(s)=2^{s-1}\). A combination of results from [25] together with the fact that it is possible to approximate \(C^{k}\)-functions using polynomials of degree at most \(m\) with an error of the order \(m^{-k}\), as shown in Theorem B.15, yields the desired result. **Theorem D.2**.: _Let \(s,k\in\mathbb{N}\) with \(s\geq 2\) and \(r>0\). 
Then there exists a constant \(c=c(s,k)>0\) with the following property: For every \(m\in\mathbb{N}\) there are \(a_{1},...,a_{m}\in\mathbb{R}^{s}\setminus\{0\}\) with \(\|a_{j}\|=r\), such that for every function \(f\in C^{k}([-1,1]^{s};\mathbb{R})\) there are polynomials \(p_{1},...,p_{m}\in P_{m}^{1}\) satisfying_ \[\left\|f(x)-\sum_{j=1}^{m}p_{j}(a_{j}^{T}x)\right\|_{L^{\infty}([-1,1]^{s};\mathbb{R})}\leq c\cdot m^{-k/(s-1)}\cdot\|f\|_{C^{k}([-1,1]^{s};\mathbb{R})}.\] Proof.: We first pick the constant \(c_{1}=c_{1}(s)\) according to Lemma D.1. Then we define the constant \(c_{2}=c_{2}(s):=(2s)^{s-1}\cdot c_{1}(s)\) and let \(M\in\mathbb{N}\) be the largest integer satisfying \[c_{2}\cdot M^{s-1}\leq m.\] Here, we assume without loss of generality that \(m\geq c_{2}\), which can be justified by choosing \(p_{j}=0\) for every \(j\in\{1,...,m\}\) if \(m<c_{2}\), at the cost of possibly enlarging \(c\). Note that the choice of \(M\) implies \(c_{2}\cdot(2M)^{s-1}\geq c_{2}\cdot(M+1)^{s-1}>m\) and thus \[M\geq\frac{1}{2}\cdot c_{2}^{-1/(s-1)}\cdot m^{1/(s-1)}=c_{3}\cdot m^{1/(s-1)}\] (D.1) with \(c_{3}=c_{3}(s):=1/2\cdot c_{2}^{-1/(s-1)}\). Using [25, Proposition 5.9] and Lemma D.1 we can pick \(a_{1},...,a_{m}\in\mathbb{R}^{s}\setminus\{0\}\) satisfying \[H_{s(2M-1)}^{s}=\operatorname{span}\left\{x\mapsto(a_{j}^{T}x)^{s(2M-1)}:\ j\in\{1,...,m\}\right\},\] (D.2) where we used that \[c_{1}\cdot(s(2M-1))^{s-1}\leq c_{1}\cdot(2s)^{s-1}\cdot M^{s-1}=c_{2}\cdot M^{s-1}\leq m.\] Here we can assume \(\|a_{j}\|=r\) for every \(j\in\{1,...,m\}\) since multiplying each \(a_{j}\) with a positive constant does not change the span in (D.2). From [25, Corollary 5.12] we infer that \[P_{s(2M-1)}^{s}=\operatorname{span}\left\{x\mapsto(a_{j}^{T}x)^{\nu}:\ j\in\{1,...,m\},\ 0\leq\nu\leq s(2M-1)\right\}.\] (D.3) Let \(f\in C^{k}([-1,1]^{s};\mathbb{R})\). Then, according to Theorem B.15, there is a polynomial \(P:\mathbb{R}^{s}\to\mathbb{R}\) of _coordinatewise_ degree at most \(2M-1\) satisfying \[\|f-P\|_{L^{\infty}([-1,1]^{s};\mathbb{R})}\leq c_{4}\cdot M^{-k}\cdot\|f\|_{C^{k}([-1,1]^{s};\mathbb{R})},\] where \(c_{4}=c_{4}(s,k)>0\). Note that by construction it holds \(P\in P_{s(2M-1)}^{s}\). Using (D.3) we deduce the existence of polynomials \(p_{1},...,p_{m}:\mathbb{R}\to\mathbb{R}\) such that \[P(x)=\sum_{j=1}^{m}p_{j}(a_{j}^{T}x)\quad\text{for all }x\in\mathbb{R}^{s}.\] Combining the previously shown bounds we get \[\left\|f(x)-\sum_{j=1}^{m}p_{j}(a_{j}^{T}x)\right\|_{L^{\infty}([-1,1]^{s};\mathbb{R})}=\|f(x)-P(x)\|_{L^{\infty}([-1,1]^{s};\mathbb{R})}\leq c_{4}\cdot M^{-k}\cdot\|f\|_{C^{k}([-1,1]^{s};\mathbb{R})}\stackrel{{\text{(D.1)}}}{{\leq}}c\cdot m^{-k/(s-1)}\cdot\|f\|_{C^{k}([-1,1]^{s};\mathbb{R})},\] as desired. Here, we define \(c=c(s,k):=c_{4}\cdot c_{3}^{-k}\). **Acknowledgements.** PG acknowledges support by the German Science Foundation (DFG) in the context of the Emmy Noether junior research group VO 2594/1-1. FV is grateful to Hrushikesh Mhaskar for several helpful and interesting discussions.
2304.03507
Distributional Signals for Node Classification in Graph Neural Networks
In graph neural networks (GNNs), both node features and labels are examples of graph signals, a key notion in graph signal processing (GSP). While it is common in GSP to impose signal smoothness constraints in learning and estimation tasks, it is unclear how this can be done for discrete node labels. We bridge this gap by introducing the concept of distributional graph signals. In our framework, we work with the distributions of node labels instead of their values and propose notions of smoothness and non-uniformity of such distributional graph signals. We then propose a general regularization method for GNNs that allows us to encode distributional smoothness and non-uniformity of the model output in semi-supervised node classification tasks. Numerical experiments demonstrate that our method can significantly improve the performance of most base GNN models in different problem settings.
Feng Ji, See Hian Lee, Kai Zhao, Wee Peng Tay, Jielong Yang
2023-04-07T06:54:42Z
http://arxiv.org/abs/2304.03507v1
# Distributional Signals for Node Classification in Graph Neural Networks ###### Abstract In graph neural networks (GNNs), both node features and labels are examples of graph signals, a key notion in graph signal processing (GSP). While it is common in GSP to impose signal smoothness constraints in learning and estimation tasks, it is unclear how this can be done for discrete node labels. We bridge this gap by introducing the concept of distributional graph signals. In our framework, we work with the distributions of node labels instead of their values and propose notions of smoothness and non-uniformity of such distributional graph signals. We then propose a general regularization method for GNNs that allows us to encode distributional smoothness and non-uniformity of the model output in semi-supervised node classification tasks. Numerical experiments demonstrate that our method can significantly improve the performance of most base GNN models in different problem settings. Graph neural networks, graph signal processing, distributional signals, node classification, smoothness ## I Introduction We consider the semi-supervised node classification problem [1] that determines class labels of nodes in graphs given sample observations and possibly node features. Numerous graph neural network (GNN) models have been proposed to tackle this problem. One of the first models is the graph convolutional network (GCN) [2]. Interpreted geometrically, a GCN aggregates information such as node features from the neighborhood of each node of the graph. Algebraically, this process is equivalent to applying a graph convolution filter to node feature vectors. Subsequently, many GNN models with different considerations are introduced. Popular models include the graph attention network (GAT) [3] that learns weights between pairs of nodes during aggregation, and the hyperbolic graph convolutional neural network (HGCN) [4] that considers embedding of nodes of a graph in a hyperbolic space instead of a Euclidean space. For inductive learning, GraphSAGE [5] is proposed to generate low-dimensional vector representations for nodes that are useful for graphs with rich node attribute information. While new models draw inspiration from GCN, GCN itself is built upon the foundation of graph signal processing (GSP). GSP is a signal processing framework that handles graph-structured data [6, 7, 8]. A graph signal is a vector with each component corresponding to a node of a graph. Examples include node features and node labels. Moreover, convolutions used in models such as GCN are special cases of convolution filters in GSP [6]. All these show the close connections between GSP theory and GNNs. In GSP, signal smoothness (over the graph) is widely used to regularize inference tasks. Intuitively, a signal is smooth if its values are similar at each pair of nodes connected by an edge. One popular way to formally define signal smoothness is to use the Laplacian quadratic form. There are numerous GSP tools that leverage a smooth prior of the graph signals. For example, Laplacian (Tikhonov) regularization is proposed for noise removal in [6] and signal interpolation [9]. In [10], it is used in graph signal in-painting and anomaly detection. In [11], the same technique is used for graph topology inference. However, for GNNs, it is remarked in [12, Section 4.1.2] that "graph Laplacian regularization can hardly provide extra information that existing GNNs cannot capture". 
Therefore, a regularization scheme based on feature propagation is proposed. Its effectiveness is demonstrated by comparison with other methods, such as [13] and [14], which are based on adversarial learning, and [15], which co-trains GNN models with an additional agreement model that gives the probability that two nodes have the same label. We partially agree with the above assertion regarding graph Laplacian regularization, while retaining reservations about its full correctness. In this paper, we propose a method that is inspired by Laplacian regularization. As our main contribution, we introduce the notion of distributional graph signals, instead of considering graph signals. Analogous to the graph signal smoothness defined using the graph Laplacian, we define the smoothness of distributional graph signals. Together with another property known as non-uniformity, we devise a regularization scheme for GNNs in node classification tasks. This approach is easy to implement and can be used as a plug-in regularization term together with any given base GNN model. Its effectiveness is demonstrated with numerical results. The rest of the paper is organized as follows. We formally introduce the notion of distributional graph signals in Section II. In Section II-B, we motivate our framework by analyzing step graph signals naturally occurring in node classification problems. We study two key properties, namely smoothness and non-uniformity, of distributional graph signals in Section III-A and Section III-B. Based on the discussions, we describe the proposed regularization scheme in Section III-C. In Section III-D, we analyze the model and discuss its relations with related works. We present experimental results in Section IV and conclude in Section V. ## II Distributional graph signals In this section, we motivate and introduce distributional graph signals based on GSP theory.
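As a concrete reference point for the smoothness notion recalled in this introduction, the following minimal sketch (ours, not from the paper; the toy graph and the parameter mu are arbitrary choices) evaluates the graph Laplacian quadratic form and applies Laplacian (Tikhonov) regularization to a noisy signal:

```python
import numpy as np

# Toy graph with 4 nodes; L = D - A is the combinatorial graph Laplacian, and
# x^T L x = sum over edges (i, j) of (x_i - x_j)^2 is the standard smoothness
# measure: small values mean the signal varies little across edges.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

def smoothness(x):
    return float(x @ L @ x)

# Laplacian (Tikhonov) regularization of a noisy observation y:
#   x* = argmin_x ||x - y||^2 + mu * x^T L x  =  (I + mu*L)^{-1} y
mu = 1.0
y = np.array([1.0, 0.9, 1.1, -0.2])
x_star = np.linalg.solve(np.eye(4) + mu * L, y)
assert smoothness(x_star) < smoothness(y)   # the regularized signal is smoother
```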
2302.03851
ED-Batch: Efficient Automatic Batching of Dynamic Neural Networks via Learned Finite State Machines
Batching has a fundamental influence on the efficiency of deep neural network (DNN) execution. However, for dynamic DNNs, efficient batching is particularly challenging as the dataflow graph varies per input instance. As a result, state-of-the-art frameworks use heuristics that result in suboptimal batching decisions. Further, batching puts strict restrictions on memory adjacency and can lead to high data movement costs. In this paper, we provide an approach for batching dynamic DNNs based on finite state machines, which enables the automatic discovery of batching policies specialized for each DNN via reinforcement learning. Moreover, we find that memory planning that is aware of the batching policy can save significant data movement overheads, which is automated by a PQ tree-based algorithm we introduce. Experimental results show that our framework speeds up state-of-the-art frameworks by on average 1.15x, 1.39x, and 2.45x for chain-based, tree-based, and lattice-based DNNs across CPU and GPU.
Siyuan Chen, Pratik Fegade, Tianqi Chen, Phillip B. Gibbons, Todd C. Mowry
2023-02-08T02:56:36Z
http://arxiv.org/abs/2302.03851v1
# ED-Batch: Efficient Automatic Batching of Dynamic Neural Networks via Learned Finite State Machines ###### Abstract Batching has a fundamental influence on the efficiency of deep neural network (DNN) execution. However, for dynamic DNNs, efficient batching is particularly challenging as the dataflow graph varies per input instance. As a result, state-of-the-art frameworks use heuristics that result in suboptimal batching decisions. Further, batching puts strict restrictions on memory adjacency and can lead to high data movement costs. In this paper, we provide an approach for batching dynamic DNNs based on finite state machines, which enables the automatic discovery of batching policies specialized for each DNN via reinforcement learning. Moreover, we find that memory planning that is aware of the batching policy can save significant data movement overheads, which is automated by a PQ tree-based algorithm we introduce. Experimental results show that our framework speeds up state-of-the-art frameworks by on average 1.15x, 1.39x, and 2.45x for chain-based, tree-based, and lattice-based DNNs across CPU and GPU.
To guide the training of RL, we design a reward function inspired by a sufficient condition for the optimal batching policy.
For the static subgraphs of the dynamic DNN, we take a general approach to optimize them by memory-efficient batching. Our key insight is that the memory operations can be significantly minimized by better planning the inter-tensor memory layouts after batching, which we perform using a novel PQ tree-based (Booth & Lueker, 1976) algorithm that we have designed. In summary, this paper makes the following contributions:

* We propose an FSM-based batching algorithm to batch dynamic DNNs that finds a near-optimal batching policy.
* We design a PQ tree-based algorithm with almost linear complexity to reduce memory copies introduced by dynamic batching.
* We compare the performance of ED-Batch with state-of-the-art dynamic DNN frameworks on eight workloads and achieve on average 1.15x, 1.39x, and 2.45x speedup for chain-based, tree-based, and lattice-based networks across CPU and GPU.

We will make the source code for ED-Batch publicly available.

## 2 FSM-based Algorithm for Dynamic Batching

In this section, we identify the shortcomings of current batching techniques, propose a new FSM-based dynamic batching algorithm, and present the mechanism to learn it by RL.

### Problem Characterization

_Dynamic batching_ was initially proposed in TensorFlow Fold (Looks et al., 2017) and DyNet (Neubig et al., 2017b) to enable the batched execution of operations for dynamic DNNs. Specifically, given a mini-batch of input instances, a dataflow graph is generated for each input instance in the mini-batch, and each operation is given a type (indicating operation class, tensor shape, etc.). Upon execution, the runtime identifies batching opportunities within the dataflow graphs by executing operations of the same type together. Therefore, the batching algorithm cannot have high complexity, to avoid severe runtime overhead. However, minimizing the number of launched (batched) kernels is an NP-hard problem with no constant approximation algorithm1, making the batching problem extremely challenging. As a result, the heuristics used for dynamic batching in current systems often find a suboptimal policy. As shown later (Fig. 9), the number of batches executed by current frameworks can be cut by up to 3.27 times.

Footnote 1: Proved by reducing from the _shortest common supersequence_ problem (Räihä & Ukkonen, 1981) in Appendix A.1.

Specifically, previous state-of-the-art algorithms use heuristics depending on aggregated graph statistics to guide batching. The _depth-based algorithm_ in TensorFlow Fold (Looks et al., 2017) batches operations with the same type at the same topological depth (the input operation to the network has depth 0). The _agenda-based algorithm_ in DyNet (Neubig et al., 2017b) iteratively executes the operations of the type with minimal average topological depth.
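For concreteness, the following is a minimal Python sketch of the two heuristics on a toy dataflow graph. The `preds`/`op_type` dictionaries are an assumed stand-in for the frameworks' internal graph representations, not TensorFlow Fold's or DyNet's actual code.

```python
from collections import defaultdict

def topological_depth(preds):
    """preds: dict node -> list of predecessor nodes. Input ops get depth 0."""
    depth = {}
    def d(n):
        if n not in depth:
            depth[n] = 0 if not preds[n] else 1 + max(d(p) for p in preds[n])
        return depth[n]
    for n in preds:
        d(n)
    return depth

def depth_based(preds, op_type):
    """TF Fold style: one batch per (type, topological depth) pair."""
    depth = topological_depth(preds)
    groups = defaultdict(list)
    for n in preds:
        groups[(op_type[n], depth[n])].append(n)
    return list(groups.values())

def agenda_based(preds, op_type):
    """DyNet style: repeatedly execute the ready ops of the type whose
    remaining ops have minimal average topological depth."""
    depth = topological_depth(preds)
    remaining = set(preds)
    batches = []
    while remaining:
        frontier = [n for n in remaining if not (set(preds[n]) & remaining)]
        candidate_types = {op_type[n] for n in frontier}
        def avg_depth(t):
            ops = [n for n in remaining if op_type[n] == t]
            return sum(depth[n] for n in ops) / len(ops)
        t = min(candidate_types, key=avg_depth)
        batch = [n for n in frontier if op_type[n] == t]
        batches.append(batch)
        remaining -= set(batch)
    return batches

# Tiny chain a -> b -> c, all of type "I": both heuristics give three batches.
preds = {"a": [], "b": ["a"], "c": ["b"]}
op_type = {"a": "I", "b": "I", "c": "I"}
print(agenda_based(preds, op_type))  # [['a'], ['b'], ['c']]
```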
However, topological depth cannot always capture the regularity of the dataflow graph, resulting in sub-optimal batching choices. Fig. 1(a) shows a dataflow graph of the tree-based network, which builds upon the parse tree of a sentence with three types of operations: internal nodes (\(I\)), output nodes (\(O\)), and reduction nodes (\(R\)). The ideal batching policy executes all \(O\) nodes in one batch. However, the depth-based algorithm in Fig. 1(b) executes the \(O\) nodes in four batches because they have different topological depths. For the agenda-based algorithm, consider the scenario when it is deciding the next batch after batching the \(I\) nodes first (Fig. 1(c)). Because the \(O\) nodes have a lower average depth (\(\overline{Depth}=(1+1+1+1+2+3+4)/7=1.85\)) than the \(I\) nodes (\(\overline{Depth}=(1+2+3)/3=2\)), the algorithm will pick the \(O\) nodes for the next batch, resulting in an extra batch.

Figure 1: Example on current dynamic batching algorithms.

### FSM-based Dynamic Batching

To fully overcome the limitation of specific graph statistics, we found that an FSM-based approach (1) offers the opportunity to specialize for network structure under a rich design space of potential batching policies and (2) can generalize to any number of input instances, as long as they share the same regularity in topology. Shown in Alg. 1, the FSM-based dynamic batching approach is an iterative process of choosing the operation type for the next batch. The process differs from the agenda-based algorithm only in how it computes the next type for batching (line 3). During each iteration, the next type is decided by first encoding the current dataflow graph \(G\) into a state \(S=E(G)\), and then using a policy \(\pi\) to map the state to an operation type \(t=\pi(S)\). Then, the operations of type \(t\) on the frontier of \(G\) form the next batch. After they are executed, these operations are removed from \(G\) and the next iteration begins. For the model in Fig. 1(a), an optimal FSM-based batching policy is shown in Fig. 2(a), where we encode the dataflow graph by the set of types on the frontier. Fig. 2(b) shows the batching process. From iterations 1 to 3, the dataflow graph is encoded into \(S_{2}=\{I,O\}\), thus the policy continues to batch nodes of type \(I=\pi(S_{2})\), avoiding batching the \(O\) nodes as past heuristics would do. At the same time, it is not hard to see that this FSM-based batching policy can be applied to batch multiple input instances of different parse trees.
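As a sketch, the loop of Alg. 1 can be written as follows, on the same toy graph representation as above; `policy` is the learned FSM as a plain lookup table, and the frontier-type-set encoding is the one used in Fig. 2(a). This is an illustration, not ED-Batch's runtime code.

```python
def fsm_batch_execute(preds, op_type, policy, encode):
    """Alg. 1 sketch: encode the graph, ask the FSM policy for a type,
    and execute all ready operations of that type as one batch."""
    remaining = set(preds)
    batches = []
    while remaining:
        frontier = [n for n in remaining if not (set(preds[n]) & remaining)]
        state = encode(frontier, op_type)   # S = E(G)
        t = policy[state]                   # t = pi(S)
        batch = [n for n in frontier if op_type[n] == t]
        batches.append(batch)               # in a real runtime: one batched kernel
        remaining -= set(batch)
    return batches

def encode_base(frontier, op_type):
    """E_base: the set of operation types on the frontier."""
    return frozenset(op_type[n] for n in frontier)

# Two entries of an FSM policy for the tree network of Fig. 1(a):
policy = {frozenset({"I", "O"}): "I",   # keep batching internal nodes (Fig. 2)
          frozenset({"O"}): "O"}        # flush all output nodes in one batch
```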
### Using RL to Learn the FSM

As the FSM provides us with the design space of potential batching policies, we need an algorithm to come up with the best batching policy specialized for a given network structure. In ED-Batch, we adopt an RL-based approach for the design-space exploration and learn the best FSM with a reward function inspired by a sufficient condition for optimal batching. In RL, an agent learns to maximize the accumulated reward by exploring the environment. At time \(t\), the environment is encoded into a state \(S_{t}\), and the agent takes action \(a_{t}=\pi(S_{t})\) following the policy \(\pi\) and receives a reward \(r_{t}=R(S_{t},a_{t})\). After this, the environment transforms to the next state \(S_{t+1}\). This results in a sequence of states, actions, and rewards: \((S_{0},a_{0},r_{0},S_{1},a_{1},r_{1},\dots,S_{N-1},a_{N-1},r_{N-1},S_{N})\), where \(N\) is the number of time steps and \(S_{N}\) is an end state. The agent aims to maximize the accumulated reward \(\Sigma_{t}r_{t}\) by updating the policy \(\pi\).

For FSM-based dynamic batching, the environment is the dataflow graph, which is encoded into states by the encoding function \(E\). For every iteration, the agent decides on the type for the next batch, receives a reward on that decision, and the environment gets updated according to Alg. 1. We now elaborate on the state encoding, reward design, and training respectively. Before that, we explain some notation. For a dataflow graph \(G\), \(G_{t}\) refers to its status at step \(t\), \(G^{a}\) refers to the extracted subgraph of \(G\) composed solely of type-\(a\) operations (see example in Fig. 2(c)), \(Frontier(G)\) refers to the set of ready-to-execute operations, and \(Frontier_{a}(G)\) refers to the subset of \(Frontier(G)\) with type \(a\).

**State Encoding:** The design of the state encoding should be simple enough to avoid runtime overhead. In practice, we experimented with three ways of encoding: (1) \(E_{base}(G)=\{v.type\mid v\in Frontier(G)\}\) is the set of operation types on the frontier, (2) \(E_{max}(G)=(E_{base}(G),\operatorname{argmax}_{t\in T}|Frontier_{t}(G)|)\) is \(E_{base}(G)\) plus the most common type on the frontier, and (3) \(E_{sort}(G)=sort(\{v.type\in T\mid v\in Frontier(G)\},\;t:|Frontier_{t}(G)|)\) is \(E_{base}(G)\) sorted by the number of occurrences on the frontier. Empirically, we found that \(E_{sort}\) was the best among the three (§5.3).

**Reward:** We design the reward to minimize the number of batches, thus increasing the parallelism exploited. The reward function is defined as

\[r(S_{t},a_{t})=-1+\alpha\cdot\frac{|Frontier(G_{t}^{a_{t}})|}{|Frontier_{a_{t}}(G_{t})|} \tag{1}\]

where \(\alpha\) is a positive hyperparameter and \(S_{t}=E(G_{t})\). The constant \(-1\) in the reward penalizes every additional batch, thereby helping us minimize the number of batches.

**Lemma 1** (Sufficient Condition for Optimal Batching).: _If \(\frac{|Frontier(G_{t}^{a_{t}})|}{|Frontier_{a_{t}}(G_{t})|}=1\), then there exists a shortest batching sequence starting with \(a_{t}\)._

Footnote 2: Proof in Appendix A.2.

The second term is inspired by this sufficient condition for optimal batching (Lemma 1): it prioritizes a type for which all operations in the frontier of the subgraph of this type are ready to execute. For the tree-based network, this term prioritizes the batching choice made by the optimal batching policy in Fig. 2(a). For example, at iteration \(2\), this term is \(\frac{5}{7}\) and \(\frac{1}{1}\) for the \(O\) and \(I\) nodes respectively, and the \(I\) node is given higher priority for batching. For other networks, like the chain-based networks (Fig. 9), this sufficient condition continues to hold.

Figure 2: Dynamic Batching Policy by FSM.

**Training:** We adopt the tabular Q-learning algorithm (Watkins and Dayan, 1992) to learn the policy, and an N-step bootstrapping mechanism is used to increase the current batching choice's influence on longer previous steps. During the training phase, the algorithm learns a \(Q\) function, which maps each state and action pair to a real number indicating its score, for every new topology. We found that this simple algorithm learns the policy \(\pi\) within hundreds of trials (Table 3). During inference, at each state \(S\), we select the operation type with the highest \(Q\) value for the next batch, i.e. \(\pi(S)=argmax_{a}Q(S,a)\). This step is done by a lookup into the stored \(Q\) functions in constant time.
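Continuing the sketch, the reward of Eq. (1) and a plain one-step tabular Q-learning update might look as follows. The paper's N-step bootstrapping, the value of \(\alpha\), and the exploration schedule are not specified here, so those choices are illustrative assumptions.

```python
import random
from collections import defaultdict

def frontier_of(nodes, preds):
    """Ready-to-execute ops: those with no predecessor still pending."""
    return [n for n in nodes if not (set(preds[n]) & nodes)]

def reward(remaining, preds, op_type, a, alpha=0.5):
    """Eq. (1): r = -1 + alpha * |Frontier(G^a)| / |Frontier_a(G)|."""
    sub = {n for n in remaining if op_type[n] == a}        # G^a
    sub_frontier = frontier_of(sub, preds)                 # Frontier(G^a)
    frontier_a = [n for n in frontier_of(remaining, preds) if op_type[n] == a]
    return -1.0 + alpha * len(sub_frontier) / len(frontier_a)

def q_episode(preds, op_type, Q, encode, eps=0.1, lr=0.1):
    """One rollout of 1-step tabular Q-learning over a dataflow graph."""
    remaining = set(preds)
    while remaining:
        frontier = frontier_of(remaining, preds)
        s = encode(frontier, op_type)
        types = sorted({op_type[n] for n in frontier})
        a = random.choice(types) if random.random() < eps \
            else max(types, key=lambda t: Q[(s, t)])
        r = reward(remaining, preds, op_type, a)
        remaining -= {n for n in frontier if op_type[n] == a}
        nxt = frontier_of(remaining, preds)
        best_next = max((Q[(encode(nxt, op_type), t)]
                         for t in {op_type[n] for n in nxt}), default=0.0)
        Q[(s, a)] += lr * (r + best_next - Q[(s, a)])

Q = defaultdict(float)   # tabular Q function: (state, type) -> score
```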
## 3 Memory-efficient Batching for Static Subgraphs

### Background and Motivation

In order to invoke a batched tensor operator in a vendor library, the source and result operands in each batch are usually required to be contiguous in memory (as per the vendor library specifications). Current batching frameworks such as Cavs and DyNet ensure this by performing explicit memory gather and/or scatter operations, leading to high data movement. On the other hand, as mentioned above, Cortex relies on specialized, hand-optimized batched kernels instead of relying on vendor libraries. This approach, however, is unable to reuse the highly performant optimizations available as part of vendor libraries.

In ED-Batch, we take a different approach and fit the memory layout to the batching policy, so that the source and result operands for batched execution are already contiguous in memory. We illustrate the approach with an example. Fig. 3(a) shows an example code for a static subgraph and Fig. 3(b) shows its batched version. In Fig. 3(c), we compare two memory layouts. On the left, we directly allocate memory according to the variable's label; then two memory gathers for \([x_{1},x_{3}],[x_{2},x_{1}]\) and one scatter for \([x_{8},x_{6},x_{7}]\) are performed because they are either not contiguous or not aligned in memory. We say an operand of a batch is aligned in memory if the order of its operations matches the one in memory. Now, consider the memory allocation on the right, which allocates memory following \((x_{2},x_{1},x_{3},x_{4},x_{5},x_{8},x_{6},x_{7})\). Then, every source and result operand of the batched execution is already contiguous and aligned in memory, saving us from extra memory copies.

### PQ tree-based memory allocation

To find the ideal layout, we designed an almost-linear-complexity memory allocation algorithm based on the PQ tree, a tree-based structure used to solve the consecutive ones property (Meidanis et al., 1998) that was previously applied in biology for DNA-related analysis (Landau et al., 2005). We define the _ideal memory layout_ as a sequence of variables satisfying two constraints:

* **Adjacency Constraint:** Result and source operands in every batch should be adjacent in the sequence. E.g. \(\{x_{4},x_{5}\},\{x_{1},x_{3}\},\{x_{2},x_{1}\}\) for \(B1\), and \(\{x_{6},x_{7},x_{8}\},\{x_{4},x_{3},x_{5}\}\) for \(B2\) are adjacent in the sequence.
* **Alignment Constraint:** The order of the result and source operands should be aligned in a batch. E.g. for \(B1\), \(x_{4}\prec x_{5}\iff x_{1}\prec x_{3}\iff x_{2}\prec x_{1}\) in the sequence.

The adjacency constraint is satisfied by the PQ tree algorithm (Booth and Lueker, 1976). Given several subsets of a set \(S\), the PQ tree algorithm returns in linear time a data structure called a PQ tree, representing the potential permutations of \(S\) in which the elements of each subset are consecutive. Fig. 4(a) shows the PQ tree for the example code. The tree has three kinds of nodes: P-nodes, Q-nodes, and leaf nodes. Leaf nodes represent the variables; P-nodes have more than two children, whose appearance in the sequence is contiguous but can be permuted; Q-nodes have more than one child, whose appearance in the sequence follows a fixed order that can only be reversed. A depth-first traversal of the leaf nodes gives the sequence. For example, there is one P-node and three Q-nodes in Fig. 4(a). \(Q_{2}\) indicates the order should only be \((x_{2},x_{1},x_{3},Q_{3})\) or \((Q_{3},x_{3},x_{1},x_{2})\), while \(P_{1}\) indicates that one permutation of \(\{x_{6},x_{7},x_{8}\}\) appears in the sequence. The adjacency of \(\{x_{4},x_{5}\}\) is embedded in \(Q_{3}\), of \(\{x_{1},x_{3}\},\{x_{2},x_{1}\},\{x_{4},x_{3},x_{5}\}\) in \(Q_{2}\), and of \(\{x_{6},x_{7},x_{8}\}\) in \(P_{1}\). A possible sequence is \((x_{2},x_{1},x_{3},x_{4},x_{5},x_{6},x_{7},x_{8})\).
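As a toy illustration of the data structure (not Booth and Lueker's linear-time construction), the admissible leaf sequences of a PQ tree can be enumerated as follows; the root node combining \(Q_{2}\) and \(P_{1}\) is an assumption about the tree of Fig. 4(a).

```python
from itertools import permutations

class Leaf:
    def __init__(self, name): self.name = name
    def orders(self): yield [self.name]

class PNode:
    """Children may appear in any permutation (but stay contiguous)."""
    def __init__(self, *children): self.children = children
    def orders(self):
        for perm in permutations(self.children):
            yield from _combine(perm)

class QNode:
    """Children follow a fixed order that can only be reversed as a whole."""
    def __init__(self, *children): self.children = children
    def orders(self):
        yield from _combine(self.children)
        yield from _combine(tuple(reversed(self.children)))

def _combine(children):
    """All concatenations of one admissible order per child, in the given order."""
    if not children:
        yield []
        return
    head, tail = children[0], children[1:]
    for h in head.orders():
        for t in _combine(tail):
            yield h + t

q3 = QNode(Leaf("x4"), Leaf("x5"))
q2 = QNode(Leaf("x2"), Leaf("x1"), Leaf("x3"), q3)
p1 = PNode(Leaf("x6"), Leaf("x7"), Leaf("x8"))
root = QNode(q2, p1)      # assumed root joining the two subtrees
print(next(root.orders()))  # ['x2', 'x1', 'x3', 'x4', 'x5', 'x6', 'x7', 'x8']
```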
To satisfy the alignment constraint, we annotate each node of the PQ tree with an order. An annotated PQ tree is shown in Fig. 4(c), where a direction mark is attached to every Q-node, indicating its traversal order. As a result, any leaf sequence of a legal traversal of this annotated PQ tree is a memory allocation order satisfying the alignment constraint.

Figure 3: Example on memory allocation. \(\alpha\), \(\sigma\) represent operators.

Two passes obtain the order annotation of the PQ tree (Fig. 4, Alg. 2). The first pass, BroadcastConstraint, makes the tree structures of each batch's operands isomorphic. For \(B2\)'s operands, \(\{x_{3},x_{4},x_{5}\}\)'s tree structure is \(Q_{2}=(\dots,x_{3},Q_{3}=(x_{4},x_{5}))\), and \(\{x_{6},x_{7},x_{8}\}\)'s tree structure is \(P_{1}=(x_{6},x_{7},x_{8})\). They are made isomorphic in this pass. The second pass, DecideNodeOrder, derives the equivalence classes of node-order pairs among the P- and Q-nodes and searches for a compatible assignment of Q-node directions and P-node permutations complying with the equivalence relationship.

We walk through the algorithm on the example. At first, the PQ tree is constructed by the standard algorithm to satisfy the adjacency constraint (Fig. 4(a)). After that, in BroadcastConstraint, we parse the adjacency constraint of \(B2\) (line 11) by 1) parsing the adjacency constraint for each operand, e.g. \(\{x_{3},x_{4},x_{5}\}\) and \(\{x_{4},x_{5}\}\) for the source operand and \(\{x_{6},x_{7},x_{8}\}\) for the result operand, and 2) transforming them across operands using the alignment information, e.g. \(\{x_{4},x_{5}\}\) is transformed into \(\{x_{6},x_{7}\}\). After that, \(\{x_{6},x_{7}\}\) is applied to the PQ tree3 and we keep a record of the batches whose tree structure has changed (line 12). Now the tree structures of \(B2\)'s operands are isomorphic, and we apply this process to the other batches in a breadth-first search until no update of the tree structure happens.

Footnote 3: Performed by the standard Reduce step of the vanilla PQ tree algorithm, which restructures the tree to satisfy an adjacency constraint.

In DecideNodeOrder (line 18), we assign directions to the Q-nodes and permutations to the P-nodes. We start by parsing the equivalence relationships (line 22) among Q-node/direction pairs or P-node/permutation pairs from the isomorphic tree structures after the first pass, e.g. \(<Q_{2},\leftarrow>\Longleftrightarrow<Q_{3},\leftarrow>\) for \(B1\), and \(<Q_{3},\leftarrow>\Longleftrightarrow<Q_{4},\leftarrow>\) and \(<Q_{2},\leftarrow>\Longleftrightarrow<Q_{5},\rightarrow>\) for \(B2\). After that, we spread the equivalence relationship across batches with the support of a _union-find data structure_. Shown in Fig. 5, a graph carrying the equivalence relationship is constructed by iteratively calling Union on equivalent node-order pairs (line 23). When processing \(<Q_{2},\leftarrow>\Longleftrightarrow<Q_{5},\rightarrow>\), an \(R\) edge \(<Q_{2},Q_{5}>\) is added to the graph, indicating that \(Q_{2}\)'s direction is determined by the reverse of \(Q_{5}\)'s direction. When processing \(<Q_{2},\leftarrow>\Longleftrightarrow<Q_{3},\leftarrow>\), we first find the deciders of their orders, i.e. \(Q_{5}\) for \(Q_{2}\) and \(Q_{4}\) for \(Q_{3}\), and add an \(R\) edge between them. In this way, \(Q_{2}\) and \(Q_{3}\) always have the same order. Finally, the deciders in the graph are assigned arbitrary directions, which spread across the graph following the relationships on the edges (Fig. 5(b)).
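One way to implement this propagation is a union-find structure that also tracks whether an element's direction equals or reverses its decider's. The sketch below uses the example's node names and is an illustration, not ED-Batch's implementation.

```python
class OrientedUnionFind:
    """Union-find where each element tracks whether its direction equals (False)
    or reverses (True) the direction of its representative (the 'decider')."""

    def __init__(self):
        self.parent = {}
        self.flip = {}  # orientation of x relative to its parent

    def find(self, x):
        """Return (decider of x, flip of x relative to the decider)."""
        if self.parent.setdefault(x, x) == x:
            self.flip.setdefault(x, False)
            return x, False
        root, parent_flip = self.find(self.parent[x])
        self.flip[x] ^= parent_flip   # path compression: re-express flip w.r.t. root
        self.parent[x] = root
        return root, self.flip[x]

    def union(self, x, y, reverse):
        """Record <x, d> <=> <y, d> (reverse=False) or <x, d> <=> <y, -d> (True)."""
        rx, fx = self.find(x)
        ry, fy = self.find(y)
        if rx != ry:
            self.parent[ry] = rx
            self.flip[ry] = fx ^ fy ^ reverse

uf = OrientedUnionFind()
uf.union("Q2", "Q3", reverse=False)  # B1: same direction
uf.union("Q3", "Q4", reverse=False)  # B2
uf.union("Q2", "Q5", reverse=True)   # B2: Q2 is the reverse of Q5
print(uf.find("Q5"))  # Q5's decider and its relative orientation, e.g. ('Q2', True)
```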
The PQ tree memory allocation algorithm's time complexity is given in Lemma 2, showing that the algorithm is linear in the problem size if the number of operations in a single batch is bounded by a constant.

**Lemma 2**.: _The PQ tree memory allocation algorithm's time complexity is \(O\big(\sum_{b\in\text{batches}}|b|\cdot\max_{b\in\text{batches}}|b|^{2}\big)\), where \(|\cdot|\) counts the operations in a batch._

Right now, the PQ tree memory optimization is applied to static subgraphs because its execution time still does not fit into the tight runtime constraints of dynamic DNNs. But the algorithm itself, and the idea of better memory planning for batching, is applicable to any batching problem. A detailed explanation of the algorithm is in Appendix B.

Figure 4: Example for the PQ tree-based algorithm. \(x_{1}\)-\(x_{8}\) are variables.

Figure 5: Illustration for the order decision in Pass 2.

## 4 Implementation

The optimizations in ED-Batch are fully automated and implemented as a runtime extension to DyNet in 5k lines of C++ code. The user can optionally enable the batching optimization by passing a flag when launching the application, and enable the static subgraph optimization by defining subgraphs in DyNet's language with a few annotations. Before execution, the RL algorithm learns the batching policy and ED-Batch optimizes the static subgraphs by the approach in §3. Upon execution, ED-Batch calls DyNet's executor for batched execution, which is supported by vendor libraries.

## 5 Evaluation

### Experiment Setup

We evaluate our framework against DyNet and Cavs, two state-of-the-art runtimes for dynamic DNNs, which have been shown to be faster than traditional frameworks like PyTorch and TensorFlow (Xu et al., 2018; Neubig et al., 2017a). Cavs' open-sourced version has worse performance than DyNet, as stated in Fegade, Chen, Gibbons, and Mowry (2021), because certain optimizations are not included. To make a fair comparison with Cavs, we use an extended version of DyNet with the main optimizations of Cavs enabled as a reference for Cavs' performance (referred to as Cavs DyNet). Namely, the static subgraphs in the network are pre-defined and batching optimization is applied to them. ED-Batch is implemented on top of this extended version, with the RL-based dynamic batching algorithm (\(E_{sort}\) for state encoding) and memory optimization on the static subgraphs by PQ tree. On the other side, the agenda-based and depth-based algorithms are used for dynamic batching in Vanilla/Cavs DyNet. Depending on the workload and configuration, the better-performing algorithm is chosen for Vanilla/Cavs DyNet in the evaluation.

We test the framework on 8 workloads, shown in Table 1. They follow an increase in dynamism, from chains to trees and graphs. Except for the lattice-based networks, all workloads have appeared as benchmarks in past work. We run our experiments on a Linux server with an Intel Xeon E2590 CPU (28 physical cores) and an Nvidia V100 GPU. The machine runs CentOS 7.7, CUDA 11.1, and cuDNN 8.0. We use DyNet's latest version (Aug 2022, commit c418b09) for evaluation.

### Overall Performance

We compare ED-Batch's end-to-end inference throughput against Vanilla/Cavs DyNet. We follow past work to evaluate different batch sizes (1, 8, 32, 64, 128, 256) and model sizes (32, 64, 128, 256, 512), where the model size is the hidden vector length and the embedding size. The throughput is calculated as the maximum throughput among all batch size choices.
For all cases, ED-Batch outperforms Vanilla DyNet significantly, due to the reduction in graph construction and runtime overhead enabled by the pre-definition of the static subgraphs. We now discuss the comparison with Cavs DyNet.

For the chain-based models, the BiLSTM-tagger and LSTM-NMT, ED-Batch achieved on average 1.20x, 1.11x speedup on CPU and 1.20x, 1.12x on GPU. Because the network structure is basically made up of chains, as shown in Fig. 9, both the agenda-based algorithm and the FSM-based batching algorithm find the optimal batching policy. On the other hand, the LSTMCell is 1.54x faster with the PQ-tree optimization compared to the one with DyNet's memory allocation, which explains the speedup.

For the tree-based models, compared to the agenda/depth-based batching heuristics, ED-Batch reduces the number of batches by 37%. This is because the FSM-based algorithm executes the output nodes in one batch (Fig. 1). For TreeLSTM and TreeGRU, ED-Batch achieved on average 1.63x, 1.46x speedup on CPU and 1.23x, 1.29x speedup on GPU. ED-Batch's performance is close to Cavs DyNet on MVRNN because the execution is bounded by matrix-matrix multiplications, which can hardly benefit from extra batch parallelism and the reduction in runtime overhead.

For the lattice-based models, the LatticeLSTM and LatticeGRU, ED-Batch increases Cavs DyNet's throughput significantly, by 1.32-2.97x on CPU and 2.54-3.71x on GPU, which is attributed to both the better dynamic batching and the static subgraph optimization. For the lattice-based models' network structure in Fig. 7, the FSM-based algorithm prioritizes the execution of the character cell and delays the execution of the word cell, whereas the depth/agenda-based algorithms batch the character cell and word cell more arbitrarily. As a result, the number of batches is reduced by up to 3.27 times (Fig. 9). For the static subgraphs, the latency of the LSTMCell and GRUCell used is cut by 34% and 35%, which adds to the speedup.

| **Model** | **Short name** | **Dataset** |
| --- | --- | --- |
| A bi-directional LSTM Named Entity Tagger (Huang et al., 2015) | BiLSTM-Tagger | WikiNER English Corpus (Nothman et al., 2013) |
| An LSTM-based encoder-decoder model for neural machine translation | LSTM-NMT | IWSLT 2015 En-Vi |
| N-ary TreeLSTM (Tai et al., 2015a) | TreeLSTM | Penn treebank (Marcus et al., 1994) |
| An extension to TreeLSTM that contains two types of internal nodes, each with 50% probability | TreeLSTM-2Type | |
| A lattice-based LSTM network for Chinese NER (Zhang & Yang, 2018) | LatticeLSTM | Lattices generated based on a Chinese NER dataset |

Table 1: Models and datasets used in evaluation.

### Analysis

**Where does ED-Batch's speedup come from?** Shown in Fig. 8, we decompose the inference pass into construction time, scheduling time, and execution time. Construction time is the time to define the dataflow graph. Scheduling time is the time for the dynamic batching analysis. Execution time is the rest of the forward pass, mainly composed of the execution of operations. While having similar construction/scheduling time, ED-Batch speeds up Cavs DyNet through the great cutdown in execution time, benefiting from better batching and fewer kernels for data movement.

**Does the algorithm find a good enough batching policy?** Shown in Fig. 9, compared to the agenda/depth-based batching algorithms, ED-Batch's FSM-based batching algorithm uniformly executes fewer batches.
Among the three state encoding choices, \(E_{sort}\) is slightly better because of its stronger expressiveness, finding the optimal batching policy on BiLSTM-tagger, LSTM-NMT, and the tree-based models, and executing 23% and 44% more batches on TreeLSTM-2Type and the lattice-based models. To demonstrate the efficiency of the reward function, we measure the number of batches executed by a sufficient-condition-guided heuristic, which selects for the next batch the type that maximizes the second term in Eq. 1. Shown in Fig. 9, this heuristic executes a number of batches comparable to the best FSM-based algorithm. However, this heuristic has higher time complexity and adds unacceptable runtime overhead. Thus, on the evaluated workloads, the FSM-based algorithm can be treated as a time-efficient distillation of this heuristic.

Figure 6: ED-Batch v.s. Vanilla/Cavs DyNet: Inference Throughput.

Figure 7: Lattice Network for Chinese NER. The topology for one input sentence is a chain of character cells with jump links of word cells. The agenda/depth-based batching algorithms fail to batch the word cells together.

Figure 8: Cavs DyNet v.s. ED-Batch: Time decomposition when model size is 128 and batch size is 64.

Figure 9: The number of batches for different batching algorithms. FSM-base/sort/max refers to the FSM-based algorithm with different state encodings.

**Ablation Study of the Static Subgraph Optimization.** In Table 2, we evaluate ED-Batch's memory layout optimization on the static subgraphs. For all evaluated cases, the PQ-tree algorithm finds the _ideal memory allocation order_ (the remaining data transfer is caused by broadcasts that cannot be optimized by a better memory layout). Compared to the baseline, ED-Batch reduces the latency of the static subgraph by up to 1.6x, memory kernels by up to 4x, and the memory transfer amount by up to 66x. This significant reduction in memory transfer can be attributed to the better arrangement of the weight parameters. For example, there are four gates in the LSTM cell that perform the feed-forward arithmetic \(y_{i}=W_{i}x_{i}+b_{i}\), which are executed in a batch. The memory arrangement in ED-Batch makes sure the inputs, parameters, and intermediate results of batched kernels are contiguous in memory, which is not considered by DyNet's policy. Since the weight matrix occupies memory proportional to the square of the problem size, this leads to a huge reduction in memory transfer.

**Comparison with a more specialized framework.** Cortex (Fegade et al., 2021) is highly specialized for optimizing a class of recursive neural networks, and it requires the user to not only express the tensor computation but also specify low-level optimizations specific to the underlying hardware, both through TVM's domain-specific language (Chen et al., 2018). We compare ED-Batch with Cortex on TreeLSTM and TreeGRU. To make more of an apples-to-apples comparison in terms of the user's effort in developing the application, we enabled Cortex's automated optimizations like _linearization_ and _auto-batching_ and used simple policies for the optional user-given (manual) optimizations like kernel fusion and loop transformation (details in Appendix C). As shown in Table 5, ED-Batch can speed up Cortex by up to 3.98x.

**The compilation overhead.** We trained the RL for up to 1000 trials and stopped early if the number of batches reached the lower bound (checked every 50 iterations). On the static subgraphs, batching is performed as a grid search and the PQ tree optimization is applied afterward.
As shown in Table 3 and Table 4, it takes tens of milliseconds to optimize the static subgraph and up to 22 seconds to learn the batching policy for the tested workloads.

## 6 Related Work

There are a variety of frameworks specialized for dynamic neural network training (Neubig et al., 2017; Looks et al., 2017; Xu et al., 2018), inference (Fegade et al., 2021; Fegade, 2023; Zha et al., 2019), and serving (Gao et al., 2018). Concerning batching for dynamic neural networks, DyNet (Neubig et al., 2017) and TFFold (Looks et al., 2017) laid the system and algorithm foundation to support _dynamic batching_. However, their batching heuristics are often sub-optimal, as we saw above. Nevertheless, their algorithms have been used in other frameworks, like Cavs (Xu et al., 2018) and ACRoBat (Fegade, 2023). Apart from batching, another major direction of optimization is to extract the static information from the dynamic DNN and optimize it at compile time. Cavs (Xu et al., 2018) proposed the idea of predefining the static subgraphs, which was later extended in (Zha et al., 2019) to batch at different granularities. ED-Batch adopts this multi-granularity batching idea to perform batching on both the graph level and the subgraph level. For static subgraphs, traditional techniques are used for optimization, like the kernel fusion in Cavs and Cortex (Fegade et al., 2021), the AoT compilation in (Kwon et al., 2020), and the specialized kernel generation in ACRoBat (Fegade, 2023). However, though highly efficient, these optimizations can hardly be automated because either the developer or the user needs to optimize each subgraph manually. In ED-Batch, fully automated runtime optimizations are used instead to enable both efficient execution and generalization.

## 7 Conclusion

In ED-Batch, we designed an FSM-based algorithm for the batched execution of dynamic DNNs. Also, we mitigated the memory transfer introduced by batching through a memory layout optimization based on the PQ tree algorithm. The experimental results showed that our approach achieved significant speedup compared to current frameworks through the reduction in the number of batches and data movement.

| batch size | model size | TreeGRU Cortex | TreeGRU Ours | TreeLSTM Cortex | TreeLSTM Ours |
| --- | --- | --- | --- | --- | --- |
| 10 | 256 | 2.30 | **2.27** | **2.244** | 2.78 |
| 10 | 512 | 5.60 | **3.04** | 8.500 | **4.70** |
| 20 | 256 | 3.73 | **3.03** | **3.460** | 3.52 |
| 20 | 512 | 11.70 | **3.70** | 19.210 | **4.82** |

Table 5: ED-Batch v.s. Cortex: Inference Latency (ms).

| Subgraph | Latency (ms) | ratio | Mem kernels/subgraph | ratio | Memory amount (kB) | ratio |
| --- | --- | --- | --- | --- | --- | --- |
| GRUCell | 0.11 / **0.07** | 1.54 | 6 / **2** | 3.0 | 666.0 / **14.0** | 47.57 |
| LSTMCell | 0.2 / **0.13** | 1.52 | 4 / **1** | 4.0 | 1054.0 / **16.0** | 65.87 |
| MVCell | 0.20 / 0.08 | 0.96 | 2 / 2 | 1.0 | 260.0 / 260.0 | 1.0 |
| TreeGRU-Internal | 0.2 / **0.15** | 1.6 | 8 / **2** | 4.0 | 552.0 / **16.0** | 34.5 |
| TreeGRU-Leaf | 0.09 / **0.07** | 1.4 | 4 / **2** | 2.0 | 268.0 / **8.0** | 33.5 |
| TreeLSTM-Internal | 0.19 / **0.12** | 1.61 | 7 / **3** | 2.33 | 1054.0 / **21.8** | 48.36 |
| TreeLSTM-Leaf | 0.12 / **0.09** | 1.27 | 3 / **1** | 3.0 | 396.0 / **6.0** | 66.0 |

Table 2: Batching with DyNet's memory allocation (left) v.s. batching with PQ tree-based memory allocation (right) on static subgraphs; batch size = 8, model size = 64.
## Acknowledgements

We thank Peking University's supercomputing team for providing the hardware to run the experiments and thank Kevin Huang of CMU for his help in the early exploration of this research space.
2304.10553
Sparsity in neural networks can improve their privacy
This article measures how sparsity can make neural networks more robust to membership inference attacks. The obtained empirical results show that sparsity improves the privacy of the network, while preserving comparable performances on the task at hand. This empirical study completes and extends existing literature.
Antoine Gonon, Léon Zheng, Clément Lalanne, Quoc-Tung Le, Guillaume Lauga, Can Pouliquen
2023-04-20T09:44:24Z
http://arxiv.org/abs/2304.10553v2
# Sparsity can improve privacy of neural networks

###### Abstract

This work measures how sparsity can make neural networks more robust to membership inference attacks. The obtained empirical results show that sparsity improves the privacy of the network, while preserving comparable performances on the task at hand. This empirical study completes and extends existing literature.

## 1 Introduction

Deep neural networks are state-of-the-art for many learning problems. In practice, it is possible to tune the parameters of a given network in order to perfectly interpolate the available data [21]. This overfitting regime is of practical interest, since good performances can be obtained this way [3]. However, it comes with an increased risk in terms of privacy [14], since the network memorizes information about the training data, up to the point of interpolating it. Some of this information might be confidential. This raises the question of what information can be inferred given a black-box access to the model. To detect an overfitting situation, an indicator is given by the ratio of the number of parameters to the number of data points available: the more parameters there are, the more the model can interpolate the data. In order to hinder the capacity of the model to overfit, and thus to store confidential information, this work studies the role of the number of nonzero parameters used. _Can we find a good trade-off between model accuracy and privacy by tuning the sparsity (number of nonzero parameters) of neural networks?_

Attacks such as "Membership Inference Attacks" (MIAs) can infer the membership of a data point to the training set [16], using only a black-box access to the targeted model. This can be problematic in the case of sensitive data (medical data, etc.). Given a network, how could one reduce the risk of such attacks, while preserving its performances as much as possible? Numerous procedures have been proposed to defend against MIAs [8]. In this work, the studied approach consists in decreasing the number of nonzero parameters used by the network in order to reduce its memorization capacity, while preserving its accuracy as much as possible.

Related works. The links between neural network sparsity and privacy have already been partially explored but, to the best of our knowledge, it has not yet been shown that sparsity improves privacy _without further adjustment_ of the training algorithm. A comparison with the literature is done in section 4.

Contributions and results. The results of the experiments in section 4 support the hypothesis that sparsity improves the defense against MIAs while maintaining comparable performances on the learning task. However, the standard deviations reported in the experiments suggest that larger scale experiments are needed before confirming this trend. Figure 1 shows that the trade-off between robustness to MIAs and network accuracy is similar between unstructured sparsity, obtained by an Iterative Magnitude Pruning (IMP) [5] of the weights, and structured "butterfly" sparsity, where the weight matrices are constrained to admit some structured sparse factorization [12, 4]. To the best of our knowledge, the "butterfly" structure has not been studied before in this context. This structure achieves similar trade-offs as IMP, which is remarkable, as the structure is fixed beforehand, independently of the data.

Figure 1: Means and standard deviations of the accuracy and defense level of various sparse networks. The percentage of nonzero weights is given in blue for IMP (\(\star\) \(p\%\)), and in red for Butterfly (\(\star\) \(p\%\)). The color emphasizes the sparsity level. The line has a slope of \(-3.25\).
Moreover, software and hardware optimizations can be envisioned to leverage butterfly sparsity in order to implement matrix-vector multiplications more efficiently than without sparsity or with unstructured sparsity. Experiments on CIFAR-10 show that when the percentage of nonzero weights in ResNet-20 is between \(3.4\%\) and \(17.3\%\), a relative loss of \(p\%\) in accuracy, compared to the trained dense network1, leads to a relative gain of \(3.6\times p\%\) in defense against MIA, see Figure 1.

Footnote 1: The dense network is the original network, with 100% of the nonzero weights.

Section 2 introduces the MIAs used for the experiments. Section 3 describes the types of sparsity used to defend against MIAs. The results of the experiments are presented in section 4, with a comparison to the literature.

## 2 MIA with a shadow model

Let \(\mathcal{D}\) be a dataset and \(\mathcal{D}_{\text{train}}\subset\mathcal{D}\) be a training subset. The associated _membership function_ \(m_{\mathcal{D}_{\text{train}},\mathcal{D}}\) is defined by:

\[m_{\mathcal{D}_{\text{train}},\mathcal{D}}:x\in\mathcal{D}\mapsto\left\{\begin{array}{ll}1&\text{if }x\in\mathcal{D}_{\text{train}},\\ 0&\text{otherwise}.\end{array}\right.\]

Given a dataset \(\mathcal{D}^{\text{target}}\), and a target network \(\mathcal{R}^{\text{target}}\) trained on a subset \(\mathcal{D}^{\text{target}}_{\text{train}}\) of \(\mathcal{D}^{\text{target}}\), a MIA consists in retrieving the associated membership function \(m_{\text{target}}:=m_{\mathcal{D}^{\text{target}}_{\text{train}},\mathcal{D}^{\text{target}}}\), with only a _black-box_ access to the function \(x\mapsto\mathcal{R}^{\text{target}}(x)\). Most of the known attacks are based on an observation of the output of the \(\mathcal{R}^{\text{target}}\) model, locally around \(x\) [8]. In general, these attacks seek to measure the confidence of the model in its predictions made locally around \(x\). If the measured confidence is high enough, then the attacker answers positively to the membership question. In practice, the most efficient attacks consist in training a discriminator model that makes a decision based on local information of \(\mathcal{R}^{\text{target}}\) around \(x\). This discriminator is trained from a _shadow_ network [8], as explained below (see also Figure 2).

Suppose that the attacker has access to a dataset \(\mathcal{D}^{\text{shadow}}\) from the same distribution as \(\mathcal{D}^{\text{target}}\). It then trains its own shadow network \(\mathcal{R}^{\text{shadow}}\) on a subset \(\mathcal{D}^{\text{shadow}}_{\text{train}}\) of the data it owns. Ideally, \(\mathcal{R}^{\text{shadow}}\) is trained under the same conditions as \(\mathcal{R}^{\text{target}}\) (same architecture and same optimization algorithm). The attacker then has a tuple \((\mathcal{R}^{\text{shadow}},\mathcal{D}^{\text{shadow}},\mathcal{D}^{\text{shadow}}_{\text{train}})\) which is similar to \((\mathcal{R}^{\text{target}},\mathcal{D}^{\text{target}},\mathcal{D}^{\text{target}}_{\text{train}})\), and it knows the shadow membership function \(m_{\text{shadow}}:=m_{\mathcal{D}^{\text{shadow}}_{\text{train}},\mathcal{D}^{\text{shadow}}}\).
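A minimal sketch of assembling the attack dataset from the shadow model is given below (PyTorch); the model and dataset objects are assumed placeholders, and the labeling follows the membership function defined above.

```python
import torch

def build_attack_dataset(shadow_net, d_shadow_train, d_shadow_test):
    """Label shadow outputs with the known shadow membership function:
    1 for training members, 0 for the rest."""
    features, labels = [], []
    shadow_net.eval()
    with torch.no_grad():
        for member, dataset in ((1, d_shadow_train), (0, d_shadow_test)):
            for x, _ in dataset:
                out = shadow_net(x.unsqueeze(0)).softmax(dim=1).squeeze(0)
                features.append(out)
                labels.append(member)
    return torch.stack(features), torch.tensor(labels)

# A discriminator trained on (features, labels) is then applied to the
# target network's outputs to guess membership in D^target_train.
```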
Discriminator. The attacker can then train a discriminator to approximate \(m_{\text{shadow}}\), given a black-box access to \(\mathcal{R}^{\text{shadow}}\). This discriminator can then be used to approximate \(m_{\text{target}}\) given a black-box access to \(\mathcal{R}^{\text{target}}\). The model for the discriminator can be any classical classifier (logistic regression, neural network, etc.) [8].

## 3 Defense and neural network pruning

Training sparse neural networks is first motivated by needs for frugality in resources (memory, inference time, training time, etc.). Here, the following hypothesis is investigated: sparsity can limit the model's ability to store confidential information about the data it has been trained on. A perfectly confidential network has not learned anything from its data and has no practical interest. A trade-off between confidentiality and accuracy must be made according to the task at hand. In what follows, two types of sparsity are considered.

### Unstructured sparsity via IMP

In the first case, no specific structure is imposed on the set of nonzero weights. The weights that are set to zero (pruned) are selected by an iterative magnitude pruning process (IMP) [5]: (i) train a network the usual way, (ii) prune \(p\%\) of the weights having the smallest magnitude, (iii) adjust the remaining weights by re-training the network (weights that have been pruned are masked and are no longer updated), then go back to (ii) until the desired level of sparsity is reached. This procedure allows to find sparse networks with good empirical statistical properties [5, 6].

### Structured butterfly sparsity

In the second case, the sparsity is structured: the weight matrices of the neural network are constrained to admit a "butterfly" factorization [11], for which the associated matrix-vector multiplication can be efficiently implemented [4]. A square matrix \(\mathbf{W}\) of size \(N:=2^{L}\) has a butterfly factorization if it can be written as an exact product \(\mathbf{W}=\mathbf{X}^{(1)}\dots\mathbf{X}^{(L)}\) of \(L\) square factors of size \(N\), where each factor satisfies the support constraint2 \(\text{supp}(\mathbf{X}^{(\ell)})\subseteq\text{supp}(\mathbf{S}_{\text{bf}}^{(\ell)})\), with \(\mathbf{S}_{\text{bf}}^{(\ell)}:=\mathbf{I}_{2^{\ell-1}}\otimes\left[\begin{smallmatrix}1&1\\ 1&1\end{smallmatrix}\right]\otimes\mathbf{I}_{N/2^{\ell}}\). See Figure 3 for an illustration. The factors have at most two nonzero entries per row and per column. Leveraging this factorization, matrix-vector multiplication has a complexity of \(\mathcal{O}(N\log N)\), against \(\mathcal{O}(N^{2})\) in general.

Footnote 2: \(\text{supp}(\cdot)\) is the set of nonzero entries of a matrix, \(\mathbf{I}_{N}\) is the identity matrix of size \(N\times N\), and \(\otimes\) is the Kronecker product.

To enforce the butterfly structure in a neural network, the weight matrices \(\mathbf{W}\) are parameterized as \(\mathbf{W}=\mathbf{X}^{(1)}\dots\mathbf{X}^{(L)}\), and only the nonzero coefficients of \(\mathbf{X}^{(1)},\dots,\mathbf{X}^{(L)}\) are initialized and then optimized by stochastic gradient descent. In general, for a matrix \(\mathbf{W}\) of arbitrary size, it is also possible to impose a similar structure, but the definitions are more involved. We refer the reader to [12]. In the case of a convolution layer, the matrix \(\mathbf{W}\) for which we impose such a structure corresponds to the concatenation of convolution kernels [12].
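As a sketch (NumPy, with the notation above), the butterfly supports can be built with Kronecker products; the random filling of the supports is an illustrative assumption. The product of the factors is a dense matrix parameterized by only \(2NL=\mathcal{O}(N\log N)\) nonzero coefficients.

```python
import numpy as np

def butterfly_supports(L):
    """S_bf^(l) = I_{2^(l-1)} kron [[1,1],[1,1]] kron I_{N/2^l}, for l = 1..L."""
    N = 2 ** L
    block = np.ones((2, 2))
    return [np.kron(np.kron(np.eye(2 ** (l - 1)), block), np.eye(N // 2 ** l))
            for l in range(1, L + 1)]

L = 4                                   # N = 16, as in Figure 3
rng = np.random.default_rng(0)
factors = [S * rng.standard_normal(S.shape) for S in butterfly_supports(L)]

W = np.linalg.multi_dot(factors)        # dense 16 x 16 matrix
nnz = sum(int((f != 0).sum()) for f in factors)
print(W.shape, nnz)                     # (16, 16), 2*N*L = 128 nonzeros vs N^2 = 256
```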
In our experiments, for a fixed size of \(\mathbf{W}\) and a fixed number of factors \(L\), the rectangular butterfly factorization is parameterized according to a _monotone_ chain following [12]. Among all possible chains, the one with the minimal number of parameters is selected. Butterfly networks can reach empirical performances comparable to a dense network on image classification tasks [4, 12].

## 4 Experimental results

All hyperparameters (including the discriminator architecture) have been determined following a grid search, averaged over three experiments to account for randomness.

Dataset. Experiments are performed on the CIFAR-10 dataset (60000 images \(32\times 32\times 3\), 10 classes). The dataset is randomly (uniformly) partitioned into 4 subsets \(\mathcal{D}_{\text{train}}^{\text{target}}\), \(\mathcal{D}_{\text{test}}^{\text{target}}\), \(\mathcal{D}_{\text{train}}^{\text{shadow}}\), \(\mathcal{D}_{\text{test}}^{\text{shadow}}\) of 15000 images, respectively used to train and test the target and shadow networks. The membership functions are defined as in section 2, with \(\mathcal{D}^{\text{target}}:=\mathcal{D}_{\text{train}}^{\text{target}}\cup\mathcal{D}_{\text{test}}^{\text{target}}\) and \(\mathcal{D}^{\text{shadow}}:=\mathcal{D}_{\text{train}}^{\text{shadow}}\cup\mathcal{D}_{\text{test}}^{\text{shadow}}\). For the target and shadow network, among their 15000 training data points, 1000 are randomly chosen and fixed for all our experiments as a validation set (used to tune the hyperparameters, and for the stopping criterion).

Training of the target and shadow models. The target and shadow networks have a ResNet-20 architecture [7] (272474 parameters). They are trained to minimize the cross-entropy loss by stochastic gradient descent (with \(0.9\) momentum and no Nesterov acceleration) on their respective training sets for 300 epochs, with a batch size of 256. The dataset is augmented with random horizontal flipping and random cropping. The initial learning rate is divided by 10 after 150 and after 225 epochs. The weights of the neural networks are initialized with the standard method in PyTorch, following a uniform distribution on \((-1/\sqrt{n},1/\sqrt{n})\) where \(n\) is the input dimension for a linear layer, and \(n\) is input dimension \(\times\) kernel width \(\times\) kernel height for a convolution. Values of the initial learning rate and weight decay are reported in Table 1. Note that the chosen hyperparameters allow us to reproduce the results of [7] when using the whole 50000 training images of CIFAR-10 instead of 15000 of them as is done for the target and shadow networks. For IMP, 24 prunings and readjustments of the parameters are performed. Each readjustment is done with the same training procedure as above (300 epochs, etc.). Before each pruning, the weights are rewound to the values they had at the end of the epoch of maximum validation accuracy in the last \(300\) epochs. For training ResNet-20 with the butterfly structure, the original weight matrices of some convolution layers are substituted by matrices admitting a butterfly factorization, with a number \(L=2\) or 3 of factors, following a monotone chain minimizing the number of parameters in the factorization, as described in section 3.2. The substituted layers are those of the \(S=1,2\) or 3 last segments3 of ResNet-20. Footnote 3: A segment is three consecutive basic blocks with the same number of filters. A basic block is two convolutional layers surrounded by a residual connection.
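The IMP schedule with rewinding described above can be sketched as follows; `train_fn` is a placeholder for the full 300-epoch run returning the model at its best-validation epoch, and the 20% per-round rate is our reading of Table 1's \(\approx 100\times(0.8)^{i}\,\%\) entry, not a value stated in the text:

```
import copy
import torch

def imp(model, train_fn, rounds=24, rate=0.2):
    # Iterative magnitude pruning with weight rewinding (sketch).
    # train_fn(model) runs the 300-epoch schedule, keeps masked weights at
    # zero, and returns the model at its epoch of best validation accuracy.
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters()}
    for _ in range(rounds):
        best = train_fn(model)
        rewind = copy.deepcopy(best.state_dict())
        # magnitudes of the weights that are still alive (all parameter
        # tensors are pooled and treated alike here, for brevity)
        alive = torch.cat([p.abs()[masks[n] > 0].flatten()
                           for n, p in best.named_parameters()])
        threshold = alive.kthvalue(max(1, int(rate * alive.numel()))).values
        for n, p in best.named_parameters():
            masks[n] *= (p.abs() > threshold).float()  # prune smallest `rate`
        model.load_state_dict(rewind)                  # rewind, then re-mask
        for n, p in model.named_parameters():
            p.data *= masks[n]
    return model, masks
```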
Discriminator training. A discriminator takes as inputs the class \(i\) of \(x\), the prediction \(\mathcal{R}(x)\) made by a network \(\mathcal{R}\) (target or shadow), as well as \(\frac{1}{\epsilon}\mathbb{E}\left(\left|\mathcal{R}(x)-\mathcal{R}(x+\epsilon\mathcal{N})\right|\right)\) (\(\epsilon=0.001\) and \(\mathcal{N}\) an independent centered and reduced Gaussian vector) that encodes local first-order information of \(\mathcal{R}\) around \(x\). The expectation is estimated by averaging over 5 samples. For each pair of networks (\(\mathcal{R}^{\text{target}}\), \(\mathcal{R}^{\text{shadow}}\)), three discriminators (perceptrons) are trained, with respectively 1, 2, 3 hidden layer(s) and 30, 30, 100 neurons on each hidden layer. The binary cross entropy is minimized with Adam for 80 epochs, without weight decay and for three different learning rates \(\{0.01,0.001,0.0001\}\).

\begin{table} \begin{tabular}{|c|c|c|c|} \hline Networks & \% of nonzero params & Initial learning rate & Weight decay \\ \hline ResNet-20 dense & 100 \% & 0.03 & 0.005 \\ Butterfly (\(S=1,L=2\)) & 32.3 \% & 0.3 & 0.0005 \\ Butterfly (\(S=1,L=3\)) & 29.6 \% & 0.3 & 0.0001 \\ Butterfly (\(S=2,L=2\)) & 15.9 \% & 0.3 & 0.0005 \\ Butterfly (\(S=2,L=3\)) & 12.9 \% & 0.1 & 0.001 \\ Butterfly (\(S=3,L=2\)) & 11.8 \% & 0.3 & 0.0005 \\ Butterfly (\(S=3,L=3\)) & 8.5 \% & 0.1 & 0.001 \\ IMP with \(i\) prunings & \(\approx 100\times(0.8)^{i}\,\%\) & 0.03 & 0.005 \\ \hline \end{tabular} \end{table} Table 1: Hyperparameters for the training of the target and the shadow neural networks.

Figure 3: Supports in a butterfly factorization of size \(N=16\).

Accuracy and defense. The _accuracy_ of a network is the percentage of data whose class is the one predicted with the highest probability by the network. The _defense_ \(D\) of a network against a discriminator is defined as \(D=200-2A\) where \(A\) is the accuracy of the discriminator on the membership classification task associated with the training and test data of the considered network. For example, if a discriminator has an attack accuracy \(A=50+x\), then the defense is \(D=100-2x\). In our case, there are as many training as testing data points for each network (target or shadow). Ideally, the discriminator should not do better than guessing randomly, having then an accuracy of \(50\%\).

Results. Dense target and shadow networks achieve on average \(87.5\%\) accuracy on the test set. This accuracy decreases with sparsity, see Figure 1. A gain (or loss) in defense is considered significant if the interval whose upper (resp. lower) bound is the mean plus (resp. minus) one standard deviation is disjoint from the corresponding interval of the trained dense network. A significant gain (or loss) in defense is only observed for a proportion of nonzero weights between \(0\%\) and \(17.3\%\), and between \(41.4\%\) and \(51.5\%\). Between \(3.4\%\) and \(17.3\%\), a relative loss of \(p\%\) in accuracy, compared to the trained dense network, leads to a relative gain of \(3.6\times p\%\) in defense: \(3.6\simeq\frac{|\text{defense}-\text{defense}_{\text{dense}}|}{\text{defense}_{\text{dense}}}\cdot\frac{\text{accuracy}_{\text{dense}}}{|\text{accuracy}-\text{accuracy}_{\text{dense}}|}\).
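For reference, the discriminator input features described at the beginning of this section can be sketched as follows (our illustration; `query` is a placeholder for the black-box access \(x\mapsto\mathcal{R}(x)\)):

```
import torch

@torch.no_grad()
def attack_features(query, x, y, eps=1e-3, n_samples=5):
    # Discriminator inputs for a batch: one-hot class of x, the black-box
    # prediction R(x), and the local sensitivity (1/eps) * E|R(x+eps*N) - R(x)|
    # estimated by averaging over n_samples Gaussian perturbations.
    base = query(x)                                 # R(x)
    sens = torch.zeros_like(base)
    for _ in range(n_samples):
        noise = torch.randn_like(x)                 # centered, reduced Gaussian
        sens += (query(x + eps * noise) - base).abs()
    sens /= n_samples * eps
    onehot = torch.nn.functional.one_hot(y, num_classes=base.shape[1]).float()
    return torch.cat([onehot, base, sens], dim=1)   # fed to the perceptron
```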
Related work on sparsity as a defense mechanism. Experimental results from [20] suggest, on the contrary, that training a network with sparse regularization from IMP _degrades_ privacy. But these results were not averaged over multiple experiments to reduce the variability due to randomness. The experiments of [20] are also performed on CIFAR-10 but with a model with 40 times as many weights as ResNet-20, and for a proportion of nonzero weights above \(50\%\). Given the standard deviations observed in Figure 1 for sparsity levels above \(20\%\) on ResNet-20, one should remain cautious about the interpretation of the results of [20]. [18] also showed recently that decreasing the number of parameters of a model can improve defense against MIAs. This is complementary to the present article. Note however that the ways the number of parameters is reduced are fundamentally different, since [18] considers smaller _dense networks_ while, here, _sparse subnetworks_ are considered. These types of networks may not have the same privacy-accuracy trade-off. Given a sparsity level, [19] looks for the parameters that minimize the loss function of the learning problem, penalized by the highest MIA attack accuracy achievable against these parameters. Note that this penalty term is in general not explicitly computable, and difficult to minimize. Moreover, this requires knowing in advance the type of attack that targets the network, e.g., the architecture of the attacker, etc. No comparison with the non-penalized case has been proposed in [19], which makes it unclear whether this penalization is necessary to improve privacy or if sparsity _without additional penalization_ is sufficient. In contrast, our experiments do suggest the latter. Moreover, [19] only displays the defense achieved at the sparsity level with the smallest penalized loss function. In comparison, Figure 1 shows the robustness to MIAs for a whole range of different sparsity levels. Finally, it has been observed that enforcing sparsity during the training of neural networks with DP-SGD ("Differentially Private Stochastic Gradient Descent") [1, 2] improves the accuracy, compared to the dense network, while keeping the same (strong) guarantees of Differential Privacy [9, 2]. However, compared to SGD, DP-SGD suffers from a performance drop and a high computational demand that is prohibitive for large-scale experiments [15, 10]. In contrast, the privacy enhancement investigated in this work comes at a lower cost (in both accuracy and resources) but does not provide any theoretical guarantee.

## 5 Conclusion

The results obtained support the following conjecture: sparsity is a defense mechanism against membership inference attacks, as it reduces the effectiveness of attacks at a relatively low cost in network accuracy. This is in particular the case for structured butterfly sparsity, which had not yet been investigated in this context to the best of our knowledge. Extending the experiments to a richer class of models, datasets and attacks would support the interest of sparsity as a defense mechanism. In the future, sparsity could serve as a baseline to decrease privacy threats since it comes at a lower computational cost than methods providing strong theoretical guarantees such as DP-SGD, does not require knowing the kind of attack in advance, allows for fast matrix-vector multiplication when using structured sparsity such as the butterfly structure, and, compared to a penalized loss where the attacker could infer the typical behaviour of the model on training data [17], it may not lead to biases easily exploitable by an attacker.
2305.08021
TIPS: Topologically Important Path Sampling for Anytime Neural Networks
Anytime neural networks (AnytimeNNs) are a promising solution to adaptively adjust the model complexity at runtime under various hardware resource constraints. However, the manually-designed AnytimeNNs are biased by designers' prior experience and thus provide sub-optimal solutions. To address the limitations of existing hand-crafted approaches, we first model the training process of AnytimeNNs as a discrete-time Markov chain (DTMC) and use it to identify the paths that contribute the most to the training of AnytimeNNs. Based on this new DTMC-based analysis, we further propose TIPS, a framework to automatically design AnytimeNNs under various hardware constraints. Our experimental results show that TIPS can improve the convergence rate and test accuracy of AnytimeNNs. Compared to the existing AnytimeNNs approaches, TIPS improves the accuracy by 2%-6.6% on multiple datasets and achieves SOTA accuracy-FLOPs tradeoffs.
Guihong Li, Kartikeya Bhardwaj, Yuedong Yang, Radu Marculescu
2023-05-13T22:58:25Z
http://arxiv.org/abs/2305.08021v2
# TIPS: Topologically Important Path Sampling for Anytime Neural Networks

###### Abstract

Anytime neural networks (AnytimeNNs) are a promising solution to adaptively adjust the model complexity at runtime under various hardware resource constraints. However, the manually-designed AnytimeNNs are biased by designers' prior experience and thus provide sub-optimal solutions. To address the limitations of existing hand-crafted approaches, we first model the training process of AnytimeNNs as a discrete-time Markov chain (DTMC) and use it to identify the paths that contribute the most to the training of AnytimeNNs. Based on this new DTMC-based analysis, we further propose _TIPS_, a framework to automatically design AnytimeNNs under various hardware constraints. Our experimental results show that TIPS can improve the convergence rate and test accuracy of AnytimeNNs. Compared to the existing AnytimeNNs approaches, TIPS improves the accuracy by 2%-6.6% on multiple datasets and achieves SOTA accuracy-FLOPs tradeoffs.

## 1 Introduction

In recent years, deep neural networks (DNNs) have been successful in many areas, such as computer vision or natural language processing (Vaswani et al., 2017; Dosovitskiy et al., 2021). However, the intensive computational requirements of existing large models limit their deployment on resource-constrained devices for Internet-of-Things (IoT) and edge applications. To improve the hardware efficiency of DNNs, multiple techniques have been proposed, such as quantization (Qin et al., 2020; Han et al., 2016), pruning (Luo et al., 2017; Han et al., 2015), knowledge distillation (Hinton et al., 2015), and neural architecture search (NAS) (Zoph & Le, 2017; Liu et al., 2019; Stamoulis et al., 2019; Li et al., 2020; 2023). We note that all these techniques focus on generating _static_ neural architectures that can achieve high accuracy under specific hardware constraints. Recently, anytime neural networks (AnytimeNNs) have been proposed as an orthogonal direction to static neural networks (Huang et al., 2018; Yu & Huang, 2019; Bengio et al., 2015; Wang et al., 2021; Yang et al., 2021). AnytimeNNs adjust the model size at runtime by selecting sub-networks from a static supernet (Chen et al., 2019; Li et al., 2019; Yu & Huang, 2019; Yu et al., 2019). Compared to the static techniques, AnytimeNNs can automatically adapt (at runtime) the model complexity based on the available hardware resources. However, the existing AnytimeNNs are manually designed by selecting a few candidate subnetworks. Hence, such hand-crafted AnytimeNNs are likely to miss the subnetworks that can offer better performance. These limitations of existing manual design approaches motivate us to analyze the properties of AnytimeNNs and then provide a new algorithmic solution. Specifically, in this work, we address two **key questions:**

1. _How can we quantify the importance of various operations (e.g., convolutions, residual additions, etc.) to the convergence rate and accuracy of AnytimeNNs?_
2. _Are there topological (i.e., related to network structure) properties that can help us design better AnytimeNNs?_

To answer these questions, we analyze the AnytimeNNs from a _graph theory_ perspective. This idea is motivated by the observation that the topological features of DNNs can accurately indicate their gradient propagation properties and test performance (Bhardwaj et al., 2021; Li et al., 2021).
Inspired by the network structure analysis, given an AnytimeNN, we propose a Discrete-Time Markov Chain (DTMC)-based framework to explore the relationships among different subnetworks. We then propose two new topological metrics, namely _Topological Accumulated Score_ (TAS) and _Topological Path Score_ (TPS) to analyze the gradient properties of AnytimeNNs. Based on these two metrics, we finally propose a new training method, _i.e., Topologically Important Path Sampling_ (TIPS), to improve the convergence rate and test performance of AnytimeNNs. The experimental results show that our proposed approach outperforms SOTA approaches by a significant margin across many models and datasets (see Fig. 1). Overall, we make the following **key contributions:**

* We propose a new importance analysis framework by modeling the AnytimeNNs as DTMCs; this enables us to capture the relationships among different subnetworks of AnytimeNNs.
* Based on the DTMC-based framework, we propose two new topological metrics, _Topological Accumulated Score_ (TAS) and _Topological Path Score_ (TPS), which can characterize the operations that contribute the most to the training of AnytimeNNs.
* We propose a new theoretically-grounded training strategy for AnytimeNNs, namely, _Topologically Important Path Sampling_ (TIPS), based on our importance analysis framework. We show that TIPS achieves a faster convergence rate compared to SOTA training methods.
* We demonstrate that TIPS enables the automatic design of better AnytimeNNs under various hardware constraints. Compared to existing AnytimeNN methods, TIPS improves the accuracy by 2%-6.6% and achieves SOTA accuracy-FLOPs tradeoffs on multiple datasets, under various hardware constraints (see Fig. 1).

The rest of the paper is organized as follows. In Section 2, we discuss related work. In Section 3, we formulate the problem and introduce our proposed solution (TIPS). Section 4 presents our experimental results and outlines directions for future work. Finally, Section 5 concludes the paper.

## 2 Related Work

There are three major directions related to our work:

### 2.1 Anytime Inference

Anytime neural networks (AnytimeNNs) can adapt the model complexity at runtime to various hardware constraints; this is achieved by selecting the optimal subnetworks of a given (static) architecture (_supernet_), while maintaining the test accuracy. The runtime adaptation of AnytimeNNs is primarily driven by the available hardware resources (Yuan et al., 2020). For instance, early-exit networks can stop the computation at some intermediate layers of the supernet and then use individual output layers to get the final results (Wang et al., 2018; Veit and Belongie, 2018; Bolukbasi et al., 2017). Similarly, skippable networks can bypass several layers at runtime (Wang et al., 2020; Larsson et al., 2017; Wu et al., 2018). Alternatively, approaches for slimmable networks remove several channels of some layers at runtime (Lee and Shin, 2018; Bejnordi et al., 2020; Yang et al., 2018; Hua et al., 2019; Li et al., 2021; Chin et al., 2021; Gao et al., 2019; Tang et al., 2021). Finally, multi-branch networks select the suitable branches of networks to reduce the computation workload to fit the current hardware constraints (Cai et al., 2021; Ruiz and Verbeek, 2021; Huang et al., 2018; Liu et al., 2020).

### 2.2 Layerwise Dynamical Isometry (LDI)

LDI is meant to quantify the gradient flow properties of DNNs (Saxe et al., 2014; Xiao et al., 2018; Burkholz and Dubatovka, 2019).
For a deep neural network, let \(x_{i}\) be the output of layer \(i\); the Jacobian matrix of layer \(i\) is defined as: \(J_{i,i-1}=\frac{\partial x_{i}}{\partial x_{i-1}}\). Authors of (Lee et al., 2020) show that if the singular values of \(J_{i,i-1}\) for all \(i\) at initialization are close to 1, then the network satisfies the LDI, and the magnitude of the gradient does not vanish or explode, thus benefiting the training process.

### 2.3 Network Topology

Previous works show that the topological properties can significantly impact the convergence rate and test performance of deep networks. For example, by modeling deep networks as graphs, authors in (Bhardwaj et al., 2021) prove that the average node degrees of deep networks are highly correlated with their convergence speeds. Lately, (Chen et al., 2022) developed a similar understanding of how neural networks' connectivity patterns affect their trainability. Moreover, several works also show that some specific topological properties of deep networks can indicate their test accuracy (Li et al., 2021; Javaheripi et al., 2021). We note that these existing approaches primarily focus on networks with a static structure. The relationship between topological properties and the convergence/accuracy of networks with varying architectures (_e.g._, AnytimeNNs) remains an open question. This motivates us to investigate the topological properties of AnytimeNNs.

Figure 1: Test Accuracy vs. FLOPs on ImageNet. TIPS achieves higher accuracy (given the same or even fewer FLOPs) than SOTA AnytimeNNs: Joslim (Chin et al., 2021) and US-Nets (Yu and Huang, 2019).

## 3 Approach

In this work, for a given deep network (_i.e._, supernet), our goal is to automatically find its AnytimeNN version under various hardware constraints. To this end, our approach consists of three major steps: (_i_) Characterize the importance of each operation (convolution, residual addition, _etc._). As such, we model the training process of AnytimeNNs as a DTMC and use it to analyze their topological properties (Fig. 2). (_ii_) Based on this importance analysis, we then propose a new training strategy (TIPS) to improve the accuracy of AnytimeNNs. (_iii_) Finally, we search for the AnytimeNNs under various hardware constraints. Next, we discuss these steps in detail.

### 3.1 Modeling AnytimeNNs as Markov Chains

#### 3.1.1 Modeling AnytimeNNs as Graphs

As shown in Fig. 2(a), we model various DNN operations (convolutions, residual additions, _etc._) as _edges_, and model the outputs of such operations (_i.e._, featuremaps) as _nodes_ in a graph. This way, a given architecture (supernet) is represented by a static undirected graph \(G=(V,E)\), where \(V\) is the set of nodes (with \(|V|=N\)) and \(E\) is the set of edges between nodes (with \(|E|=M\)). For a given DNN architecture, its corresponding AnytimeNNs select suitable subnetworks under the current hardware constraints. Specifically, these _subnetworks_ \(G_{k}\) are obtained by sampling edges from the initial supernet \(G\): \[G_{k}=(V,E_{k}),\ E_{k}\subseteq E \tag{1}\] where the node set \(V\) is the same for all subnetworks, but different subnetworks \(G_{k}\) have different edge sets \(E_{k}\) (see Fig. 2(a)). To ensure that a sampled subnetwork is valid, we always sample the input, output, and down-sample layers (_e.g._, layers with pooling or stride=2); for the other layers, we first randomly decide whether or not each layer remains in the subnetwork, and we also ensure that #channels in consecutive layers match.
As shown in Fig. 2(a), based on the topology of \(G_{k}\), we can construct the adjacency matrix \(\mathbf{A}_{k}\in R^{N\times N}\) for a subnetwork as follows: \[\begin{cases}\mathbf{A}_{k}(s,t)=0,&\text{if }(s,t)\notin E_{k}\\ \mathbf{A}_{k}(s,t)=1,&\text{if }(s,t)\in E_{k}\\ \mathbf{A}_{k}(s,t)=1,&\text{if }s=t=1\text{ or }s=t=N\end{cases} \tag{2}\] where each edge \((s,t)\) corresponds to an operation in the given network. The intuition behind the values of \(\mathbf{A}_{k}(1,1)\) and \(\mathbf{A}_{k}(N,N)\) is that the computation always starts from the input/output layer in the forward/backward path. We note that our objective is to analyze the _layer-wise_ gradient properties of AnytimeNNs. Since the singular values of each layer's Jacobian are designed to be around 1 by commonly used initialization schemes (_e.g._, by maintaining uniform gradient variance at all layers), it is reasonable to assign '1' as the weight of each edge (_i.e._, operation) in Eq. 2 if it appears in \(A_{k}\). More details are given in Appendix C.

Figure 2: Overview of our proposed DTMC-based analysis. (**a**) We model the operations (_e.g._, 1\(\times\)1-Conv) as edges; we model the outputs of such operations (_i.e._, featuremaps) as nodes (_e.g._, \(r,v_{2}\)); here \(r\) and \(d\) are the input and output nodes of the supernet \(G\), respectively. By removing some edges from supernet \(G\), we generate multiple subnetworks \(\{G_{k},\ k=1,...,T\}\). (**b**) We then build the adjacency matrices \(\{\mathbf{A}_{k},\ k=1,...,T\}\) for each subnetwork. We combine these adjacency matrices and the inter-subnetwork coupling matrices \(\{\widetilde{\mathbf{Z}}_{i,j},\ i\neq j\}\) to form the hyper-adjacency matrix \(\hat{\mathbf{H}}(\lambda)\). (**c**) By solving Eqs. (6), (7), (8), we get the TAS value of each node. After finding the path with the highest TPS value by Eq. 9 (Path2), we characterize the important operations (_i.e._, edges) of the AnytimeNN. We provide more examples in Appendix C.

At each training step, we sample \(T\) subnetworks as shown in Fig. 2(a). Let \(\mathcal{L}\) denote the loss function (_e.g._, cross-entropy). Then, the loss for AnytimeNNs at each training step is calculated by passing the same batch of images through these \(T\) subnetworks (Huang et al., 2018; Hu et al., 2019): \[Loss=\sum_{k=1}^{T}\mathcal{L}(y,G_{k}(x)) \tag{3}\] where \(x\), \(y\), \(G_{k}(x)\) are the input, ground truth, and output of subnetwork \(G_{k}\), respectively. Eq. 3 shows that all these subnetworks in Fig. 2(a) share the same input data and use the accumulated loss from all of them to calculate the gradient during the backward propagation. Hence, all these subnetworks are highly coupled with each other.
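A short sketch of Eqs. 2-3 (the names are illustrative and not from the paper's code; `subnetworks` would hold the \(T\) sampled models):

```
import torch

def anytime_loss(subnetworks, x, y, criterion=torch.nn.functional.cross_entropy):
    # Eq. 3: the same batch goes through all T sampled subnetworks and the
    # per-subnetwork losses are accumulated before backpropagation.
    return sum(criterion(G_k(x), y) for G_k in subnetworks)

def adjacency(edges, N):
    # Eq. 2: N x N adjacency matrix of one subnetwork; edges = {(s, t), ...}
    A = torch.zeros(N, N)
    for s, t in edges:
        A[s, t] = A[t, s] = 1.0          # the graph is undirected
    A[0, 0] = A[N - 1, N - 1] = 1.0      # input/output self-loops (s = t = 1 or N)
    return A
```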
Inspired by the idea in (Taylor et al., 2019), as shown in Fig. 2(b), we integrate multiple subnetworks into a new hyper-graph to capture the coupling impacts among different subnetworks. Specifically, given a sequence of \(T\) subnetworks, each with \(N\) nodes, we construct a _hyper-adjacency matrix_ \(\hat{\mathbf{H}}(\lambda)\in R^{NT\times NT}\): \[\hat{\mathbf{H}}(\lambda)=\begin{bmatrix}\mathbf{A}_{1}&\widetilde{\mathbf{Z}}_{1,2}&\dots&\widetilde{\mathbf{Z}}_{1,T}\\ \widetilde{\mathbf{Z}}_{2,1}&\mathbf{A}_{2}&\dots&\widetilde{\mathbf{Z}}_{2,T}\\ \vdots&\vdots&\ddots&\vdots\\ \widetilde{\mathbf{Z}}_{T,1}&\dots&\dots&\mathbf{A}_{T}\end{bmatrix} \tag{4}\] where \(\widetilde{\mathbf{Z}}_{i,j}\in R^{N\times N}\) is the inter-subnetwork coupling matrix between _different_ subnetworks \(G_{i}\) and \(G_{j}\) as follows: \[\widetilde{\mathbf{Z}}_{i,j}=\lambda\mathbf{I},\;i\neq j\;,\;0<\lambda\leq 1 \tag{5}\] where \(\mathbf{I}\) is the identity matrix. _Remark_: On the one hand, the \(\mathbf{A}_{k}\) in \(\hat{\mathbf{H}}(\lambda)\) capture the connectivity pattern of each individual subnetwork. On the other hand, as shown in Fig. 2(a)(b), the \(\widetilde{\mathbf{Z}}_{i,j}\) in \(\hat{\mathbf{H}}(\lambda)\) capture the inter-subnetwork coupling effects between every pair of subnetworks by connecting the same nodes across different subnetworks1. Hence, our methodology does capture both _intra_- and _inter-subnetwork_ topological properties. This is crucial since AnytimeNNs have a variable network architecture (see more discussion in Section 4.5 and Appendix B.4). Footnote 1: The parameter \(\lambda\) controls the strength of the interactions between different subnetworks; see details in Sec 4.5.

#### 3.1.2 Building the DTMC for AnytimeNNs

In this work, we aim to identify the importance of each operation in AnytimeNNs. Inspired by the PageRank algorithm (Berkhin, 2005), we use the _hyper-adjacency matrix_ \(\hat{\mathbf{H}}(\lambda)\) to build the transition matrix \(\mathbf{P}\) of our DTMC. Specifically, we normalize the adjacency matrix \(\hat{\mathbf{H}}\) row by row: \[\mathbf{P}_{m,:}=\hat{\mathbf{H}}_{m,:}(\lambda)/(\sum\nolimits_{n}\hat{\mathbf{H}}_{m,n}(\lambda)) \tag{6}\] and obtain an irreducible, aperiodic, and homogeneous DTMC, which has a unique stationary state distribution (\(\mathbf{\pi}\)) (Hajek, 2015)2. The stationary distribution \(\mathbf{\pi}\) of such a DTMC has the following property: \[\mathbf{\pi}\mathbf{P}=\mathbf{\pi} \tag{7}\] Footnote 2: More details are given in Appendix B.1. Hence, we can solve Eq. 7 to obtain \(\mathbf{\pi}\) for our DTMC. Next, we use \(\mathbf{\pi}\) to analyze the nodes in \(\hat{\mathbf{H}}(\lambda)\). We denote \(\mathbf{\pi}(s)\) as the stationary probability of a state \(s\). Note that, as shown in Fig. 2(a), a node \(r\) appears in all the \(T\) sampled subnetworks, hence it appears \(T\) times in \(\hat{\mathbf{H}}(\lambda)\); each node \(r\) from the supernet \(G\) corresponds to \(T\) nodes \(\{r_{k},\;k=1,...,T\}\) in the DTMC within the \(T\) subnetworks \(\{G_{k},\;k=1,...,T\}\). For a given node \(r_{k}\) in the DTMC, we denote its stationary probability as \(\mathbf{\pi}(s_{r_{k}})\).

### 3.2 Topological & Gradient Properties of AnytimeNNs

To analyze the importance of nodes and paths in AnytimeNNs, we propose the following definitions: **Definition 1**.: **Topological Accumulated Score (TAS)** A _topological accumulated score_ of a node \(r\) from the supernet is its accumulated PageRank score across multiple subnetworks. For a given node \(r\) in \(V\), its TAS value \(\mu_{r}\) is: \[\mu_{r}=\sum_{k=1}^{T}\mathbf{\pi}(s_{r_{k}}) \tag{8}\] TAS quantifies the _accumulated probability_ that a node is selected within an AnytimeNN. Next, we use TAS to analyze the importance of various computation paths.
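Before moving on, here is a minimal NumPy sketch of Eqs. 4-8 (ours, not the paper's code) that assembles \(\hat{\mathbf{H}}(\lambda)\), row-normalizes it, and estimates \(\mathbf{\pi}\) by power iteration to obtain the TAS of every supernet node:

```
import numpy as np

def tas_scores(adjacencies, lam=1.0, iters=500):
    # adjacencies = [A_1, ..., A_T], each N x N (Eq. 2); lam is lambda in Eq. 5
    T, N = len(adjacencies), adjacencies[0].shape[0]
    H = np.zeros((N * T, N * T))
    for i, A in enumerate(adjacencies):
        H[i * N:(i + 1) * N, i * N:(i + 1) * N] = A          # diagonal blocks (Eq. 4)
        for j in range(T):
            if j != i:                                       # coupling blocks Z_ij
                H[i * N:(i + 1) * N, j * N:(j + 1) * N] = lam * np.eye(N)
    P = H / H.sum(axis=1, keepdims=True)                     # Eq. 6: row-normalize
    pi = np.full(N * T, 1.0 / (N * T))
    for _ in range(iters):                                   # power iteration for Eq. 7
        pi = pi @ P
    return pi.reshape(T, N).sum(axis=0)                      # Eq. 8: TAS per node
```

A path's TPS is then simply the sum of the TAS values of the nodes it traverses, as defined next.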
**Definition 2**.: **Topological Path Score (TPS)** In an AnytimeNN, we define a computation path \(l\) from a node \(r\) to a node \(d\), as a sequence of edges \(\{r\to v_{1}\rightarrow...\to v_{w}\to d\}\). The _topological path score_ \(TPS_{l}\) of a computation path \(l\) is the sum of the TAS values of _all_ nodes traversed in the path: \[TPS_{l}=\sum_{s\in\{r,v_{1},...,v_{w},d\}}\mu_{s} \tag{9}\] The above definitions and the LDI discussion in Section 2 enable us to propose our main result: **Proposition 3.1**.: _Consider an AnytimeNN initialized by a zero-mean i.i.d. distribution with variance \(q\). Given two computation paths \(l_{S}\) and \(l_{L}\) in this AnytimeNN with the same width \(w_{r}\) and number of nodes \(D\), we define \(w_{e}^{S}\) (\(w_{e}^{L}\)) as the average degree of \(l_{S}\) (\(l_{L}\)). Assuming \(q\leq\epsilon\), \(w_{e}^{S}\gg w_{r}\), and \(w_{e}^{L}\gg w_{r}\), then the mean singular values \(E[\sigma^{S}]\) and \(E[\sigma^{L}]\) of the Jacobian matrix for \(l_{S}\) and \(l_{L}\) satisfy:_ \[\text{if }TPS_{l_{S}}\leq TPS_{l_{L}}\text{, then }E[\sigma^{S}]\leq E[\sigma^{L}]\leq 1 \tag{10}\] _where \(\epsilon=\frac{1}{max(w_{e}^{S},w_{e}^{L})+w_{r}+2\sqrt{max(w_{e}^{S},w_{e}^{L})w_{r}}}\). That is, the mean singular value of the Jacobian for the computation path with higher TPS values is higher and closer to 1._ Proof.: Authors in (Bhardwaj et al., 2021) prove that for a given neural network initialized by a zero-mean i.i.d. distribution with variance \(q\), the mean singular value \(E[\sigma]\) of the Jacobian matrix from the network is bounded by the following inequality: \[\sqrt{qw_{e}}-\sqrt{qw_{r}}\leq E[\sigma]\leq\sqrt{qw_{e}}+\sqrt{qw_{r}} \tag{11}\] where \(w_{e}\) is the _average node degree_ or _effective width_ and \(w_{r}\) is the real width of the neural network. In Fig. 3, we give an example of how \(w_{e}\) and \(w_{r}\) are calculated in a neural network. We now use the above bounds to prove our main result. Let us first prove the right side of the inequality in Proposition 3.1. According to Eq. 11, for two computational paths \(l_{S}\) and \(l_{L}\), the mean singular values \(E[\sigma^{S}]\) and \(E[\sigma^{L}]\) of their Jacobian matrices are bounded by the following inequalities: \[\sqrt{qw_{e}^{S}}-\sqrt{qw_{r}}\leq E[\sigma^{S}]\leq\sqrt{qw_{e}^{S}}+\sqrt{qw_{r}} \tag{12}\] \[\sqrt{qw_{e}^{L}}-\sqrt{qw_{r}}\leq E[\sigma^{L}]\leq\sqrt{qw_{e}^{L}}+\sqrt{qw_{r}}\] Based on Eq. 12, we note that if the initialization variance \(q\) satisfies \[q\leq\frac{1}{max(w_{e}^{L},w_{e}^{S})+w_{r}+2\sqrt{max(w_{e}^{L},w_{e}^{S})w_{r}}} \tag{13}\] then, the mean singular value is always bounded by 1 for both \(l_{S}\) and \(l_{L}\); that is: \[E[\sigma^{S}]\leq 1\ \ \ \ \ E[\sigma^{L}]\leq 1 \tag{14}\] Inequality 14 proves the right side of the inequality in Proposition 3.1. Next, we prove the left side. Using Eq. 12, if \(w_{e}^{S}\gg w_{r}\) and \(w_{e}^{L}\gg w_{r}\), then the mean singular values of \(l_{L}\) and \(l_{S}\) are mainly determined by \(w_{e}^{L}\) and \(w_{e}^{S}\); that is: \[E[\sigma^{L}]=\sqrt{qw_{e}^{L}}\ \ \ \ E[\sigma^{S}]=\sqrt{qw_{e}^{S}} \tag{15}\] From Definition 1, we know that the TAS of a given node is the sum of its PageRank across the \(T\) subnetworks. As discussed in (Fortunato et al., 2006), under the mean-field approximation, the PageRank of a given node is linearly correlated to its node degree.
That is, for the \(i^{th}\) node on a computation path: \[\mu_{i}=\frac{k_{i}}{C} \tag{16}\] where \(k_{i}\) is the node degree of node \(i\), and \(C\) is a constant determined by the topology of the supernet3. Because both \(l_{S}\) and \(l_{L}\) are sampled from the same supernet, they share the same value of \(C\). Combining Eq. 16 with Definition 2, the TPS satisfies the following relation: \[TPS=\sum_{i=1}^{D}\mu_{i}=\frac{\sum_{i=1}^{D}k_{i}}{C} \tag{17}\] Footnote 3: A supernet is the given neural architecture that needs to be converted into its AnytimeNN version. Given the definition of the average degree \(w_{e}\), we rewrite Eq. 17 as follows: \[TPS=\frac{D\times w_{e}}{C}\Longrightarrow w_{e}=\frac{C\times TPS}{D} \tag{18}\] where \(D\) is the number of nodes for a given path. By combining Eq. 15 and Eq. 18, the mean singular value is determined by \(q\), \(C\), \(D\), and \(TPS\): \[w_{e}=\frac{C\times TPS}{D}\Longrightarrow E[\sigma]=\sqrt{\frac{q\times C\times TPS}{D}} \tag{19}\] Note that \(q\), \(C\), \(D\) have the same values for both \(l_{S}\) and \(l_{L}\). Hence, if \(TPS_{l_{S}}\leq TPS_{l_{L}}\), then \(E[\sigma^{S}]\leq E[\sigma^{L}]\). This proves the left side of the inequality in Proposition 3.1. Therefore, the inequality in Proposition 3.1 holds true for both the left and right sides. That is, the mean singular value of the Jacobian for a computation path with a higher TPS is higher and closer to 1. Moreover, the closeness of \(E[\sigma]\) to 1 is determined by the initialization variance \(q\), the constant \(C\), the values of \(TPS_{l_{S}}\) and \(TPS_{l_{L}}\), and the #nodes \(D\). This completes our proof of Proposition 3.1. Intuitively, Proposition 3.1 says that the computation paths with high TPS values satisfy the LDI property and the gradient magnitude through such paths would not vanish or explode, thus having a higher impact on AnytimeNNs training. We provide empirical results to verify this in the experiments section.

Figure 3: Two 2-layer MLPs with three neurons per layer; the MLP in **(b)** includes skip connections (purple) across layers, while the one in **(a)** does not. Both MLPs have a real width \(w_{r}\) of 3. The average degree \(w_{e}\) is the total number of links (weights and skip connections) divided by the total number of neurons (excluding input neurons). The MLP in **(a)** has a \(w_{e}\) of \(18/6=3\), and the one in **(b)** has a \(w_{e}\) of \(21/6=3.5\) due to the skip connections.

```
Input: Supernet \(G\), search steps \(m\)
Output: Pareto-optimal subnetworks set \(G_{P}\)
Search: Initialize \(G_{P}=\phi\)
for \(i=1\) to \(m\) do
    Sample subnetwork \(G_{i}\) from \(G\)
    Evaluate \(G_{i}\) and get its accuracy \(\Theta_{G_{i}}\)
    \(optimal=TRUE\)
    Initialize false-Pareto set \(G_{P_{out}}=\phi\)
    for \(G_{j}\) in \(G_{P}\) do
        if \(FLOPs_{G_{j}}\leq FLOPs_{G_{i}}\) and \(\Theta_{G_{j}}>\Theta_{G_{i}}\) then
            \(optimal=FALSE\)
        else if \(FLOPs_{G_{j}}\geq FLOPs_{G_{i}}\) and \(\Theta_{G_{j}}<\Theta_{G_{i}}\) then
            Add \(G_{j}\) to \(G_{P_{out}}\)
        end if
    end for
    if \(optimal\) then
        Add \(G_{i}\) to \(G_{P}\)
    end if
    Remove false Pareto-optimal: \(G_{P}=G_{P}\setminus G_{P_{out}}\)
end for
```
**Algorithm 1** Pareto-optimal subnetwork search

### 3.3 Topologically Important Path Sampling (TIPS)

Among the computation paths with the same number of nodes of an AnytimeNN, we define the operations (_i.e._, edges) along the path with the highest TPS value as _important operations_; the rest of the operations are deemed _unimportant operations_ (see Path2 in Fig. 2(c)).
According to Proposition 3.1, the path with higher TPS values has higher singular values of the Jacobian matrix. Hence, these important operations have a significant impact on the training process. Note that previous works treat all operations uniformly (Chin et al., 2021; Wang et al., 2018). Instead, in our approach, we modify the sampling process during training and use a higher probability to sample these important operations. We call this sampling strategy _Topologically Important Path Sampling (TIPS)_. More details are given in the experiments section.

### 3.4 Pareto-Optimal Subnetwork Search

After the TIPS-based training, we use Algorithm 1 to search for the Pareto-optimal subnetworks under various hardware constraints. To this end, we consider the number of floating-point operations (FLOPs) as a proxy for hardware resource constraints4. At runtime, one can select the proper subnetworks to quickly adapt to various hardware constraints; _e.g._, if the amount of currently available memory for a device drops below a threshold, we switch to a smaller subnetwork to meet the new memory budget. Footnote 4: In practice, this can be easily replaced by some other hardware resource, such as memory or power consumption.

### 3.5 Summary of Our Approach

In brief, our method consists of the following steps:

* **Step 1: TPS analysis** (Fig. 2) We sample subnetworks and exploit TPS (our DTMC-based metric for AnytimeNNs) to identify important operations.
* **Step 2: AnytimeNN training** We use TIPS by assigning a higher sampling probability to the important operations (as given by TPS) to _train_ AnytimeNNs.
* **Step 3: Pareto-optimal search** Before model deployment, we do an _offline_ search under _various_ hardware constraints. We store the full supernet and #channel configurations of the obtained subnetworks.

_Remarks_: Our framework involves two steps where we perform sampling. In Step 1, we conduct the TAS and TPS analysis without knowing the importance of each operation. Hence, to build the DTMC with the sampled subnetworks (Section 3.1), we uniformly sample these operations and ensure each operation is selected at least once during this stage. Once we compute the TAS and TPS values, we can identify the important and unimportant operations in a given supernet (Sections 3.2 and 3.3). During Step 2 (i.e., AnytimeNN training), operations are not sampled uniformly. Instead, important operations are sampled with a higher probability compared to unimportant operations. We provide more details in Section 4.3. After Steps 1-3, _at runtime_, we use the best subnetwork configurations under various budgets. We provide the storage overhead and time efficiency analysis in Appendix B.8. In terms of **time cost**: Step 1 takes only 39 seconds on a Xeon CPU for MobileNet-v2. Step 2 takes 97 hours on an 8-GPU server for 150 epochs, and Step 3 takes 8 minutes on an RTX3090 GPU. We note that Step 1 has only a negligible time cost compared to AnytimeNN training. Moreover, Steps 1-3 are conducted offline and, hence, they result in zero overhead for the online inference.

## 4 Experimental Results

### 4.1 Experimental Setup

In this section, we present the following experiments: (_i_) Verification of Proposition 3.1, (_ii_) Validation of TIPS, and (_iii_) Model complexity vs. accuracy results.
For the experiments (_i_) on Proposition 3.1, we build several MLP-based supernets on the MNIST dataset by stacking several linear layers with 80 neurons (and adding residual connections between each two consecutive layers), then verify our Proposition 3.1 on these supernets. For the experiments (_ii_) on TIPS, we use TPS and TAS to identify the importance of various operations in several networks (MobileNet-v2, ResNet, WideResNet, ResNext, and EfficientNet) for the ImageNet dataset. We also compare the training convergence of our proposed TIPS strategy against the previous SOTA methods (Chin et al., 2021; Yu and Huang, 2019) with the exact same setup (_i.e._, same data augmentation, optimizer, and learning rate schedule). More training details are given in Appendix B.2. Finally, for experiments (_iii_), we take the MobileNet-v2 and ResNet34 trained with TIPS as supernets, and then search for Pareto-optimal subnetworks. We compare the accuracy-FLOPs tradeoffs of the obtained subnetworks with various training strategies.

### 4.2 Verification of Proposition 3.1

To empirically validate Proposition 3.1, we consider several supernets with 80, 100, 120, and 140 layers (along with residual connections). We then randomly sample 8 subnetworks from these supernets and use Eq. 2, Eq. 4, and Eq. 6 to build the DTMC. After solving Eq. 7, we calculate the TAS for each node (_i.e._, output of various operations). Next, we set the path length to 50 as an example, then randomly sample multiple computation paths with 50 nodes from these supernets and calculate the corresponding {TPS, mean singular value} pairs. As shown in Fig. 5(a), for a supernet with a specific depth (_e.g._, 80 layers), higher TPS values always lead to higher mean singular values (closer to 1). These results empirically validate our Proposition 3.1.

### 4.3 Validation of TIPS

In order to verify the effectiveness of our topological analysis, we first explore the relationship between the _important operations_ and the test accuracy for various networks. To this end, before training, we first use our DTMC-based framework to obtain the TAS for each node (Eq. 8). Next, among all the computation paths from input to output in the supernet, we find the path that has the highest TPS value (Eq. 9); we mark all operations along this path as important operations. Then, we prune the output channels of each operation _individually_ (with pruning ratios ranging from 1% to 99%), without pruning the channels of any other operation in the network. Meanwhile, we measure the test accuracy of the network after each pruning step. This way, we can analyze the impact of each operation on the test accuracy of a given network. Note that we prune the last channels first. For example, to prune a convolution layer with 64 (0-63) channels with a pruning ratio of 75%, we directly set the output of the last 75% (16-63) channels to zero. As shown in Fig. 4, for various pruning ratios, pruning the important operations leads to a much higher accuracy drop than pruning the unimportant ones. These experimental results show that the important operations found by our framework have a significant impact on the test accuracy of AnytimeNNs. Therefore, the proposed TAS and TPS metrics clearly identify the important operations and computation paths in AnytimeNNs. Next, we evaluate the impact of TIPS on the _training convergence_ of the AnytimeNNs.
For this experiment, we train MobileNet-v2 with width-multiplier 1.4 on the ImageNet dataset with (_i_) SOTA training strategies: Joslim, US-Nets, and DS-Net (Chin et al., 2021; Yu and Huang, 2019; Li et al., 2021), and (_ii_) our proposed TIPS. As explained in Section 3.2, a higher sampling probability for important operations helps more with the training process. However, if the sampling probability for important operations is too high, it hurts the diversity of sampled subnetworks. In the extreme case, we always end up sampling and training only the important operations, while the unimportant ones never get sampled and trained; this can hurt the test accuracy of AnytimeNNs. Hence, for our proposed TIPS strategy, we use a 50% higher sampling probability for important operations compared to unimportant operations. For example, if 40% of the output channels of unimportant operations are sampled, then 60% of the output channels of important operations are sampled (since 40%\(\times\)(1+0.5)=60%). We note that previous methods (_e.g._, Joslim and US-Nets) use a uniform sampling for every subnetwork, _i.e._, the same sampling probability for all operations during the training process. In contrast, TIPS focuses more on important operations, thus improving the LDI properties.

Figure 4: Pruning ratio of important and unimportant operations (as identified by TAS/TPS) vs. mean test accuracy on ImageNet (std. dev. shown with shade) on EfficientNet-B0, ResNet18 and MobileNet-v2. More results are given in Fig. 8 in Appendix A.

As shown in Fig. 5(b,c), by changing the sampling strategy, our TIPS-based training achieves a much faster training convergence for the supernet compared to Joslim and US-Nets. Hence, this validates that TIPS results in better trainability of the supernet.

### 4.4 Pareto-Optimal AnytimeNN Search

We use Algorithm 1 to search for Pareto-optimal subnetworks under various hardware constraints for MobileNet-v2 and ResNet34. After the search, we evaluate the obtained Pareto-optimal subnetworks and get their real test accuracy. Table 1 demonstrates that our proposed TIPS achieves significantly higher accuracy than SOTA given similar FLOPs for ImageNet on MobileNet-v2. For example, assuming the hardware constraint is 500M FLOPs, TIPS has a 1.4%-2% higher accuracy on ImageNet than the SOTA; on Tiny-ImageNet with an 80M FLOPs budget, TIPS has 6.1%-6.6% higher accuracy than SOTA. Moreover, as shown in Table 2, given the ResNet34 supernet with a 3.6G FLOPs budget on ImageNet, TIPS achieves 1.4%, 1.5%, and 1.9% higher test accuracy than Joslim, US-Nets and DS-Net, respectively (Chin et al., 2021; Yu and Huang, 2019; Li et al., 2021). On the CIFAR100 dataset with a 120M FLOPs budget, TIPS has 1.5%, 2.9%, and 4.2% higher accuracy than Joslim, US-Nets and DS-Net, respectively.
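Returning to the sampling rule described at the beginning of this subsection, a minimal sketch of the biased width sampling (our illustration; the uniform base range is an assumption, not a value from the paper):

```
import numpy as np

def sample_widths(ops, important_ops, boost=0.5, base_range=(0.25, 1.0)):
    # For each operation, draw a channel ratio; operations on the highest-TPS
    # path get a 50% higher sampling ratio, capped at 1.0 (e.g., 40% -> 60%).
    widths = {}
    for op in ops:
        ratio = np.random.uniform(*base_range)
        if op in important_ops:
            ratio = min(1.0, ratio * (1.0 + boost))
        widths[op] = ratio
    return widths
```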
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{7}{|c|}{**CIFAR100**} \\ \hline FLOPS & 20M & 30M & 35M & 40M & 45M & 50M \\ \hline US-Nets & 61.5 & 62.9 & 64.8 & 65.5 & 65.6 & 66.5 \\ \hline Joslim & 62.0 & 62.7 & 63.1 & 63.7 & 64.1 & 65.0 \\ \hline DS-Net & 61.8 & 63.8 & 64.8 & 65.3 & 65.5 & 66.7 \\ \hline **TIPS (Ours)** & **66.4** & **66.9** & **67.0** & **67.6** & **67.7** & **68.2** \\ \hline \hline \multicolumn{7}{|c|}{**Tiny-ImageNet**} \\ \hline FLOPS & 80M & 120M & 140M & 160M & 180M & 200M \\ \hline US-Nets & 47.0 & 47.3 & 48.3 & 49.0 & 50.2 & 51.4 \\ \hline Joslim & 47.4 & 47.9 & 48.7 & 49.5 & 50.3 & 50.7 \\ \hline DS-Net & 46.9 & 47.4 & 48.1 & 48.7 & 50.3 & 50.8 \\ \hline **TIPS (Ours)** & **53.5** & **53.8** & **54.0** & **54.4** & **54.9** & **55.1** \\ \hline \hline \multicolumn{7}{|c|}{**ImageNet**} \\ \hline FLOPS & 260M & 320M & 400M & 450M & 500M & 600M \\ \hline US-Nets & 70.6 & 71.6 & 71.8 & 72.1 & 72.3 & 72.9 \\ \hline Joslim & 70.8 & 71.9 & 72.5 & 72.7 & 72.9 & 73.4 \\ \hline DS-Net & 70.6 & 72.1 & 72.5 & 72.6 & 73.0 & 73.3 \\ \hline **TIPS (Ours)** & **71.8** & **73.2** & **73.6** & **74.0** & **74.3** & **74.7** \\ \hline \end{tabular} \end{table} Table 1: Comparison of Top-1 test accuracy vs. FLOPS (Million [M]) with SOTA training methods on MobileNet-v2. The best results are shown with bold fonts. The results are averaged over three runs. The std. dev. values are given in Table 4 in Appendix B.3.

\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline \multicolumn{7}{|c|}{**CIFAR100**} \\ \hline FLOPS & 120M & 180M & 200M & 220M & 240M & 260M \\ \hline US-Nets & 63.1 & 63.9 & 64.4 & 64.8 & 65.0 & 65.4 \\ \hline Joslim & 65.8 & 66.2 & 66.7 & 67.0 & 67.3 & 67.4 \\ \hline DS-Net & 64.4 & 65.9 & 66.2 & 66.4 & 66.5 & 66.6 \\ \hline **TIPS (Ours)** & **67.3** & **67.4** & **67.8** & **67.9** & **68.1** & **68.2** \\ \hline \hline \multicolumn{7}{|c|}{**Tiny-ImageNet**} \\ \hline FLOPS & 130M & 190M & 220M & 250M & 270M & 300M \\ \hline US-Nets & 42.9 & 43.2 & 44.3 & 44.7 & 44.9 & 45.2 \\ \hline Joslim & 44.9 & **45.0** & 45.3 & 45.4 & 45.5 & 45.8 \\ \hline DS-Net & 41.8 & 43.0 & 43.8 & 43.9 & 44.1 & 44.2 \\ \hline **TIPS (Ours)** & **44.1** & 44.6 & **45.4** & **45.8** & **45.9** & **46.0** \\ \hline \hline \multicolumn{7}{|c|}{**ImageNet**} \\ \hline FLOPS & 1.5G & 2.2G & 2.8G & 3.0G & 3.2G & 3.6G \\ \hline US-Nets & 67.8 & 69.2 & 69.7 & 70.1 & 70.2 & 70.5 \\ \hline Joslim & 68.0 & **69.4** & 69.6 & 70.0 & 70.2 & 70.4 \\ \hline DS-Net & 66.0 & 67.0 & 68.8 & 69.4 & 69.9 & 70.0 \\ \hline **TIPS (Ours)** & **68.4** & 69.3 & **70.8** & **71.1** & **71.4** & **71.9** \\ \hline \end{tabular} \end{table} Table 2: Comparison of Top-1 test accuracy vs. FLOPS (Million/Giga [M/G]) with SOTA training methods on ResNet34. The best results are shown with bold fonts. The results are averaged over three runs. The std. dev. values are shown in Table 5 in Appendix B.3.

Figure 5: **(a)** Mean singular values (\(E[\sigma]\)) vs. TPS for various supernets (MLPs with various #layers) on MNIST. Clearly, paths with higher TPS values have higher \(E[\sigma]\) for a specific supernet (_e.g._, 80-layers). **(b, c)** Training loss of MobileNet-v2 (MBN-v2) based supernet vs. #Epochs on ImageNet and Tiny-ImageNet. TIPS requires much fewer epochs to achieve a target training loss. For example, to reach a training loss below 1.25 on ImageNet, US-Nets takes 145 epochs while TIPS only needs 88 epochs.

**Latency vs. Accuracy Trade-off** Besides FLOPs vs.
accuracy, we also compare the latency vs. accuracy tradeoff of subnetworks obtained by TIPS and Joslim. As shown in Table 3, TIPS achieves higher accuracy than Joslim, given a similar latency. For example, assuming the latency constraint is around 300ms, TIPS has a 1.1% higher accuracy on ImageNet than Joslim.

### 4.5 Ablation Studies

**Stability of TPS analysis** We vary the #sampled subnetworks from 2 to 20 for MobileNet-v2. As shown in Fig. 6, four subnetworks are typically enough to make the TPS value converge. In practice, we sample 8 subnetworks and the standard deviation of TPS values is less than 2.5% of the mean values. **Effect of \(\lambda\)** Finally, we fix the #sampled subnetworks to 8 and vary the \(\lambda\) value in the hyper-adjacency matrix (Eq. 4) from 0.1 to 1 for MobileNet-v2. As shown in Fig. 7, the ranking among different paths remains the same under various \(\lambda\) values. Therefore, our approach is robust to variations of the \(\lambda\) value. In our approach, we set the value of \(\lambda\) to '1'. We discuss this in Appendix B.4.

### 4.6 Limitations and Future Work

Our current framework (TIPS) has been primarily verified on CNNs with variable width and depth. We plan to explore it with other AnytimeNNs (_e.g._, multi-branch and early-exit networks) and other types of networks (_e.g._, transformers and graph neural networks). Also, in the current version, the hardware constraints are considered _after_ the supernet training; we intend to incorporate hardware awareness into the training process as well.

## 5 Conclusion

In this work, we have proposed a new methodology to _automatically design AnytimeNNs_ under various hardware budgets. To this end, by modeling the training process of AnytimeNNs as a DTMC, we have proposed two metrics - TAS and TPS - to characterize the important operations in AnytimeNNs. We have shown that these important operations and computation paths significantly impact the accuracy of AnytimeNNs. Based on this, we have proposed a new training method called _TIPS_. Experimental results show that TIPS has a faster training convergence speed than SOTA training methods for anytime inference. Our experimental results demonstrate that our framework can achieve SOTA accuracy-FLOPs trade-offs, while achieving 2%-6.6% accuracy improvements on CIFAR100, Tiny-ImageNet and ImageNet datasets compared to existing approaches for anytime inference.

## Acknowledgement

This work was supported in part by the US National Science Foundation (NSF) grants CNS-2007284 and CCF-2107085.

\begin{table} \begin{tabular}{|c|c||c|c|c|c|c|} \hline Method & Metric & \multicolumn{5}{c|}{Results} \\ \hline \hline \multirow{2}{*}{Joslim} & Latency (ms) & 176 & 232 & 305 & 341 & 406 \\ \cline{2-7} & Accuracy (\%) & 70.8 & 71.9 & 72.5 & 72.9 & 73.4 \\ \hline \hline \multirow{2}{*}{**TIPS (Ours)**} & Latency (ms) & 190 & 245 & 298 & 362 & 413 \\ \cline{2-7} & Accuracy (\%) & 71.8 & 73.2 & 73.6 & 74.3 & 74.7 \\ \hline \end{tabular} \end{table} Table 3: Top-1 test accuracy vs. latency of MobileNet-v2 on ImageNet for RaspberryPi-3B+. The results are averaged over three runs.

Figure 6: Stability of TPS values vs. #subnetworks over 10 runs (std. dev. shown with shade) for MobileNet-v2. The variation is negligible when #subnetworks is larger than 4.

Figure 7: TPS values vs. \(\lambda\) values for MobileNet-v2. The ranking among different paths remains the same for various \(\lambda\) values.
2307.08880
Modular Neural Network Approaches for Surgical Image Recognition
Deep learning-based applications have seen a lot of success in recent years. Text, audio, image, and video have all been explored with great success using deep learning approaches. The use of convolutional neural networks (CNN) in computer vision, in particular, has yielded reliable results. In order to achieve these results, a large amount of data is required. However, such a dataset is not always accessible. Moreover, annotating data can be difficult and time-consuming. Self-training is a semi-supervised approach that manages to alleviate this problem and achieve state-of-the-art performance. Theoretical analysis has even shown that it may result in better generalization than a normal classifier. Another problem neural networks can face is the increasing complexity of modern problems, which requires high computational and storage costs. To mitigate this issue, a strategy inspired by human cognition, known as modular learning, can be employed. The principle of the approach is to decompose a complex problem into simpler sub-tasks. This approach has several advantages, including faster learning, better generalization, and interpretability. In the first part of this paper, we introduce and evaluate different architectures of modular learning for Dorsal Capsulo-Scapholunate Septum (DCSS) instability classification. Our experiments have shown that modular learning improves performance compared to non-modular systems. Moreover, we found that the weighted modular approach, that is, weighting the output using the probabilities from the gating module, achieved an almost perfect classification. In the second part, we present our approach for data labeling and segmentation with self-training applied to shoulder arthroscopy images.
Nosseiba Ben Salem, Younes Bennani, Joseph Karkazan, Abir Barbara, Charles Dacheux, Thomas Gregory
2023-07-17T22:28:16Z
http://arxiv.org/abs/2307.08880v1
# Modular Neural Network Approaches for Surgical Image Recognition

###### Abstract

Deep learning-based applications have seen a lot of success in recent years. Text, audio, image, and video have all been explored with great success using deep learning approaches. The use of convolutional neural networks (CNN) in computer vision, in particular, has yielded reliable results. In order to achieve these results, a large amount of data is required. However, such a dataset is not always accessible. Moreover, annotating data can be difficult and time-consuming. Self-training is a semi-supervised approach that manages to alleviate this problem and achieve state-of-the-art performance. Theoretical analysis has even shown that it may result in better generalization than a normal classifier. Another problem neural networks can face is the increasing complexity of modern problems, which requires high computational and storage costs. To mitigate this issue, a strategy inspired by human cognition, known as modular learning, can be employed. The principle of the approach is to decompose a complex problem into simpler sub-tasks. This approach has several advantages, including faster learning, better generalization, and interpretability. In the first part of this paper, we introduce and evaluate different architectures of modular learning for Dorsal Capsulo-Scapholunate Septum (DCSS) instability classification. Our experiments have shown that modular learning improves performance compared to non-modular systems. Moreover, we found that the weighted modular approach, that is, weighting the output using the probabilities from the gating module, achieved an almost perfect classification. In the second part, we present our approach for data labeling and segmentation with self-training applied to shoulder arthroscopy images.

Data Science, Machine Learning, Modular Neural Networks, Self-training, Dorsal Capsulo-Scapholunate Septum, Shoulder arthroscopy.

## 1 Introduction

Cognitive neuropsychology has been a great inspiration and a key driver behind the success of data science. Our brains are the most powerful machines, which is why researchers have been trying to replicate in machines how the brain works and how it solves problems. This was the main inspiration for neural networks and their ability to handle complex problems. Continuing to explore the human mind, some researchers focused their attention on how our brain is subdivided into independent components. Each subset of neurons specializes in a function, and these subsets interact with each other [12]. This marked the beginning of modular architectures in learning. The main idea is to decompose a complex task into simple independent sub-tasks. It was first introduced in [20], where it was demonstrated that dividing the processing produces better performance under certain conditions. Later, other researchers revealed that adding a priori knowledge helps the model improve its results compared to non-modular systems. In recent research, [15] stated that modularity leads to better generalization and faster learning, possesses scaling properties, and allows interpretability; hence the rising interest among researchers. Modular architecture is a concept, and there exist several strategies to decompose a problem. For instance, models can compete with each other to complete the same task [1], or they can cooperate [18], etc. However, it is not always evident how and where to subdivide.
Another concept that has shown remarkable results is the semi-supervised algorithm known as self-training. With the urgent need for a massive amount of data to train models and the difficulty or high cost of labeling data, semi-supervised learning has become an active research discipline. Self-training alleviates this problem by relying on a small subset of labeled samples and a large subset of unlabeled samples, and still achieves state-of-the-art results [5]. It assumes that the decision boundary should lie in a low-density region to obtain better generalization [7]. Pseudo-labeling, that is, iteratively annotating the unlabeled samples (the resulting labels are called pseudo-labels) and adding them to the training subset, is a common use of self-training. In this work, we built systems using modular learning and self-training and applied them in two contexts. The first application aims to classify instability in a wrist structure named the Dorsal Capsulo-Scapholunate Septum (DCSS). We tested different strategies of modular learning and compared them to the non-modular architecture. In the second application, we tested pseudo-labeling on shoulder arthroscopy images to detect structures using image segmentation. To this end, we give a brief overview of the background and investigate the related work on both modular learning and self-training in the first section. Then, in the second section, we present our proposed approach for classifying DCSS using modular learning and display the obtained results. Finally, we present the self-training approach for data labeling and segmentation. ## 2 Fundamental background of the proposed approach and related work In this section, we introduce some basic background on modular learning and self-training methods and give an overview of previous approaches in the literature. ### Modular learning According to neuroscience, our brain is an amalgamation of several modules that operate independently while cooperating and communicating with each other [25]. This decomposition inspired researchers to build systems composed of different simple modules whose outputs are then integrated. Each module is a neural network with learnable parameters that outputs a prediction probability. Modules are trained independently and are controlled by a gating module. The gating module is responsible for activating modules according to a certain rule, either set by the user or found automatically. There exist several ways to set the interaction between the different modules. For instance, one can fragment a complex problem into smaller sub-tasks; each module is responsible for a fragment, and the modules cooperate with each other to create a complementary network. Alternatively, modules can compete with each other by learning to solve the same problem. Using modular learning offers major advantages compared to a non-modular system [15]. For instance, using multiple simple networks might significantly reduce the computational time usually needed to train a complex model. It has been shown that modular systems substantially improve the robustness of neural networks [24]: combining knowledge from different modules makes the model more robust. It also reduces the model's complexity, thus optimizing computation time in the training process. Moreover, modularity limits the propagation of biases, contrary to a non-modular model. In 1989, Rueckl [20] began to study whether the representation ("what") and location ("where") of objects should be processed separately.
They used retinal images for classification and revealed that modular learning is more efficient in computation and learns more easily than a non-modular network under certain circumstances. There exist several strategies to build a modular architecture, depending on the type of interaction between the structures. For instance, in [2], decoupled modules are used: the data is separated, each group is assigned to a module, and the final prediction is the maximum probability over all modules. In [18], a hierarchical network is used. Others [1, 3] use ensemble networks where all the modules are identical and learn to perform the same task; the final prediction is a majority vote over the prediction probabilities from all the networks. The most common structure is the multiple-experts network [11]. It consists of a gating module and experts. The gating module controls the activation of the networks and distributes the tasks. The networks do not need to be identical; the system can be a hybrid one combining a variety of techniques [4]. We use multiple experts in our case. ### Self-Training Supervised learning using neural networks has seen success in several applications. To ensure better generalization, neural networks need to be trained on a huge amount of labeled data. However, labeling data can be an expensive and time-consuming task, and in some cases the data is simply not accessible. Hence the rise of a domain called semi-supervised learning, which aims to use both labeled and unlabeled data from the same or a similar distribution for a specific problem. There exist several semi-supervised methods; self-training is a common one. The increasing interest in it is due to its better performance compared to purely supervised learning. This result is somewhat surprising, given that no external information is added. Theoretical works have proven that self-training yields a better generalization bound under certain conditions. Let \(N\) be the number of labeled data and \(M\) the number of unlabeled data. An initial model \(f_{\mathbf{w}^{(0)}}\) is trained from \(\left\{\mathbf{x}_{n},y_{n}\right\}_{n=1}^{N}\). Given the unlabeled data \(\left\{\widetilde{\mathbf{x}}_{m}\right\}_{m=1}^{M}\), the obtained model plays the role of pseudo-labeler by computing \(\hat{y}_{m}=f_{\mathbf{w}^{(0)}}\left(\widetilde{\mathbf{x}}_{m}\right)\). Let \(k\) be the number of selected predicted labels and let \(\ell\) be the iteration index. Then \(f_{\mathbf{w}^{(\ell)}}\) is trained from \(\left\{\mathbf{x}_{n},y_{n}\right\}_{n=1}^{N}\cup\left\{\widetilde{\mathbf{x}}_{m},\hat{y}_{m}\right\}_{m=1}^{k}\) and is used as the pseudo-labeler for the next iteration. The process is repeated until a stopping criterion is reached. The empirical risk to be minimized is as follows \[\hat{R}(\mathbf{W})=\frac{\lambda}{2N}\sum_{n=1}^{N}\left(y_{n}-f\left(\mathbf{W};\mathbf{x}_{n}\right)\right)^{2}+\frac{\tilde{\lambda}}{2M}\sum_{m=1}^{M}\left(\hat{y}_{m}-f\left(\mathbf{W};\widetilde{\mathbf{x}}_{m}\right)\right)^{2} \tag{1}\] where \(\lambda+\tilde{\lambda}=1\). Similar to modular learning, self-training was used decades ago, as early as 1976, to train a model without external knowledge [9]. In recent years, researchers have shown renewed interest in self-training given its ability to train a model with few labeled samples and a lot of unlabeled ones. The strategy for choosing pseudo-labels differs from one work to another.
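To make the iterative scheme above concrete, here is a minimal sketch of the pseudo-labeling loop; the scikit-learn classifier and the top-\(k\) confidence selection rule are illustrative assumptions, not a specific published implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, k=50, n_iters=5):
    """Iterative pseudo-labeling sketch: at each round, pseudo-label the k most
    confident unlabeled samples and add them to the training set."""
    X_train, y_train = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    model = LogisticRegression(max_iter=1000)
    for _ in range(n_iters):
        if len(pool) == 0:
            break
        model.fit(X_train, y_train)                 # train f_w on the current set
        proba = model.predict_proba(pool)           # pseudo-labeler scores
        conf = proba.max(axis=1)                    # confidence per sample
        top = np.argsort(conf)[-k:]                 # indices of the k most confident
        pseudo = proba[top].argmax(axis=1)          # their pseudo-labels
        X_train = np.vstack([X_train, pool[top]])   # grow the training set
        y_train = np.concatenate([y_train, pseudo])
        pool = np.delete(pool, top, axis=0)         # remove from the unlabeled pool
    return model
```

Choosing a fixed top-\(k\) per round corresponds to the proportion-based selection strategies discussed next; threshold-based variants simply replace the top-\(k\) rule with a cut-off on `conf`.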
For instance, [14] selected the data having the maximum prediction probability under the current model, tested on a denoising auto-encoder, while [22] selected the nearest observations in feature space. Others [27, 23] selected a proportion of the most confident data instead of setting a threshold. The preceding research assumed that all classes are equal and used a constant threshold to sort out the most confident unlabeled data to add to the training set. This motivated the use of a dynamic threshold to avoid ignoring certain unlabeled data. [26] suggested adapting curriculum learning to self-training and called it curriculum pseudo labeling (CPL). Curriculum learning is an approach based on training the model starting with the data that is easier to learn and moving on to more complex samples. The strategy of CPL is to adapt the threshold for each class and select unlabeled data depending on the model's learning status. They compute performances for the different classes and adjust the thresholds accordingly: if the results for a certain class are not high, the threshold for that class ought to be lower. The authors demonstrated that their method converged faster than fixed-threshold methods and achieved good results on state-of-the-art benchmarks of semi-supervised learning. Another use of curriculum labeling in [6] achieved significant results. They observed that the distribution of the maximum probability prediction follows the Pareto distribution, that is, a skewed distribution. The idea is to choose unlabeled data by taking a percentile of the distribution of the maximum probability for pseudo-labeled samples. ## 3 Proposed approaches: Modular learning for classification ### Motivation The first purpose of this work is to classify images of the Dorsal Capsulo-Scapholunate Septum (DCSS) to identify the stage of scapholunate instability using a modular architecture. The DCSS is a structure in the wrist that ensures the scapholunate articulation [16]. A common disease of the DCSS is scapholunate instability. It causes severe wrist dysfunction, making it difficult for the patient to perform everyday activities [13]. Identifying scapholunate instability requires an accurate diagnosis in order to address it in its early phases. We used images from wrist arthroscopy, and the goal is to determine whether the structure is healthy or whether instability exists at stage 1, stage 2, or stage 3. We exploited the nature of the problem to build a modular system: we can divide the problem into, first, predicting whether the structure is healthy or pathological; then, if the DCSS is pathological, a second component discriminates between the three stages. ### System architecture The system is composed of two modules: the gating module and the discriminative module. The gating module is trained to detect whether the DCSS is healthy or not from the arthroscopy image. If the DCSS is healthy, we output its prediction probability \(P(y=\text{'healthy'}|X)\). If the DCSS is pathological, we feed the image to the discriminative module. The latter can be designed in different ways, and the final output can be computed differently. We chose to test three architectures (modular, modular 1 Vs 1, and weighted modular). In the following, we detail the different cases. Let \(H\) be the event 'healthy', \(\bar{H}\) 'pathological', and \(S_{i}\), \(i\in\{1,2,3\}\) the stages of instability. Modular (All). In this case, the discriminative module is trained to distinguish between the three stages of instability.
Given an image predicted by the gating module to be pathological, the discriminative module computes the prediction probability \(P(S_{i}|X,\bar{H}),\quad i=1,\ldots,3\). The discriminative module's prediction is as follows: \[\arg\max_{i}P(S_{i}|X,\bar{H}) \tag{2}\] Modular (1 Vs 1). In this case, we build three experts competing with each other. The first module is intended to discriminate between stages 1 and 2, the second between stages 1 and 3, and the third between stages 2 and 3. The output is the maximum over the experts. The three modules compute the probabilities in equation (3) with \(i\in\{1,2\}\), \(\{1,3\}\) and \(\{2,3\}\) respectively. \[P_{m}(S_{i}|X,\bar{H})\times P_{m}(\bar{H}|X)+P_{m}(S_{i}|X,H)\times P_{m}(H|X). \tag{3}\] The discriminative module's prediction is as follows: \[\arg\max_{(i,k)}P_{m_{k}}(S_{i}|X),\quad i=1,\ldots,3,\quad k=1,\ldots,3. \tag{4}\] Weighted modular. This system is similar to the first modular case, except for how the output is computed: \(P(S_{i}|X)=P(S_{i}|X,\bar{H})\times P(\bar{H}|X)+P(S_{i}|X,H)\times P(H|X),\quad i=1,\ldots,3\). In this case, the gating module assigns a weight to each output, and the output yielding the highest prediction probability is picked. The advantage of this method is that it takes the error of the gating module into account: in the first system, we assumed the gating module to be perfect, as opposed to this architecture. For instance, if the difference between the prediction probabilities of the gating expert is not large, it means that the module is not confident. By weighting the outputs with the gating module probabilities, we add this information to the prediction. Figure 1 represents the weighted modular architecture. The discriminative module's prediction is as follows: \[\arg\max_{i}P(S_{i}|X),\quad i=1,\ldots,3. \tag{5}\] Figure 1: Architecture of weighted modular learning. #### 3.2.1 Learning phase In each module, the network is composed of a pretrained ResNet18 model and a fully connected layer. The architecture of ResNet18 is illustrated in figure 2. It is composed of 18 deep layers; every four layers are similar, and each layer is composed of two blocks of a deep residual network. The latter is characterized by an identity connection: if the output of the layer is \(F(x)\), the output of the block is \(F(x)+x\), followed by the ReLU activation function. The residual network is an attempt to solve the problem of vanishing gradients, that is, when the gradient shrinks to zero quickly [10]. The final layer consists of average pooling followed by a fully connected layer and a softmax. ### Experiments #### 3.3.1 Dataset The images used in this work were sampled from arthroscopic videos of the Dorsal Capsulo-Scapholunate Septum (DCSS) performed at Avicenne Hospital. The collected data includes 840 photos in total: 105 healthy DCSS images (12.5%), 173 (20.6%) in stage 1, 187 (22.2%) in stage 2, and 375 (44.6%) in stage 3. Entries from each patient were equally distributed. In order to maintain a balance between the classes, we expanded the training set using image manipulations such as random rotations, brightness adjustments, zooming in on the image, etc. After transformation, we obtained 3076 healthy DCSS images, 1111 in stage 1, 1126 in stage 2, and 1279 in stage 3. The images were then resized to \(224\times 224\). Figure 3 presents samples from the dataset. ### Setting We used 80% (6592) of the dataset for training and 20% (252) for testing.
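To make the three decision rules above concrete, the following is a minimal sketch of weighted modular inference (equations 3 and 5); the probability values are illustrative placeholders for softmax outputs of the trained gating and discriminative modules, not the authors' implementation.

```python
import numpy as np

def weighted_modular_predict(p_gate, p_healthy_stages, p_path_stages):
    """Weighted modular rule (eq. 5): combine stage probabilities from the
    discriminative module with the gating module's healthy/pathological weights.

    p_gate          : (2,) array, [P(H|X), P(not H|X)] from the gating module
    p_healthy_stages: (3,) array, P(S_i|X, H) stage probabilities
    p_path_stages   : (3,) array, P(S_i|X, not H) stage probabilities
    """
    p_H, p_notH = p_gate
    # P(S_i|X) = P(S_i|X, not H) * P(not H|X) + P(S_i|X, H) * P(H|X)
    p_stages = p_path_stages * p_notH + p_healthy_stages * p_H
    return int(np.argmax(p_stages)) + 1   # predicted stage in {1, 2, 3}

# Example: a weakly confident gate (0.45 vs 0.55) still lets the
# discriminative module's evidence dominate the final decision.
stage = weighted_modular_predict(
    p_gate=np.array([0.45, 0.55]),
    p_healthy_stages=np.array([0.5, 0.3, 0.2]),
    p_path_stages=np.array([0.1, 0.2, 0.7]),
)
```

The point of the weighting is visible in the example: unlike the hard gating of equation (2), an uncertain gate does not fully commit the system to one branch.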
We used stochastic gradient descent with a learning rate of \(0.001\) to optimize the models, training for 10 epochs with the same hyperparameters for both models. The first model was trained on the whole training set with the labels healthy/not healthy, and the second model was trained only on non-healthy DCSS images. Figure 2: ResNet18 architecture [17]. Figure 3: Visualization examples of DCSS images. #### 3.4.1 Results and discussion We evaluated the three types of modular learning using accuracy with an estimation of the confidence interval. The confidence interval was computed as explained in [4]: assuming that the probability distribution of the number of errors is binomial, the interval is calculated according to the formula below \[I_{\alpha}=\frac{P+\frac{Z_{\alpha}^{2}}{2N}\pm Z_{\alpha}\sqrt{\frac{P(1-P)}{N}+\frac{Z_{\alpha}^{2}}{4N^{2}}}}{1+\frac{Z_{\alpha}^{2}}{N}}, \tag{6}\] where \(N\) is the number of observations in the test set, \(P\) is the probability that an observation is an error, \(\alpha\%\) is the confidence level, usually set to \(95\%\), and \(Z_{\alpha}\) is a value given by the standard Gaussian distribution (e.g., \(Z_{95\%}=1.96\)). Table 1 summarizes the results obtained. As we can see in the table, modular learning increased the accuracy from \(97.6\%\) to \(98.4\%\), with a tighter confidence interval of \([97.79\%;100\%]\) and an almost perfect gating module. These results demonstrate that adding a priori knowledge through the modular nature of the problem allowed better generalization and a finer decision boundary than the non-modular architecture. The decomposition of the task makes the problem less complex, thus increasing the probability of converging towards the ground-truth function. The experiments in tables 1 and 2 also showed that the weighted modular system performed better than the first modular architecture, with \(98.81\%\) accuracy. This indicates that weighting with the probabilities from the gating module calibrates the results. We investigated the distribution of the difference between the prediction probabilities of the different classes in the discriminative module. We can see in figure 4 that the median difference is almost one, which shows that the discriminative module is confident in its decisions. We also notice a high difference between the prediction probabilities of stages 2 and 3, meaning that the model is able to distinguish between these two stages confidently. Surprisingly, however, the probability differences between stages 1 and 3 are relatively small. By analyzing the images further, we noticed the presence of redness in both stages, which could be the reason for such results. Since the problem is medical, it is important to explain the choices made by the model; however, a neural network is a black box. Therefore, we evaluated two methods to interpret the results using a heatmap on the images, where the pixels with higher contributions to the neural network's prediction are located in the hotter areas. The first method is called GradCam (gradient-weighted class activation mapping), which creates visual explanations [21]. Grad-CAM creates a coarse localization map that highlights key areas in the image by using the gradients of any target concept flowing into the final convolutional layer. This makes it possible to visualize the results of various CNN model layers. The second approach used for explainability is the Sobol attribution method [8].
It relies on sensitivity analysis techniques commonly used in physics and economics, in particular the Sobol indices. The method not only identifies the contribution of each image region to the neural network's decision; the Sobol indices can also estimate higher-order interactions between the regions in the image and their contributions. To estimate the indices, the authors used the Jansen estimator with Quasi-Monte Carlo sampling. First, they sample masks from the sequence and apply them to the image with a perturbation operator, then obtain prediction scores by forwarding the resulting image to the system. The explanations are then computed using the masks and the prediction scores. Figures 5 and 6 display the visual outputs of the two methods on the discriminative model. We notice that GradCam provides a continuous region with a certain pattern for each class, while the Sobol attribution method gives a finer heatmap. In both cases, the methods were able to locate the tear in the DCSS at stage 3. \begin{table} \begin{tabular}{|c||c|c|} \hline Architecture & Accuracy (\%) & Confidence interval (\%) \\ \hline Non-modular & 97.6 & [96.52 ; 100] \\ \hline Modular (All) & 98.4 & [97.79 ; 100] \\ \hline Modular (1 Vs 1) & 94.4 & [91.1 ; 100] \\ \hline Weighted modular & 98.81 & [98.4 ; 100] \\ \hline \end{tabular} \end{table} Table 1: Average accuracy of the different modular architectures. \begin{table} \begin{tabular}{|l|r|r|r|r|r|} \hline & \multicolumn{4}{|c|}{Predicted result} & Recall (\%) \\ \hline Ground truth & Healthy & Stage 1 & Stage 2 & Stage 3 & \\ \hline Healthy & \(30\) & \(0\) & \(0\) & \(0\) & \(100\) \\ \hline Stage 1 & \(2\) & \(60\) & \(1\) & \(0\) & \(95.23\) \\ \hline Stage 2 & \(0\) & \(0\) & \(62\) & \(0\) & \(100\) \\ \hline Stage 3 & \(0\) & \(0\) & \(0\) & \(97\) & \(100\) \\ \hline Precision (\%) & \(93.75\) & \(100\) & \(98.41\) & \(100\) & \\ \hline \end{tabular} \end{table} Table 2: Evaluation matrix of the weighted modular system on the test set. Figure 4: Distribution of differences in prediction. ## 4 Proposed approaches: Self-training for data labeling and segmentation ### Motivation The main objective of this section is, first, to label arthroscopy images using self-training, and then to detect and identify structures from arthroscopy videos with a modular architecture. We used videos of shoulder arthroscopy. From one second of video, we can obtain 25 to 30 images. Therefore, several minutes of video produce a very large number of images, and labeling this data can be difficult and time-consuming. The idea is to select a few pictures from each video and apply the self-training process to pseudo-label the rest of the video, that is, to produce predicted labels from a small set of labeled data and add them to the training dataset. This way, we exploit all the available data. ### System architecture The model for self-training (i.e. the pseudo-labeler) is built using a segmentation network. It takes as input an arthroscopic image and its ground truth mask with 5 classes (background, long biceps tendon, subscapularis tendon, supraspinatus tendon, cartilage (glenoid and humeral)), and it outputs a prediction mask. We found while training the segmentation task that the model was able to perform a binary segmentation, i.e. identifying the structure versus the background; however, it failed to discriminate between the different classes. That is what motivates the use of two cascaded modules, in which a first network segments the overall structure.
Figure 5: Interpretation of the model using GradCam. Then, we remove the predicted background from the initial image and feed it to the second network. The latter is trained to discriminate between the four classes. The self-training phase consists in first training the two models with the available annotated images. Then, we predict the mask for all the images in the unlabeled set. Afterward, we select only the images whose mean prediction probability is above a threshold. We set another threshold to filter the selected outputs, pixel by pixel, according to their prediction probability. The experiments have shown that adding this second filter to the predicted images increases the accuracy. Let \(\hat{y}_{m}\) be a predicted image from the model, with \(\hat{y}_{m_{i,j}}\) the pixel at position \((i,j)\). Let \(T\in[0,1]\) be the threshold, and let the softmax function be defined as \(S:\mathbb{R}^{K}\rightarrow(0,1)^{K}\) with \(K\geq 1\), \(S(\mathbf{x})_{i}=\frac{e^{x_{i}}}{\sum_{j=1}^{K}e^{x_{j}}}\) for \(i=1,\ldots,K\) and \(\mathbf{x}=(x_{1},\ldots,x_{K})\in\mathbb{R}^{K}\). The image is filtered as follows: \[\hat{y}_{m_{i,j}}=\begin{cases}\hat{y}_{m_{i,j}},&\text{if }\max S(\hat{y}_{m_{i,j}})\geq T\\ 0,&\text{otherwise.}\end{cases} \tag{7}\] We add the final outputs to the training set and reiterate until a stopping criterion is reached: either all the images are annotated or the algorithm has reached the maximum number of iterations. Let \(P\in[0,1]\) be a threshold; then, in each iteration, the set of selected images is as follows: \[\hat{I}\coloneqq\{\hat{f}_{\mathbf{w}^{(\ell)}}(\mathbf{x}_{m})\mid\max S(\hat{f}_{\mathbf{w}^{(\ell)}}(\mathbf{x}_{m}))\geq P\} \tag{8}\] and the output prediction is defined as follows: \[\hat{y}_{m}=\delta_{\max S(\hat{I}_{m_{i,j}})\geq T}\,\hat{I}_{m},\quad 0\leq m\leq n(\hat{I}), \tag{9}\] where \(n(\hat{I})\) is the cardinality of the set \(\hat{I}\). #### 4.2.1 Learning phase The architecture of each module is inspired by UNet [19]. The choice of this particular pretrained model is motivated, first, by its common use in medical applications, and second, by its ability to perform well with few labeled images [19]. The architecture of UNet is U-shaped. It consists of an encoder (contracting path) and a symmetric decoder (expanding path). The contracting path consists of 2 blocks of \(3\times 3\) convolutions, each followed by the rectified linear unit (ReLU) activation function and then a \(2\times 2\) max pooling. These operations are repeated five times, and the number of channels doubles each time (64, 128, 256, 512, 1024). The expanding path consists of the same blocks with ReLU and uses \(2\times 2\) up-convolutions. The final layer is a \(1\times 1\) convolution. Figure 7 displays the UNet architecture. Figure 6: Interpretation of the model using the Sobol attribution method. ### Experiments #### 4.3.1 Dataset The data of our second study were collected by video recording of shoulder arthroscopies performed at Avicenne Hospital between November 2021 and June 2022. The arthroscopy videos could be retrieved directly from the arthroscopy video console in mp4 format. From these arthroscopies, a total of 459 images were extracted (121 of the long biceps tendon, 41 of the supraspinatus tendon, 181 of the subscapularis tendon, and 116 of glenoid or humeral cartilage) and saved in png format. The images were annotated by a surgeon using the software "LabelStudio".
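Returning to the confidence filtering of equations (7)-(9), the following is a minimal sketch of the two-level (image-wise and pixel-wise) filter; the tensor shapes, the use of class index 0 for background, and the threshold values are illustrative assumptions.

```python
import numpy as np

def filter_pseudo_masks(logits, image_threshold=0.7, pixel_threshold=0.7):
    """Select confident pseudo-masks (eq. 8) and zero out low-confidence
    pixels inside them (eq. 7).

    logits: (M, K, H, W) array of raw model outputs for M unlabeled images
            over K segmentation classes.
    """
    # Softmax over the class axis (numerically stabilized)
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    proba = e / e.sum(axis=1, keepdims=True)          # (M, K, H, W)
    conf = proba.max(axis=1)                          # per-pixel max prob (M, H, W)
    labels = proba.argmax(axis=1)                     # per-pixel class   (M, H, W)

    selected = []
    for m in range(len(logits)):
        # Image-level filter: keep masks whose mean confidence exceeds P
        if conf[m].mean() < image_threshold:
            continue
        # Pixel-level filter: reset low-confidence pixels to background (0)
        mask = np.where(conf[m] >= pixel_threshold, labels[m], 0)
        selected.append((m, mask))
    return selected
```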
Labeling was done using polygon annotations, which ensure a more precise annotation of irregular shapes. Figure 8 presents examples of shoulder arthroscopy images with their segmentations. ### Setting Data was split into a \(70\%\) train set (321 images) and a \(30\%\) test set (138 images). We used \(10\%\) of the training set (32 images) as labeled data to train the first model in self-training, and treated the remaining \(90\%\) of the training set (289 images) as unlabeled data. The final evaluation was made on the test set. The images were resized to \(224\times 224\). We used the Adam optimizer and trained for 15 epochs. After experimentation, we set the threshold to \(0.7\). Figure 7: UNet architecture [19]. Figure 8: Visualization examples of segmentation on shoulder arthroscopy images. Each color corresponds to a class: red: biceps, blue: cartilage, green: subscapularis, yellow: supraspinatus. #### 4.4.1 Results and discussion We use a threshold on the prediction probability to choose, from the unlabeled subset, the samples to add to the training set. We varied the threshold to analyze its effect on performance and report the results on the test set. As shown in table 3, increasing the threshold improves the accuracy; however, for the threshold \(0.9\), the performance degrades. One reason may be that the strict selection of samples causes the model to train repeatedly on the same samples and therefore overfit. The optimal threshold value is \(0.7\), with an accuracy of \(80\%\). We also show that the class Supraspinatus, presented in yellow in figure 8, has the highest accuracy compared to the other classes. We found that adding the pixel-by-pixel threshold explained in section 4.2 improved the accuracy (from \(76\%\) to \(80\%\)). Figure 9 shows the effect on performance of the ratio of labeled data used in the first iteration: the greater the percentage, the higher the accuracy. We chose a ratio of \(10\%\). Samples of the segmentation predictions on the test set are shown in figure 10. ## 5 Conclusion In this paper, we first introduced a modular framework for Dorsal Capsulo-Scapholunate Septum classification. We have shown that decomposing the problem into a gating module and a discriminative module improves performance. Moreover, including the contribution of the gating module as weights achieved the best results among the different strategies tested. Additionally, we presented a modular self-training approach for data labeling and segmentation. Our results validated the improvement obtained by using self-training. However, while modular learning outperformed non-modular systems in the first problem, this was not the case for the second problem. The reason may be related to the strategy of dividing the problem: in the classification problem, we used a priori knowledge about the classification subject, contrary to the segmentation problem. In conclusion, modular learning is an efficient approach to reduce the complexity of a model and improve model generalization; however, it is crucial to determine when and where to divide a problem. In future work, a different strategy should be developed and further analyzed for modular self-training segmentation. Moreover, we plan to propose a new feature-selection measure for interpretability of convolutional neural networks.
2310.14866
A Study on Knowledge Graph Embeddings and Graph Neural Networks for Web Of Things
Graph data structures are widely used to store relational information between several entities. With data being generated worldwide on a large scale, we see a significant growth in the generation of knowledge graphs. Thing in the future is Orange's take on a knowledge graph in the domain of the Web Of Things (WoT), where the main objective of the platform is to provide a digital representation of the physical world and enable cross-domain applications to be built upon this massive and highly connected graph of things. In this context, as the knowledge graph grows in size, it is prone to have noisy and messy data. In this paper, we explore state-of-the-art knowledge graph embedding (KGE) methods to learn numerical representations of the graph entities and, subsequently, explore downstream tasks like link prediction, node classification, and triple classification. We also investigate Graph neural networks (GNN) alongside KGEs and compare their performance on the same downstream tasks. Our evaluation highlights the encouraging performance of both KGE and GNN-based methods on node classification, and the superiority of GNN approaches in the link prediction task. Overall, we show that state-of-the-art approaches are relevant in a WoT context, and this preliminary work provides insights to implement and evaluate them in this context.
Rohith Teja Mittakola, Thomas Hassan
2023-10-23T12:36:33Z
http://arxiv.org/abs/2310.14866v1
# A Study on Knowledge Graph Embeddings and Graph Neural Networks for Web Of Things ###### Abstract. Graph data structures are widely used to store relational information between several entities. With data being generated worldwide on a large scale, we see a significant growth in the generation of knowledge graphs. _Thing in the future_ is Orange's take on a knowledge graph in the domain of Web Of Things (WoT), where the main objective of the platform is to provide a digital representation of the physical world and enable cross-domain applications to be built upon this massive and highly connected graph of things. In this context, as the knowledge graph grows in size, it is prone to have noisy and messy data. In this paper, we explore state-of-the-art knowledge graph embedding (KGE) methods to learn numerical representations of the graph entities and subsequently, explore downstream tasks like link prediction, node classification, and triple classification. We also investigate Graph neural networks (GNN) alongside KGEs and compare their performance on the same downstream tasks. Our evaluation highlights the encouraging performance of both KGE and GNN-based methods on node classification, and the superiority of GNN approaches in the link prediction task. Overall we show that state-of-the-art approaches are relevant in a WoT context, and this preliminary work provides insights to implement and evaluate them in this context. Knowledge Graph Embedding, Graph Neural Networks, Web Of Things
A knowledge graph can be formalized as a set of triples \(\mathcal{G}=\{(h,r,t)\}\subseteq\mathcal{E}\times\mathcal{R}\times\mathcal{E}\), where \((h,r,t)\) is a (head, relation, tail) triple of the graph, \(\mathcal{E}\) is the set of entities of \(\mathcal{G}\) and \(\mathcal{R}\) is the set of relations of \(\mathcal{G}\). Embeddings, or low dimensional vector representations, can be defined as a mapping \(f:u_{i}\longrightarrow y_{i}\in\mathbb{R}^{d}\ \forall i\in|\mathcal{G}|\), where the embedding \(y_{i}\) is generated for each triple individually and \(d\) is a hyperparameter giving the number of dimensions of the embedding. The survey papers [5; 27] explain the different types of **knowledge graph embedding** methods, which can be broadly grouped into 3 types: translation, factorization and neural network based methods. What separates the methods from each other is the type of loss function they use and the way they capture knowledge graph patterns like symmetry (the reverse of a triple is also true: e.g., <X marriedTo Y>), asymmetry (the reverse of a triple is not true: e.g., <X childOf Y>) and other properties like inversion, hierarchy and composition. A good KGE method can capture all these properties as numerical representations which explain the structure and relations of the graph. In general, a knowledge graph embedding algorithm has the following parts: a scoring function (which takes triples as input and outputs a numerical value), a loss function, an optimizer and a strategy to generate negatives. Negatives, or corrupted triples, are the false cases of a triple (which do not belong to the graph), and they are used during the evaluation of a knowledge graph model. Translation based models use distance based functions to generate the embeddings. There are multiple ways to compute distance functions in Euclidean space, and different algorithms exist using such ideas [27]. If we consider a triple (h,r,t), the **TransE** [3] algorithm tries to find embeddings for all entities and relations such that the combination of the head and relation vectors results in the tail vector, i.e., \(h+r\approx t\). With this idea, the scoring function can be formulated as the distance (L1 or L2 norm) between \(h+r\) and \(t\): \[f_{r}(h,t)=-||h+r-t|| \tag{1}\] **TransR** [15] changes the way relation vectors are handled. TransE does not take the heterogeneity of entities and relations into account, while TransR tries to fix this by separating the entity and relation spaces. The entities (h and t) are vectorized in the space \(\mathbb{R}^{d}\) and the relation is represented in a relation-specific space \(\mathbb{R}^{K}\). The entities are then projected into the relation space to calculate the scoring function. Further translation algorithms have been developed, like TransD [12], TranSparse [13] and TransM [9], which follow a similar idea of translation with some changes in implementation. The **RESCAL** [17] algorithm is based on tensor factorization, as multiple relations can be expressed easily with a higher order tensor. A three-way tensor \(\mathcal{X}\in\mathbb{R}^{n\times n\times m}\) (\(n\) denotes the number of entities and \(m\) the number of relations) is formed, where two modes index the entities and the third mode holds the relations.
The tensor holds the value \(\mathbf{1}\) to denote the presence of a relation between two entities and \(\mathbf{0}\) otherwise. \[\mathcal{X}_{k}\approx AR_{k}A^{T},\quad\text{for }k=1,\ldots,m \tag{2}\] where \(A\in\mathbb{R}^{n\times d}\) is a matrix which contains the embeddings of the entities. RESCAL uses a rank-\(d\) factorization to learn embeddings of \(d\) dimensions. The scoring function is defined as: \[f_{r}(h,t)=h^{T}M_{r}t \tag{3}\] where \(h,t\in\mathbb{R}^{d}\) are the embeddings of the entities and \(M_{r}\in\mathbb{R}^{d\times d}\) is the latent representation of the relation. RESCAL is computationally expensive for larger graphs, so the method DistMult [29] reduces the complexity by restricting the \(M_{r}\) matrix to a diagonal matrix. Building on this, Holographic Embeddings (HolE) [16] and Complex Embeddings (ComplEx) [24] try to simplify the RESCAL idea and also improve it by incorporating new ideas. Neural network based methods like Semantic Matching Energy (**SME**) [2] define an energy function which is used to assign a value to a triple by using neural networks. The intuition here is to extract the important components from different pairs and then put them in a space where they can be compared. For instance, consider the triple (h,r,t): first, each entity and relation is mapped to an embedding \(E_{h}\), \(E_{r}\) and \(E_{t}\in\mathbb{R}^{d}\). Then the embeddings \(E_{h}\) and \(E_{r}\) are used to find a new transformed embedding by using a parameterized function such as \(g_{h}(E_{h},E_{r})=E_{hr}\). The same process is repeated for the embeddings \(E_{t}\) and \(E_{r}\), whose transformed embedding \(E_{tr}\) is learned. An energy function is then defined which takes these transformed embeddings as input and assigns a value to the triple: _Energy_, \(\mathcal{E}=f_{r}(h,t)=h(E_{hr},E_{tr})\). **ConvE** [7] is the first model which applied convolutional neural networks in the context of knowledge graphs. **Graph Neural Networks** use artificial deep neural networks to operate on graph data structures. The performance and expressiveness of GNNs have been impressive, and extensive research is being done to improve GNN models [30]. Standard neural networks like convolutional neural networks and recurrent neural networks cannot be used directly on graph data. GNNs use a technique called Neural Message Passing, by which information is propagated through the network based on the graph structure. The objective of a GNN is to learn vector representations of the nodes such that they capture the neighbourhood information of each node. Given a graph \(G(V,E)\) and its initial node attributes, say \(X\), neural message passing can be defined as: \[h_{v}^{(0)}=x_{v}\qquad(\forall v\in V) \tag{4}\] \[a_{v}^{(k)}=f_{aggregate}^{(k)}(h_{u}^{(k-1)}\mid u\in\mathcal{N}(v))\qquad(\forall k\in[L],v\in V) \tag{5}\] \[h_{v}^{(k)}=f_{update}^{(k)}(h_{v}^{(k-1)},a_{v}^{(k)})\qquad(\forall k\in[L],v\in V) \tag{6}\] where \(f_{aggregate}^{(k)}\) and \(f_{update}^{(k)}\) are parameterized functions and \(h_{u}^{(k-1)}\) is the representation of node \(u\) after the \((k-1)\)-th neural message passing phase. \(\mathcal{N}(v)\) is the neighbourhood of node \(v\), and all these neighbourhood nodes are given as input to the aggregation function [21]. In equation (4), \(x_{v}\) is the node attribute, and in equations (5, 6), \(L\) is the number of layers of the GNN.
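As an illustration of equations (4)-(6), here is a minimal sketch of one message-passing layer with mean aggregation and a linear ReLU update; the specific choice of aggregation and update functions is an assumption for illustration, not a particular published model.

```python
import numpy as np

def message_passing_layer(h, neighbors, W_self, W_agg):
    """One round of neural message passing (eqs. 4-6).

    h         : (n_nodes, d) current node representations h^(k-1)
    neighbors : list of index lists, neighbors[v] = N(v)
    W_self    : (d, d) update weights for the node's own state
    W_agg     : (d, d) update weights for the aggregated message
    """
    h_next = np.zeros_like(h)
    for v in range(len(h)):
        if neighbors[v]:
            # f_aggregate: mean of the neighbors' representations
            a_v = np.mean(h[neighbors[v]], axis=0)
        else:
            a_v = np.zeros(h.shape[1])
        # f_update: linear combination followed by a ReLU nonlinearity
        h_next[v] = np.maximum(0.0, h[v] @ W_self + a_v @ W_agg)
    return h_next
```

Stacking \(L\) such layers gives each node a representation that summarizes its \(L\)-hop neighbourhood, which is exactly what the downstream tasks below exploit.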
If the graph does not have node attributes, then one-hot encodings of the node degree or other graph properties can be used. The propagation rule of a GNN can be generalized as: \[h_{v}^{(t)}=\sum_{u\in\mathcal{N}(v)}f(x_{v},x_{(v,u)}^{e},x_{u},h_{u}^{(t-1)}) \tag{7}\] where \(h_{v}^{(t)}\) is the representation of node \(v\) at time \(t\), \(x_{(v,u)}^{e}\) is the edge feature vector of edge \((v,u)\) and \(h_{u}^{(t-1)}\) is the representation of node \(u\) at the previous time step. Convolutional neural networks, which are popular in image processing, can be used on graphs too, and such models have been gaining popularity in recent times. We can broadly divide Graph Convolutional Network (GCN) models into two types: spectral-based and spatial-based graph convolutional networks. For multi-relational graph modeling, we have methods such as Relational Graph Convolutional Networks (R-GCNs) (Kipf and Welling, 2017). ## 3. Data Description ### Thing in the future graph The data used in this paper is extracted from the Thing'in the future Web of Things platform3. This platform hosts a graph consisting of millions of nodes and edges. The graph is a **labeled property graph** (LPG), modeled using the NGSI-LD (Ting et al., 2017) standard4; however, as a simplification, it can be considered a knowledge graph of things. The objective of such a graph is to describe cyber-physical systems, i.e. the structural and semantic properties of environments such as cities, buildings, power-plants, etc. Footnote 3: https://tech2.thinginthefuture.com Footnote 4: https://en.wikipedia.org/wiki/NGSI-LD Footnote 5: i.e. datatype properties and object properties, using various ontologies based on RDFS (https://www.w3.org/TR/rdf-schema/) or OWL (https://www.w3.org/TR/owl2-syntax/) schemas All entities in the graph can be labeled with semantic properties5, as well as "ad-hoc" properties and relations defined by the users, following the LPG paradigm. For this reason, along with implementation concerns (see section 5.2), RDF embedding methods such as Rdf2vec (Kipf and Welling, 2017) were not used in this work, but are considered for comparison in future work. Figure 1 shows an example piece of the graph for a transportation use case in a city, where nodes represent transport stations or post offices and hold properties such as id, geolocation and metadata, while edges represent physical connections and hold information such as transport time or distance. Footnote 6: https://www.arangodb.com/documentation/ The graph is not fully connected, but different application domains can be connected through the graph, and interoperability is encouraged by the use of ontologies. As an example, a construction company can produce a digital description of a building using the IFC format. By translating and sharing this data in a common WoT platform, a facility maintenance operator can easily access the information relevant to his use. However, nothing ensures that building models are consistent across the city. For instance, to describe the type/class of a building, one can use the ontology classes http://elite.polito.it/ontologies/dogont.owl#Building or https://w3id.org/bot#Building, but also a user-defined class from an ad-hoc schema.
The same goes for any property or relationship between WoT objects. Even with ontology recommendations, nothing ensures that all application domains can be reconciled easily. For example, the building model might be incompatible with the nearby public transportation system or power management. Additionally, allowing users to define their own schemas and labels inevitably leads to their proliferation, which cannot be reconciled using ontology mappings. This problem grows with scale, as more and more varied models have to inter-operate, making it unreasonable to query all systems at once and thus harder to build global services for use cases in logistics, industry 4.0, or smart cities. That is why we aim to use machine-learning tasks (see section 4) to help reconcile models and mitigate the WoT graph inconsistencies, errors, noise, etc. These are all critical tasks in WoT applications considering the number of IoT/WoT standards (Kipf and Welling, 2017) and ontologies available. ### Sampled datasets In this preliminary work we have sampled several sub-graphs that vary in size and application domain. The graphs were collected in the _json_ format from _ArangoDB_6, and every graph takes the triple form \(\mathcal{G}=\{(h,r,t)\}\subseteq\mathcal{E}\times\mathcal{R}\times\mathcal{E}\), where every entity in \(\mathcal{E}\) has a label. Footnote 7: https://github.com/kgrl2021/submission-one Table 1 shows the different datasets and their numbers of nodes, edges, and labels available for classification. It can be observed that the sub-graphs vary in number of entities, relations and labels. Each sub-graph belongs to a specific domain, representative of the domain heterogeneity in Thing'in's LPG. One of our objectives is to detect whether the application domain has a significant impact on the performance of the learning tasks. Upcoming stages of our research include experimenting with bigger datasets, using an adequate GPU-enabled hardware architecture. For confidentiality reasons, not all datasets could be shared; the available datasets are provided at7. The repository also includes relevant implementation information, e.g. the hyperparameters for the experiments. \begin{table} \begin{tabular}{|l|l|l|l|l|} \hline Dataset & Description & Nodes & Edges & Labels \\ \hline AC & Building data & 530 & 3142 & 10 \\ Adream & Building data & 778 & 3242 & 9 \\ C3 & Building data & 3581 & 8268 & 7 \\ EPFL & Building data & 67 & 140 & 6 \\ Garden & Building data & 299 & 2889 & 19 \\ Geonames & Points Of Interests & 11795 & 47158 & 2 \\ Meylan & City data & 5909 & 17704 & 38 \\ Poles & Telecom & 17506 & 35010 & 3 \\ \hline \end{tabular} \end{table} Table 1. Overview of the sub-graphs with basic information. Figure 1. Example transportation graph. ## 4. Overview of Graph Downstream Tasks Once the embeddings are learned, there are several graph downstream tasks which can be performed. ### Node Classification In the knowledge graph, the nodes have different attributes, including a label. In the knowledge graph as a whole, there are many erroneous or missing labels. These labels can be predicted by using graph embedding techniques and machine learning methods. This problem is termed the node classification problem. There are several applications of node classification in the context of Thing'in's graph: * It can be used to predict an imprecise or missing node label.
For example, Thing'in allows modeling nodes with a high-level, abstract semantic label such as _owl.Thing_, or with a label defined by the user. * In Thing'in, entities can be multi-labeled, thus node classification can be used to predict additional labels for existing nodes. * We can also detect outlier nodes that are labeled incorrectly. _Task Formulation_: Since the node labels are multiple (greater than 2) in most cases (see table 1), this problem can be formulated as a multi-class classification task. The knowledge graph embeddings can be directly used as input to machine learning algorithms like Support Vector Machines (SVM) or Random Forest to build a prediction model. Alternatively, we can use Graph Neural Networks to tackle the same problem. Here, the input to the GNNs can be learning based features, like knowledge graph embeddings, or centrality based node features like the degree, graph coloring number, etc. (Garfani et al., 2017). We also tested GNN methods like R-GCN (Zhou et al., 2018), which are then compared with the results of the node classification task using KGEs. ### Link Prediction This is the task of predicting missing edges or links in the graph, and also predicting prospective edges (in the case of a dynamic graph, where edges disappear and form depending on the time-point of the graph). In the context of a knowledge graph, this is often termed knowledge graph completion. Recalling that a knowledge graph can be represented as a set of triples \(\mathcal{G}=(\mathcal{E},\mathcal{R})=\{(h,r,t)\}\), knowledge graphs can be incomplete by having either a missing head or a missing tail. Using the facts in the knowledge graph, our objective is to predict the unknown entities. It boils down to guessing a tail in the case of tail prediction \((h,r,?)\), or guessing a head in the case of head prediction \((?,r,t)\). **Task Formulation**: This task can be formulated in multiple ways. A straightforward way is to convert the problem into a binary classification task with two classes, namely the positive and negative classes (denoted \(\mathbf{1}\) for positive and \(\mathbf{0}\) for negative). A standard machine learning model can be used to solve this binary classification problem and obtain a prediction model. For every link, we take the Hadamard product of the embeddings of the two entities (source and target) to generate a single numerical representation, which is used as the feature vector in the binary classification task. We can also formulate the task as head prediction and tail prediction. In order to learn the knowledge graph embeddings, and for model selection in particular, we used this entity prediction setting (head or tail). There are two different ways in which this task can be evaluated: rank based evaluation8 with metrics like Mean Reciprocal Rank (MRR) and Hits@K, and threshold based evaluation9 with metrics like the average precision (AP) score. GNNs are used to formulate the task by splitting the edges into positive and negative edges (negative edges are randomly sampled). The task is then treated as a binary classification, and a GNN model is built to predict the links and compute the metrics (AP). Multi-relational edge information is used in the case of the R-GCN model.
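A minimal sketch of the Hadamard-product formulation described above, using a scikit-learn classifier; the source of the embedding matrix and the uniform negative-sampling scheme are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def edge_features(emb, edges):
    """Hadamard product of source and target embeddings, one row per edge."""
    src, dst = edges[:, 0], edges[:, 1]
    return emb[src] * emb[dst]

def train_link_predictor(emb, pos_edges, num_neg=None, rng=None):
    """Binary link prediction: positive edges labeled 1, random node pairs 0."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n = emb.shape[0]
    num_neg = num_neg or len(pos_edges)
    neg_edges = rng.integers(0, n, size=(num_neg, 2))   # random corrupted pairs
    X = np.vstack([edge_features(emb, pos_edges),
                   edge_features(emb, neg_edges)])
    y = np.concatenate([np.ones(len(pos_edges)), np.zeros(num_neg)])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    return clf
```

The Hadamard product is a common edge-feature operator because it is symmetric in the two endpoints and keeps the feature dimension equal to the embedding dimension.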
Footnote 8: [https://pykeen.readthedocs.io/en/latest/tutorial/understanding_evaluation.html](https://pykeen.readthedocs.io/en/latest/tutorial/understanding_evaluation.html) Footnote 9: [https://scikit-learn.org/stable/modules/generated/sklearn.metrics.average_precision_score.html](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.average_precision_score.html) ### Triple Classification Here, we aim to classify whether a given triple is valid or not. The embeddings that are generated should score the valid triples higher than the negative triples. Negative triples (or corrupted triples) can be generated by replacing the head or tail part of a triple with some random entity. This evaluation method works by setting a global threshold \(\delta\): if a triple \((h,r,t)\) gets a score above \(\delta\), it is classified as positive, or else negative. The threshold \(\delta\) is a hyperparameter, and it is adjusted according to the accuracy scores obtained on the validation set. **Task Formulation**: This task boils down to a binary classification problem, and we use the scores (calculated when the embeddings are learned) of each triple as features for the machine learning model. The negative triples are labeled as \(\mathbf{0}\), while the positive triples are labeled as \(\mathbf{1}\). Machine learning algorithms like SVM can then be used to handle the classification task. ## 5. Experiments and Results ### Experimental Setup The experiments were performed on 8 different graph datasets belonging to different ontologies. The experiments include generating knowledge graph embeddings and performing graph downstream tasks using both traditional machine learning methods and graph neural networks. All the experiments were performed on a 4-core CPU machine (2.1 GHz) with 16 GB RAM running Ubuntu OS and an Anaconda Python 3.7 environment (GPU computation was not used). Firstly, the model selection for embedding generation was done based on the performance on the link prediction task, and two types of evaluation methods were considered for this (rank based and threshold based evaluations). The embeddings thus generated were used as features in a node classification task, and a machine learning model (SVM) was built to make predictions. During the embedding generation, each triple was given a score using a scoring function, and these scores were used as input features for the triple classification task. Graph neural networks were built to handle the link prediction and node classification tasks. The initial node representations for these models were tested using two centrality based features (degree, graph coloring number) and two learning based features (knowledge graph embeddings, DeepWalk [(18)]). Figure 2 shows the details of all experiments and tasks used in this paper. Figure 2. Flow chart of different methods and tasks ### Knowledge Graph Embeddings The knowledge graph embeddings were generated using the PyKEEN10 Python library, and the hyperparameter tuning was done with the Optuna11 library. Hyperparameters like the embedding dimension, learning rate, number of epochs, and optimizer were tuned, and popular knowledge graph embedding methods like TransE, TransR, DistMult and RotatE [(23)] were tested. The model selection was done based on the performance of the embeddings on the link prediction task with threshold based evaluation. The embeddings selected using rank based evaluation performed relatively worse on tasks like node classification; hence we preferred threshold based evaluation.
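As a rough sketch of this embedding step, the snippet below trains TransR with PyKEEN on a triples file; the path `triples.tsv`, the embedding dimension, and the epoch count are hypothetical placeholders rather than the tuned values from the paper, and the accessor for the trained embeddings can differ between PyKEEN versions.

```python
from pykeen.pipeline import pipeline
from pykeen.triples import TriplesFactory

# hypothetical input file: tab-separated (head, relation, tail) lines
tf = TriplesFactory.from_path("triples.tsv")
training, testing = tf.split([0.9, 0.1], random_state=0)  # 90:10 split, as in the paper

result = pipeline(
    model="TransR",                           # best performing model in the experiments
    training=training,
    testing=testing,
    model_kwargs=dict(embedding_dim=64),      # placeholder; tuned with Optuna in practice
    training_kwargs=dict(num_epochs=200),     # placeholder
    random_seed=0,
)

# entity embeddings for the downstream tasks (accessor may vary across versions)
entity_emb = result.model.entity_representations[0](indices=None).detach().cpu().numpy()
```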
Footnote 10: [https://pykeen.readthedocs.io](https://pykeen.readthedocs.io) Footnote 11: [https://optuna.org](https://optuna.org) #### 5.2.1. Link Prediction : Table 2 shows the results obtained for the link prediction task using knowledge graph embeddings and both evaluation methods. The triples were split into train and test sets in the ratio 90:10. The test size was chosen to be 10% because we wanted all entities present in the test set to also be found in the train set (for some smaller graphs, this way of splitting was not possible with a larger test set size). We included the metrics for both evaluation methods, and the training time for the threshold based method. TransR [(15)] was found to be the best model to learn knowledge graph embeddings; thus all results reported in table 2 refer to TransR results. Figure 3 shows the embedding visualizations, where each point in the figure represents an entity and the colour represents its class/label. The visualization was made using t-SNE [(25)], a dimensionality reduction technique generally used to visualize high dimensional data. Note: since we chose threshold based evaluation here, even for link prediction with GNNs, we used the average precision score to report the performance. The rank based metrics in table 2 are included just for reference. #### 5.2.2. Node Classification : Using the knowledge graph embeddings, we handled the task of node classification with the help of machine learning algorithms. Table 3 shows the accuracy and F1 scores (in %) obtained for the node classification task. We used a Support Vector Machine classifier and also Random Forest to perform the multi-class classification task. The test size was chosen to be 20% of the entire data, and the hyperparameters were tuned using 10-fold cross validation on the train data. #### 5.2.3. Triple Classification : The experiments for triple classification were performed for two cases. Case 1: as many negative triples were generated as there are positive triples; case 2: twice the number of negative triples were generated. This was done to test the robustness of the machine learning algorithm in handling the imbalance of positive and negative instances. In both cases, the test size was chosen to be 20% of the entire data and the hyperparameters were tuned using 10-fold cross validation on the train data. We used Logistic Regression and Support Vector Machine classifiers for this task, and the scores from the best model are shown in table 4 (we found the best model to be SVM). Figure 3. t-SNE plot of embeddings of different sub-graphs (from top to bottom: _AC_ building data, _Geonames_ geolocation data, _Meylan_ city data, _C3_ building data) #### 5.2.4. Discussion : The results from the node classification task (see table 3) and the triple classification task (see table 4) show that the knowledge graph embeddings were of good quality. Most of the datasets show an accuracy and F1 score above 80% in the node classification task, and above 90% in the triple classification task. The embeddings have captured the network structure and its properties, which resulted in good performance. The poor performance on some datasets is presumably due to the fact that the graphs are too small (e.g. _EPFL_), which makes it difficult to learn good quality embeddings, or that they have a highly imbalanced class distribution (e.g. _Garden_, see table 1). ### Graph Neural Networks Initial numerical representations for every node in the graph are important to build a graph neural network model.
Since we did not have any useful representations for our graph data, we experimented with different representation methods: graph centrality based methods, i.e. in-degree and coloring number, and learning based methods, i.e. knowledge graph embeddings, DeepWalk (He et al., 2017), and also random embeddings. In total, we ran the experiments using 5 different node representation methods. The different GNN models we used in the experiments include: GAT (Krizhevsky et al., 2014), GCN (He et al., 2017), ChebNet (Chen et al., 2016), GCN-ARMA (Chen et al., 2016), SGC (Kipf and Welling, 2017), GraphSAGE (Kipf and Welling, 2017) and R-GCN (Kipf and Welling, 2017). For each graph downstream task, different GNN models were tested; the factors behind picking the models include the general performance of the individual algorithm, the diversity of the set, and ease of implementation. In addition, we tested R-GCN (Kipf and Welling, 2017) to use the multi-relational aspect of the graph, which the other GNN methods do not account for. Because R-GCN generates its own embeddings, its results are shown separately from the other GNN methods. We wanted to perform graph analysis using GNNs both with and without considering the multi-relational aspect of the knowledge graph, so we have two sets of analyses. #### 5.3.1. Link Prediction : Here, the problem was formulated as a binary classification12 task; the results using GNN models are displayed in table 5 and the results using machine learning models like logistic regression and SVM are shown in table 6. Footnote 12: [https://docs.dgl.ai/en/0.6x/new-tutorial/4_link_predict.html](https://docs.dgl.ai/en/0.6x/new-tutorial/4_link_predict.html) The GNN models were built using the Deep Graph Library13 (DGL), and out of all the models we tested, GCN and ChebNet performed best. The positive edges were split into 80% train and 20% test sets, and an equal number of negative edges was generated for each set. Footnote 13: [https://www.dgl.ai](https://www.dgl.ai) Note: the best performing model and initial node representation are highlighted in the table for each graph dataset (see table 5). To model the multi-relational aspect, we used R-GCN and tackled the same link prediction task. The results of R-GCN are shown in table 6 (results for the _garden_ dataset are not available). \begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline \multirow{2}{*}{Dataset} & \multicolumn{3}{l|}{Rank-based} & \multicolumn{2}{l|}{Threshold-based} \\ \cline{2-6} & MRR & Hits@1 & Hits@10 & AP score & Time (Min) \\ \hline AC & 47.9 & 35.5 & 70.0 & 79.7 & 66.4 \\ Adream & 41.1 & 31.0 & 57.9 & 81.4 & 2.3 \\ C3 & 18.7 & 10.7 & 35.6 & 79.3 & 37.1 \\ EPFL & 55.3 & 50.0 & 100 & 97.6 & 1.1 \\ Garden & 43.1 & 17.5 & 91.6 & 61.9 & 2.5 \\ Geonames & 81.3 & 74.3 & 95.1 & 93.1 & 21.7 \\ Meylan & 27.7 & 21.7 & 36.8 & 81.4 & 33.6 \\ Poles & 54.8 & 52.4 & 57.9 & 100 & 102.2 \\ \hline \end{tabular} \end{table} Table 2.
Results of link prediction task using TransR method (rank based metrics are included only for reference) \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{Dataset} & \multicolumn{2}{l|}{SVM} & \multicolumn{2}{l|}{Random Forest} & \multicolumn{2}{l|}{R-GCN} \\ \cline{2-7} & Accuracy & F1 Score & Accuracy & F1 Score & Accuracy & F1 Score \\ \hline AC & 91.5 & 88.9 & 87.7 & 85.0 & 39.6 & 52.0 \\ Adream & 73.1 & 67.8 & 68.6 & 60.9 & 33.3 & 39.2 \\ C3 & 85.2 & 82.0 & 82.6 & 78.2 & 30.6 & 28.1 \\ EPFL & 78.6 & 69.1 & 78.6 & 69.1 & 57.1 & 62.3 \\ Garden & 61.7 & 56.3 & 70.0 & 64.7 & 33.3 & 46.6 \\ Geonames & 99.6 & 99.6 & 97.6 & 97.6 & 65.4 & 74.6 \\ Meylan & 89.6 & 88.7 & 85.6 & 84.0 & 50.9 & 60.9 \\ Poles & 100 & 100 & 100 & 100 & 61.4 & 62.1 \\ \hline \end{tabular} \end{table} Table 3. Results of node classification task using KGE methods and R-GCN #### 5.3.2. Node Classification : Here, we used PyTorch Geometric14 (PyG) to build the GNN models. Table 7 shows the results with the 2 best performing GNN models and different initial node representations. The data was split into train, validation and test sets (in the ratio 80:10:10). The hyperparameter tuning was done with the help of the Optuna Python library using the validation score, and the final metrics were calculated based on the predictions on the test set. The F1 score was chosen as the metric since most of the graph datasets have highly imbalanced classes. To model the same problem using the multi-relational information of edges, we used R-GCN with the same data; table 3 shows the results. Footnote 14: [https://pytorch-geometric.readthedocs.io](https://pytorch-geometric.readthedocs.io) #### 5.3.3. Discussion : Both DGL and PyG implement several graph neural network models. PyG focuses more on node classification and includes many examples for it. Hence, PyG was used for node classification and DGL for link prediction. In the link prediction task, we can see that the GNN models (and also R-GCN) are superior to standard machine learning models, as all graph datasets (except one, i.e. the _Poles_ data) show better performance with GNN models (see tables 5 and 6). We can infer that the GNNs' message passing technique was able to capture the network information well, and the majority of the datasets perform better when the initial node representations are learning based (for link prediction this observation is not well pronounced, but for node classification, learning based features show better performance by a huge margin). This can be due to the fact that message passing further improves the quality of the learning based features (which have already captured some network properties). \begin{table} \begin{tabular}{|l|l|l|l|} \hline Dataset & Logistic Regression & SVM & R-GCN \\ \hline AC & 52.2 & 80.7 & 90.4 \\ Adream & 56.9 & 89.2 & 92.1 \\ C3 & 68.9 & 96.2 & 97.4 \\ EPFL & 93.0 & 96.4 & 100 \\ Garden & 62.2 & 74.0 & NA \\ Geonames & 81.6 & 93.9 & 96.7 \\ Meylan & 59.9 & 95.7 & 97.0 \\ Poles & 68.6 & 97.1 & 87.4 \\ \hline \end{tabular} \end{table} Table 6.
Average precision scores for different datasets in link prediction task using standard machine learning methods and R-GCN \begin{table} \begin{tabular}{|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Models} & \multicolumn{5}{c|}{Initial Node Representations} \\ \cline{3-7} & & In-degree & Coloring Number & KG Embedding & DeepWalk & Random values \\ \hline \multirow{2}{*}{AC} & GCN & 95.6 & 88.0 & 95.9 & **97.0** & 94.1 \\ & ChebNet & 92.2 & 88.6 & 90.2 & 91.2 & 87.2 \\ \hline \multirow{2}{*}{Adream} & GCN & 94.2 & 89.1 & 95.8 & 93.9 & 93.1 \\ & ChebNet & 94.4 & 91.0 & 93.9 & 94.6 & 89.9 \\ \hline \multirow{2}{*}{C3} & GCN & 97.3 & 96.5 & 97.4 & 97.8 & 96.3 \\ & ChebNet & 96.4 & 96.5 & 98.0 & **98.2** & 95.7 \\ \hline \multirow{2}{*}{EPFL} & GCN & 98.3 & 98.3 & 89.8 & 98.9 & 94.2 \\ & ChebNet & **100** & 99.9 & 98.9 & **100** & 99.9 \\ \hline \multirow{2}{*}{Garden} & GCN & 88.4 & 77.7 & **93.1** & 91.3 & 90.9 \\ & ChebNet & 91.6 & 78.8 & 90.5 & 87.5 & 88.8 \\ \hline \multirow{2}{*}{Geonames} & GCN & 97.6 & 98.5 & 97.7 & 97.9 & 97.5 \\ & ChebNet & 95.2 & 94.8 & 97.1 & 93.4 & 97.6 \\ \hline \multirow{2}{*}{Meylan} & GCN & 96.2 & 61.7 & 98.0 & 96.1 & 95.2 \\ & ChebNet & 96.0 & 96.3 & 98.2 & **98.3** & 95.8 \\ \hline \multirow{2}{*}{Poles} & GCN & 82.7 & 76.0 & 84.9 & 83.3 & 81.1 \\ & ChebNet & 80.9 & 81.3 & 83.6 & 84.3 & 81.7 \\ \hline \end{tabular} \end{table} Table 5. Average precision scores for the link prediction task using GNN models with different initial node representations (best model and representation per dataset highlighted) Table 4. Results for triple classification task (using SVM) R-GCN was able to use the additional information of edge types and performed well on the link prediction task. However, in the node classification task, R-GCN does not perform as well (see tables 6 and 3). ## 6. Conclusion In this paper, we studied the application of knowledge graph embeddings and graph neural networks in the context of a Web of Things platform, _Thing in the future_. We have reviewed, selected and implemented relevant approaches that could respond to the objective of validating existing knowledge as well as inferring new knowledge in this type of knowledge graph. We built and evaluated different graph downstream tasks like node classification, link prediction and triple classification, and all these tasks can be used to build several real-life use-cases. The experiments show that graph neural networks can further enhance the performance, notably for link prediction. One observation about the heterogeneous nature of the knowledge graph and our experiments is that, by segmenting experiments per sub-graph, some types of graphs (e.g. building data) tend to be smaller in size. This impacts the performance of the algorithms negatively, as they require more data to train a good model; thus the model selection could indirectly depend on the nature of the dataset. The possibility of generalizing our results depends on future research and experiments on a wide, diverse range of sub-graphs.
Until then, we argue that our current research stands as a good precursor to future investigations. We believe these preliminary results are encouraging, provide good insight into building predictive machine learning models on data relating to the WoT domain, and help in building use-cases for information retrieval; however, we need to investigate further with bigger datasets, and by combining datasets from different domains. ## Acknowledgement We would like to acknowledge that the author, Rohith Teja Mittakola, was affiliated with Orange Labs, Cesson-Sevigne during the research for this paper.
2302.08090
QTrojan: A Circuit Backdoor Against Quantum Neural Networks
We propose a circuit-level backdoor attack, \textit{QTrojan}, against Quantum Neural Networks (QNNs) in this paper. QTrojan is implemented by a few quantum gates inserted into the variational quantum circuit of the victim QNN. QTrojan is much stealthier than a prior Data-Poisoning-based Backdoor Attack (DPBA), since it does not embed any trigger in the inputs of the victim QNN or require access to the original training datasets. Compared to a DPBA, QTrojan improves the clean data accuracy by 21\% and the attack success rate by 19.9\%.
Cheng Chu, Lei Jiang, Martin Swany, Fan Chen
2023-02-16T05:06:10Z
http://arxiv.org/abs/2302.08090v1
# QTrojan: A Circuit Backdoor Against Quantum Neural Networks ###### Abstract We propose a circuit-level backdoor attack, _QTrojan_, against Quantum Neural Networks (QNNs) in this paper. QTrojan is implemented by a few quantum gates inserted into the variational quantum circuit of the victim QNN. QTrojan is much stealthier than a prior Data-Poisoning-based Backdoor Attack (DPBA), since it does not embed any trigger in the inputs of the victim QNN or require access to the original training datasets. Compared to a DPBA, QTrojan improves the clean data accuracy by 21% and the attack success rate by 19.9%. Cheng Chu, Lei Jiang, Martin Swany, Fan Chen (Indiana University) Keywords: Quantum Neural Network, Variational Quantum Circuit, Quantum Backdoor, Backdoor Attack ## 1 Introduction Quantum Neural Networks (QNNs) shine in solving a wide variety of problems including object recognition [1, 2], natural language processing [3, 4], and financial analysis [5]. The success of QNNs motivates adversaries to transplant malicious attacks from classical neural networks to QNNs. A _backdoor attack_ is one of the most dangerous forms of malware abusing classical neural networks [6, 7]. In a backdoor attack, a backdoor is injected into the network model, such that the model behaves normally when the backdoor is disabled, yet induces a predefined behavior when the backdoor is activated. Although conventional Data-Poisoning-based Backdoor Attacks (DPBAs) [6, 7] are designed for classical neural networks, it is difficult to perform a DPBA against QNNs. First, a typical DPBA [6] embeds a nontrivial-size trigger (e.g., \(3\%\sim 4\%\) of the input size) into the inputs of a victim classical neural network. However, the input dimension of state-of-the-art QNNs [2, 3, 4, 5, 8] is small (e.g., 4\(\sim\)16 qubits). Embedding even a 1-qubit trigger into the inputs of a victim QNN makes DPBAs less stealthy. Second, a DPBA has to access the original training dataset, attach a trigger to some data samples in the dataset, and train the victim QNN to learn a predefined behavior. But the original training dataset and a long training process may not be available in real-world attacks. Third, after the backdoor of a DPBA is implanted, the DPBA cannot work if the victim QNN is retrained with the users' new clean datasets. The new clean datasets force the victim QNN to forget the predefined behavior. Fourth, a DPBA can achieve two conflicting goals, high clean data accuracy (i.e., accuracy when the backdoor is disabled) and high attack success rate (i.e., the ratio of predictions assigned to the target class when the backdoor is activated), simultaneously on a classical neural network [6]. Unfortunately, we find a DPBA obtains either high clean data accuracy or high attack success rate, but not both, on a QNN, due to its shallow network architecture on a Noisy Intermediate-Scale Quantum (NISQ) computer. To achieve high accuracy, recent work [9, 10] designs QNN circuits (aka ansatzes) by automated searches such as deep reinforcement learning. Unfortunately, most auto-designed QNN circuits are inscrutable, since they contain sophisticated quantum circuit components which are often hard for humans to inspect. Even randomly-wired quantum gates [10] can obtain competitive accuracy on standard QNN benchmarks. This provides attackers an opportunity to insert malicious circuit-level backdoors. However, no prior work considers a circuit backdoor against QNNs. In this paper, we propose a circuit-level backdoor attack, _QTrojan_.
QTrojan adds a few quantum gates as the backdoor around the encoding layer of a victim QNN. QTrojan uses several lines in a server-specific configuration file as the trigger. When QTrojan is disabled, the victim QNN achieves the same accuracy as its clean counterpart. However, after QTrojan is enabled, the victim QNN always predicts a predefined target class, regardless of the inputs. Compared to a prior DPBA, QTrojan improves the clean data accuracy by 21% and the attack success rate by 19.9%. ## 2 Background ### Quantum Cloud Computing Due to the high cost of NISQ computers, average users typically run QNNs via quantum cloud services, as shown in Figure 1. A user designs a QNN circuit, trains it, compiles the trained circuit and input data into quantum analog pulses, and sends the pulse sequence to a cloud NISQ server. The server applies the pulse sequence to qubits, and returns the result to the user. A prediction result is a probability vector, where the predicted class is computed by _softmax_. ### Variational Quantum Circuit In a classical neural network [6], the first multiple layers generate an embedding for an input, e.g., a sentence or an image, while the last layer maps the embedding to a probability vector. On the contrary, in a QNN [3, 4, 8], these functions
2307.09550
The semantic landscape paradigm for neural networks
Deep neural networks exhibit a fascinating spectrum of phenomena ranging from predictable scaling laws to the unpredictable emergence of new capabilities as a function of training time, dataset size and network size. Analysis of these phenomena has revealed the existence of concepts and algorithms encoded within the learned representations of these networks. While significant strides have been made in explaining observed phenomena separately, a unified framework for understanding, dissecting, and predicting the performance of neural networks is lacking. Here, we introduce the semantic landscape paradigm, a conceptual and mathematical framework that describes the training dynamics of neural networks as trajectories on a graph whose nodes correspond to emergent algorithms that are intrinsic to the learned representations of the networks. This abstraction enables us to describe a wide range of neural network phenomena in terms of well studied problems in statistical physics. Specifically, we show that grokking and emergence with scale are associated with percolation phenomena, and neural scaling laws are explainable in terms of the statistics of random walks on graphs. Finally, we discuss how the semantic landscape paradigm complements existing theoretical and practical approaches aimed at understanding and interpreting deep neural networks.
Shreyas Gokhale
2023-07-18T18:48:54Z
http://arxiv.org/abs/2307.09550v1
# The semantic landscape paradigm for neural networks ###### Abstract Deep neural networks exhibit a fascinating spectrum of phenomena ranging from predictable scaling laws to the unpredictable emergence of new capabilities as a function of training time, dataset size and network size. Analysis of these phenomena has revealed the existence of concepts and algorithms encoded within the learned representations of these networks. While significant strides have been made in explaining observed phenomena separately, a unified framework for understanding, dissecting, and predicting the performance of neural networks is lacking. Here, we introduce the semantic landscape paradigm, a conceptual and mathematical framework that describes the training dynamics of neural networks as trajectories on a graph whose nodes correspond to emergent algorithms that are intrinsic to the learned representations of the networks. This abstraction enables us to describe a wide range of neural network phenomena in terms of well studied problems in statistical physics. Specifically, we show that grokking and emergence with scale are associated with percolation phenomena, and neural scaling laws are explainable in terms of the statistics of random walks on graphs. Finally, we discuss how the semantic landscape paradigm complements existing theoretical and practical approaches aimed at understanding and interpreting deep neural networks. Keywords: Artificial Intelligence, Machine Learning, Statistical Physics ## 1 Introduction We live in an age in which the readers of this manuscript can never quite be sure whether it was written by a human or a large language model1. This sobering fact at once highlights the stunning technological advances in artificial intelligence (AI) in recent years, as well as the urgent need to understand the inner workings of the deep neural networks that made these advances possible. Understanding neural networks requires us to know _what_ the networks are capable of doing, _how_ their performance depends on various factors, and finally _why_ they succeed or fail in different circumstances. Over the last few years, researchers have begun to answer these questions through an impressive set of experimental results as well as theoretical ideas. Several studies have reported empirical observations of power law scaling of the test loss as a function of network size (number of parameters), dataset size (number of training examples), compute budget as well as training time (Hestness et al. (2017), Rosenfeld et al. (2019), Kaplan et al. (2020), Gordon et al. (2021), Zhai et al. (2022), Hoffmann et al. (2022)). Subsequently, a number of distinct theoretical proposals have been put forward to explain the observed power laws (Bahri et al. (2021), Hutter (2021), Sharma and Kaplan (2022), Maloney et al. (2022), Michaud et al. (2023)). Footnote 1: For the record, this manuscript has been written in its entirety by a human named Shreyas Gokhale. In addition to these ubiquitous power laws, which provide a precise account of the improvement of model performance with scale, a number of studies have also reported the sudden emergence of new capabilities in neural networks. New capabilities can emerge either as a function of time, a phenomenon known as grokking (Power et al. (2022), Thilak et al. (2022), Liu et al. (2022)), or as a function of the network size, which we will refer to in this paper as 'emergence with scale' (Wei et al. (2022), Ganguli et al. (2022)).
Despite their prevalence, the nature of emergent phenomena in deep neural networks remains far from understood. While some works support the view that grokking is associated with a phase transition (Zunkovic and Ilievski (2022); Liu et al. (2022b)), others have contested that sudden emergence is only an illusion stemming from the metric used for quantifying loss, and that the network's performance is in fact evolving gradually (Schaeffer et al. (2023)). Furthermore, networks can make hidden progress before the clear manifestation of emergence, which remains undetected in loss curves (Barak et al. (2022)). A promising approach towards disentangling these seemingly contradictory observations about emergence is to look under the hood of the network, and determine what the networks in question are actually learning to do. This line of inquiry has led to some truly remarkable discoveries on the nature of learned representations in neural networks. Using network dissection (Bau et al. (2017)), Bau et al. (2020) showed that individual neurons in convolutional neural networks (CNNs) can represent concepts associated with physical objects. By reverse engineering neural networks (Cammarata et al. (2020)), a number of studies have shown that transformer language models can learn a variety of algorithms for tasks including text prediction (Elhage et al. (2021); Olsson et al. (2022)), modular addition (Nanda et al. (2023); Zhong et al. (2023)), and group operations (Chughtai et al. (2023)). Collectively, these findings establish the fact that deep neural networks are capable of learning well-defined computations that confer enhanced prediction capabilities. Based on this idea, Michaud et al. (2023) have recently proposed the quantization model to explain scaling as well as emergence in language models. Owing to the preponderance of performance metric data and the growing literature on mechanistic insights, several independent theoretical approaches aimed at explaining various aspects of neural network phenomenology have been proposed, as discussed in preceding paragraphs. While all of these approaches are immensely valuable, they follow from distinct conceptual frameworks. This makes it difficult to understand how different approaches are related to each other, and if and when their domains of applicability intersect. A unified conceptual framework that integrates quantitative neural network performance metrics with the qualitative demands of interpretability is highly desirable, as it will provide a common language for theoretical discourse, and direct further developments in the field. In the present work, we introduce such a unified conceptual framework by defining a semantic landscape in the space spanned by internal algorithms learned by neural networks, which we term heuristic models. We then proceed to demonstrate how several distinct qualitative and quantitative aspects of neural network phenomenology find a natural and consistent description in terms of trajectories on the semantic landscape. The rest of the paper is organized as follows: In Section \(2\), we provide mathematical definitions for key concepts associated with the semantic landscape. In Section \(3\), we illustrate how grokking and emergence with scale can be described in terms of first passage and percolation phenomena on the semantic landscape. In Section \(4\), we derive scaling laws for test loss as a function of network size, dataset size, and training time, for a particular choice of semantic landscape topography.
Finally, in Section \(5\), we discuss how the semantic landscape paradigm connects to existing approaches, how it can be developed further, and what its implications are for future research. ## 2 The semantic landscape paradigm ### From the loss landscape to the semantic landscape At its core, neural network learning is an optimization problem with many interacting degrees of freedom. This fact has long attracted the attention of statistical physicists, as it enables them to apply their formidable repertoire of theoretical tools to a challenging problem of tremendous practical importance (Mehta et al. (2019)). In particular, the concept of a loss landscape defined by the network loss as a function of neural weights plays a central role in our understanding of neural networks, owing to its similarities with well-studied objects such as energy landscapes in physics (Debenedetti and Stillinger (2001)) and chemistry (Onuchic et al. (1997)), and fitness landscapes in evolutionary biology (De Visser and Krug (2014)). For instance, concepts such as scaling and the renormalization group (Geiger et al. (2020); Bahri et al. (2021); Roberts et al. (2022)), glasses (Baity-Jesi et al. (2018)), and jamming (Spigler et al. (2019)) have provided significant insights into the nature of training dynamics. The loss landscape is immensely useful, as it enables us to understand learning dynamics by analogy with well-understood physical phenomena that we have much better intuition for. However, recent studies on mechanistic interpretability (Nanda and Lieberum (2022); Zhong et al. (2023)) have brought to light a significant gap in our understanding of how neural networks function. Specifically, we do not yet understand how dynamics on the loss landscape can produce outputs that look as if they follow from a sequence of interpretable logical steps. To bridge this gap, we need to develop a coarse-grained description that can on one hand retain connections with loss landscapes, and on the other hand, take direct cognizance of the emergent algorithms that are being discovered in a growing number of studies. The purpose of this paper is to introduce such a description in terms of the semantic landscape, and demonstrate that it can consistently explain a number of disparate neural network phenomena within the same conceptual framework. ### Heuristic models Understanding how neural networks function is synonymous with understanding how information processing occurs between their input and output layers. Mechanistic interpretability studies strongly suggest that this information processing is analogous to the execution of emergent algorithms encoded within the learned representations of neural networks. The semantic landscape paradigm formalizes this notion with the help of the following conjecture: _For every neural network, there exists at least one function that maps subsets of the set of network parameters to well formed formulas within a formal language._ A formal language \(L\) is defined as a set of words formed using symbols from an alphabet \(\Sigma\), according to a given set of rules. A well-formed formula \(f\) within \(L\) is defined as a finite sequence of symbols belonging to \(\Sigma\) that has been constructed using the rules of \(L\). Thus, the formal language \(L\) can be identified with the set of all well-formed formulas in \(L\).
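As a toy illustration of these definitions (not from the paper itself): take a four-symbol alphabet and a small grammar with negation and conjunction; the formal language \(L\) is then literally the set of strings that the well-formedness checker accepts.

```python
from itertools import product

ALPHABET = "pq~&"   # toy alphabet: two atoms, negation, conjunction

def well_formed(s: str) -> bool:
    # toy grammar: f ::= "p" | "q" | "~" f | f "&" f  (parentheses omitted)
    if s in ("p", "q"):
        return True
    if s.startswith("~") and well_formed(s[1:]):
        return True
    # try every possible split around a conjunction symbol
    return any(c == "&" and well_formed(s[:i]) and well_formed(s[i + 1:])
               for i, c in enumerate(s))

# L = the set of all well formed formulas (here: up to length 4)
L = ["".join(w) for n in range(1, 5)
     for w in product(ALPHABET, repeat=n) if well_formed("".join(w))]
print(L)   # ['p', 'q', '~p', '~q', ...]
```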
We also note that the 'formal language' discussed here is associated with internal representations of the neural network, and is not related in any way to the programming language in which the neural network code is written. Mathematically, the conjecture above can be expressed as follows: **C1**: Given a neural network with a set of parameters \(W=\{w_{i}|i\in\mathbb{N}^{+},i\leq N\}\), and a formal language \(L\) containing the set of well formed formulas \(\mathcal{S}_{F}=\{f_{i}|i\in\mathbb{N}^{+},i\leq N_{L}\}\), \(\exists\mathcal{G}\neq\emptyset\), such that \(\forall G_{i}\in\mathcal{G}\), \(G_{i}:W^{m}\mapsto\mathcal{S}_{F}\) for some \(m\in\mathbb{N}^{+}\). Why would such functions \(G_{i}\) that map neural weights to well-formed formulas be of interest? The reason is that it is always possible to uniquely encode well-formed formulas in any language as numbers using schemes such as Godel numbering (Godel (1931)). The functions \(G_{i}\) would therefore ensure a mathematical equivalence between the state of the neural network as described by a set of neural weights, and the state of the neural network as described by a set of well-formed formulas in a formal language. Since algorithms are by definition, sets of well-formed formulas within a given formal language, **C1** implies that the collective neural activity within all the hidden layers for a fixed set of neural weights is mathematically equivalent to the execution of an algorithm in some formal language \(L\). The conjecture **C1** is strongly supported by empirical evidence across various models and datasets in the form of concept neurons (Bau et al. (2020)), induction heads (Elhage et al. (2021)), learned algorithms for mathematical operations (Nanda et al. (2023), Chughtai et al. (2023), Zhong et al. (2023)), and auto-discovered quanta (Michaud et al. (2023)). Indeed, these studies have also shown that beyond the formulation of logical propositions, neural networks are capable of integrating these propositions into emergent heuristic algorithms that enable the network to generate correct outputs for a majority of inputs. We use the term 'heuristic' to emphasize the fact that the emergence of a logical procedure for determining the output need not imply an innate knowledge or understanding of underlying mathematical or factual truths on the network's part. Within the semantic landscape paradigm, we formalize the notion of emergence of such internal heuristic algorithms by defining the concept of a heuristic model as follows: **Def. 1**: A heuristic model \(\mathcal{M}_{H}\) is an algorithm in \(L\) composed of a finite sequence of well formed formulas \(f_{i}\in\mathcal{S}_{F}\), that generates a unique conditional probability distribution \(\mathcal{P}_{H}(y|x)\) over all possible outputs \(y\), for every possible input \(x\). We refer to output distributions rather than unique outputs in **Def 1**, as for some tasks such as natural language prediction, there may not be a unique 'correct' answer, and the range of appropriate responses will typically derive from some distribution over the alphabet of that natural language. The hierarchy of steps from weights to well formed formulas to heuristic models detailed above, is shown schematically in Fig. 1a. Several comments about the notion of heuristic models are in order. First, while heuristic models are collections of well formed formulas in \(L\), this does not necessarily imply that they will make correct predictions for inputs, or result in low test loss. 
In fact, in general, most heuristic models are likely to be _bad_ algorithms that result in _high_ test loss. As training progresses, the network keeps improving its performance by discovering increasingly better heuristic models. A successfully trained network is one that has "found" a heuristic model that is capable of generalizing over a large fraction of possible test data. For instance, in the example of modular addition studied by Nanda et al. (2023), the network initially uses a bad heuristic model that only memorizes training data, but later finds a clever mathematical algorithm that can generalize well across all inputs. Another important point to note is that a network of a fixed size will in general not be able to formulate all possible heuristic models. This follows directly from **C1**, because as we increase the number of parameters \(N\), the number of ways in which weights can be combined into well formed formulas also increases. In colloquial terms, increasing the network size increases the "vocabulary" of the network. We will see in Section 3 that this increased capacity for formulating algorithms with network size is crucial for the emergence of new capabilities with scale. ### Fallibility and the semantic landscape Effectively, the foregoing discussion implies that the training dynamics of a neural network can be viewed as dynamics on a graph whose vertices are heuristic models. At a given point in training, the network occupies a single node on the graph of heuristic models. As training progresses and neural weights are adjusted, the network can adopt a different heuristic model, which corresponds to a transition from one node to another. The transition rates to move from one heuristic model to another are governed by a number of factors including the network size and dataset size, data quality, and hyperparameters associated with the network architecture and training algorithm. In principle, one can adopt a bottom-up approach to compute these transition rates by systematically coarse-graining over the network parameters, for specific choices of network architecture and training algorithm. At present, however, such a task is prohibitively difficult to accomplish. Instead, in this introductory version of the semantic landscape paradigm, we implement a top-down approach by making simplified choices for transition rates that are motivated by, and consistent with, network performance measurements. In order to make such choices, we need to quantify precisely how _good_ or _bad_ a given heuristic model is. We do so by defining the fallibility \(\mathcal{F}\) of a given heuristic model \(\mathcal{M}_{H}\) as \[\mathcal{F}(\mathcal{M}_{H})=\sum_{x\in\mathcal{I}}D_{KL}\big{(}\mathcal{P}_{ GT}(y|x)\parallel\mathcal{P}_{H}(y|x)\big{)} \tag{1}\] where \(\mathcal{P}_{GT}(y|x)\) is the ground truth probability distribution over all possible outputs \(y\) for input \(x\in\mathcal{I}\), and \(D_{KL}(P\parallel Q)\) is the Kullback-Leibler divergence of \(P\) from \(Q\). From this definition, it is clear that \(\mathcal{F}(\mathcal{M}_{H})\geq 0\)\(\forall\mathcal{M}_{H}\), and \(\mathcal{F}(\mathcal{M}_{H})=0\) if and only if the output distribution generated by \(\mathcal{M}_{H}\) is indistinguishable from the ground truth for every \(x\in\mathcal{I}\). Thus, the lower the value of \(\mathcal{F}\), the better the heuristic model is. It is important to note that the fallibility of a model is defined over _all possible_ inputs, and not just those in the training and test datasets. 
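A minimal numerical sketch of Eqn. (1), with toy distributions rather than outputs of any trained network: fallibility simply accumulates, over every possible input, the KL divergence of the heuristic model's output distribution from the ground truth.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    # D_KL(P || Q) for discrete distributions given as arrays
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def fallibility(P_gt, P_h):
    # Eqn (1): sum of D_KL(P_GT(y|x) || P_H(y|x)) over all inputs x
    return sum(kl(P_gt[x], P_h[x]) for x in P_gt)

# toy task: two inputs, three possible outputs
P_gt = {"x1": [1.0, 0.0, 0.0], "x2": [0.0, 0.5, 0.5]}
P_h  = {"x1": [0.8, 0.1, 0.1], "x2": [0.1, 0.45, 0.45]}
print(fallibility(P_gt, P_h))   # > 0; equals 0 only if the model matches P_GT everywhere
```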
It is therefore distinct from the measured loss. However, if the training data quality is good, the measured test loss will be lower for models of low fallibility compared to those of high fallibility. This can be seen explicitly by considering the set of all possible inputs \(\mathcal{I}\) as a union of the training dataset \(\mathcal{I}_{\mathrm{train}}\), the test dataset \(\mathcal{I}_{\mathrm{test}}\) and the remaining unseen data \(\mathcal{I}_{\mathrm{unseen}}\). Here, \(\mathcal{I}_{\mathrm{unseen}}\) refers to the set of inputs that the neural network has not seen either during training or during testing. \[\mathcal{F}=\left(\sum_{x\in\mathcal{I}_{\mathrm{train}}}+\sum_{x\in\mathcal{I}_{\mathrm{test}}}+\sum_{x\in\mathcal{I}_{\mathrm{unseen}}}\right)D_{KL}\big{(}\mathcal{P}_{GT}(y|x)\parallel\mathcal{P}_{H}(y|x)\big{)}=\mathcal{F}_{\mathrm{train}}+\mathcal{F}_{\mathrm{test}}+\mathcal{F}_{\mathrm{unseen}} \tag{2}\] Thus, if the network has simply memorized the training dataset, it will have low \(\mathcal{F}_{\mathrm{train}}\), and hence, low measured training loss, but will generally have high \(\mathcal{F}_{\mathrm{test}}\), and therefore high measured test loss. By contrast, a network that has successfully learned to generalize over all inputs will yield low values for \(\mathcal{F}_{\mathrm{train}}\), \(\mathcal{F}_{\mathrm{test}}\), as well as \(\mathcal{F}_{\mathrm{unseen}}\), and will therefore have low test loss. Defined in this manner, the fallibility of a heuristic model is analogous to the energy of an excited state, measured with respect to the energy of the ground state. In analogy with energy landscapes defined over the space of configurations for glasses (Debenedetti and Stillinger (2001)) and proteins (Onuchic et al. (1997)), or fitness landscapes defined over genomes (De Visser and Krug (2014)), we can define the semantic landscape of neural networks in terms of the variation of fallibility \(\mathcal{F}\) in the space of well formed formulas \(f_{i}\in\mathcal{S}_{F}\), such that each "configuration" \(\{f_{i}\}\) corresponds to a particular heuristic model \(\mathcal{M}_{H}\) (Fig. 1b). Figure 1: **Constructing the semantic landscape:** (a) Schematic illustration of the coarse-graining from neural weights to well formed formulas, and their subsequent combination into heuristic models, as proposed in conjecture 1. (b) Schematic of the semantic landscape defined by the variation of fallibility \(\mathcal{F}\) over heuristic models in proposition space. At a specific point in training, neural weights attain specific values, and the network occupies a single node on the graph of heuristic models. At a later time, the neural weights change, enabling the network to occupy a different node on the graph of heuristic models. An immediate corollary of this definition is that the neural network can potentially get trapped in local minima of the semantic landscape, which prevent the network from improving its performance. This situation implies that small changes to the heuristic model adopted by the network will necessarily lead to worse predictions, and therefore higher loss. In order to make accurate predictions over a majority of inputs, such a network would have to make large changes to its heuristic model. Thus, improvement in performance is only possible if the network is able to cross certain _semantic barriers_. The notion of semantic barriers motivates a natural choice for transition rates, analogous to thermally activated hopping across an energy barrier.
We therefore assume that the rate to transition from model \(\mathcal{M}_{i}\) to \(\mathcal{M}_{j}\) assumes an Arrhenius-like form \(R_{ij}=R_{ij}^{0}\mathrm{exp}\left[-\beta(\mathcal{F}_{j}-\mathcal{F}_{i})\right]\). These transition rates in turn imply an Arrhenius-like law for the waiting time \(\tau_{ij}\) for the transition from \(\mathcal{M}_{i}\) to \(\mathcal{M}_{j}\), which is given by \[\tau_{ij}=\tau_{ij}^{0}e^{\beta(\mathcal{F}_{j}-\mathcal{F}_{i})} \tag{3}\] The constants \(\beta\) and \(\tau_{ij}^{0}\) can depend on the dataset size \(D\), number of parameters \(N\), as well as hyperparameters of the neural network, but the fallibilities themselves do not. In the interest of parsimony, we will assume throughout the rest of the paper that \(\beta\) is independent of \(D\) and \(N\), and only \(\tau_{ij}^{0}\) varies with these quantities. In the next two sections, we will show that even under these relatively mild assumptions, the semantic landscape paradigm can account for several empirical results on scaling and emergence in deep neural networks. ## 3 Explaining emergent phenomena using semantic landscapes ### Grokking The term 'grokking' refers to a phenomenon in which neural networks exhibit delayed generalization, i.e. the validation accuracy remains low during training long after the training accuracy reaches 100%, before suddenly rising at late times. Grokking was first observed by Power et al. (2022) on small algorithmic datasets and has since been observed in a wide range of contexts (Liu et al. (2022); Murty et al. (2023)). A number of explanations for grokking have been proposed, based on the behavior of weight norms during training and testing. These include the slingshot mechanism (Thilak et al. (2022)), the LU model (Liu et al. (2022)), and competition between sparse and dense subnetworks (Merrill et al. (2023)). Grokking has also been described as a phase transition (Zunkovic and Ilievski (2022); Liu et al. (2022)) within the framework of representation learning. Nanda et al. (2023) have given a fascinating account of the sequence of learning stages that consists of memorization, circuit formation, and cleanup. Within the semantic landscape paradigm, whether a given network will exhibit grokking or not depends on the topography of the semantic landscape, which in turn depends on the task that the network is required to perform. If there are relatively few ways of performing the task correctly, and a large number of ways of performing it incorrectly, the semantic landscape will be characterized by one, or a few, very deep minima, surrounded by a sea of relatively shallow ones corresponding to high fallibility, and separated from them by semantic barriers. This type of semantic landscape topography is most likely to result in observable grokking. This is because starting from any of the high fallibility models, the network must be able to locate one of the few solutions that lead to low fallibility (and hence low test loss), which would generally take much longer than it takes to memorize the training data set. This follows from the intuition that memorization is the easiest heuristic model to adopt, as it requires only storage and retrieval of information with no additional processing. For example, if a neural network is tasked with classifying dogs and pigeons, memorizing the training images would be easier than developing heuristics that involve formulating meaningful concepts such as "wings" or "legs" by extracting features from those images.
This argument provides a simple intuitive explanation for why grokking is more likely to be observed on algorithmic datasets, and is supported by the finding that grokking on realistic datasets, when present, is weaker than on algorithmic ones (Liu et al. (2022)). Furthermore, since finding solutions that generalize well involves crossing semantic barriers, the semantic landscape paradigm predicts that the test loss during grokking should transiently increase with training steps before decreasing again, which has indeed been observed in studies of grokking (Power et al. (2022); Nanda et al. (2023)). The amount by which loss transiently increases contains information about the height of semantic barriers, and is likely to play a crucial role in future work aimed at connecting network weight norm dynamics to the heuristic models discussed here. An important observation regarding grokking is the presence of a critical training dataset size (Power et al. (2022)) below which the network is unable to generalize regardless of how long the training phase lasts. Can the semantic landscape paradigm account for the presence of such a critical dataset size? To answer this question, we need to understand how training data affect dynamics on the semantic landscape. To begin with, the presence of training examples is what enables the network to hop between heuristic models in the first place. Supplying more data enables the network to access more models. Thus, the configuration space of heuristic models is a graph whose vertices correspond to heuristic models, and whose connectivity is determined by the training data. If multiple edges are present between two models, we can interpret this as an increased propensity for transitions between those two models. Conversely, if no edges are present between two models, it is impossible to transition between them. In the simplest approximation, we can think of the dataset size \(D\) as being proportional to the average degree of the graph of heuristic models. Let us assume that the network has adopted the heuristic model \(\mathcal{M}_{M}\) that corresponds to memorization of the training dataset. For sufficiently large \(D\), there is always a continuous path that connects \(\mathcal{M}_{M}\) to the model corresponding to the generalization solution \(\mathcal{M}_{G}\). However, as we reduce the dataset size, edges between models are progressively removed until we reach a critical dataset size \(D_{c}\) below which the graph becomes disconnected, and it is no longer possible to transition from \(\mathcal{M}_{M}\) to \(\mathcal{M}_{G}\) regardless of how long we run the training (Fig. 2). This implies that the model is incapable of generalization below a critical dataset size, as observed in previous work (Power et al. (2022); Liu et al. (2022)). Crucially, this simple argument implies that the phase transition associated with grokking is a _percolation transition_ on the graph of heuristic models. The semantic landscape paradigm also successfully predicts the decrease of grokking time with dataset size above the critical threshold. Since the number of edges of the graph controls the probability of transitioning between nodes, the grokking rate \(\lambda_{g}\) is proportional to the "excess degree" of the graph above the percolation threshold, i.e. \(\lambda_{g}\propto(D-D_{c})\), and therefore vanishes at \(D=D_{c}\). Correspondingly, the grokking time \(t_{g}\propto 1/(D-D_{c})\).
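This percolation picture can be illustrated with a toy simulation (all numbers below are hypothetical, not from the paper): treat heuristic models as nodes of an Erdős–Rényi random graph whose mean degree stands in for \(D\), and ask whether a path from \(\mathcal{M}_{M}\) to \(\mathcal{M}_{G}\) exists.

```python
import numpy as np
import networkx as nx

n_models = 200                       # toy number of heuristic models
memorize, generalize = 0, n_models - 1
rng = np.random.default_rng(0)

def reachable_fraction(mean_degree, trials=200):
    # fraction of random graphs in which M_M and M_G are connected;
    # the mean degree of the graph stands in for the dataset size D
    p = mean_degree / (n_models - 1)
    hits = sum(
        nx.has_path(
            nx.fast_gnp_random_graph(n_models, p, seed=int(rng.integers(1 << 31))),
            memorize, generalize)
        for _ in range(trials)
    )
    return hits / trials

for d in (0.5, 1.0, 1.5, 2.0, 4.0):
    print(f"mean degree {d}: M_G reachable in {reachable_fraction(d):.2f} of trials")
```

Below the threshold (mean degree around 1 for Erdős–Rényi graphs) the generalizing model is essentially never reachable; above it, reachability grows with the excess degree, in line with the grokking-time scaling just derived.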
Quite remarkably, this scaling argument agrees with theoretical predictions as well as experimental observations in (Liu et al. (2022)), suggesting a correspondence between the two frameworks. ### Emergence of new capabilities with scale While grokking refers to the emergence of new capabilities in time, several studies have also reported the sudden emergence of new capabilities with increasing model size, i.e. the number of model parameters \(N\) (Brown et al. (2020); Rae et al. (2021); Austin et al. (2021); Wei et al. (2022); Michaud et al. (2023)). Like grokking, the unpredictable nature of emergence with scale simultaneously makes it one of the most challenging, as well as the most crucial problems in AI that need to be solved in the near future (Ganguli et al. (2022)). Thus far, relatively few studies have offered an explanation for the abrupt emergence of new capabilities with scale. It has recently been proposed that grokking and double descent, a particular form of emergence with scale, might be related phenomena (Davies et al. (2023)). The most noteworthy effort in this direction is the quantization model of neural scaling (Michaud et al. (2023)), which explains emergence in terms of the sequential acquisition of discrete computational capabilities called 'quanta'. The semantic landscape paradigm shares some interesting similarities with the quantization model, which we discuss separately in Section \(5\). To explain emergence with scale, we need to understand how the semantic landscape is influenced by \(N\). Figure 2: **Grokking as a percolation phenomenon:** Schematic showing how grokking emerges within the semantic landscape paradigm. Grokking corresponds to a first passage process to reach the heuristic model \(\mathcal{M}_{G}\) starting from \(\mathcal{M}_{M}\) via intermediate semantic barriers corresponding to models with higher \(\mathcal{F}\), as indicated by the colorbar. This process is possible only when a continuous path between \(\mathcal{M}_{M}\) and \(\mathcal{M}_{G}\) exists (middle panel, thick blue lines), which happens only if \(D\geq D_{c}\). Increasing \(D\) further leads to faster grokking due to increased connectivity of the graph. Let us assume that the dataset size \(D\gg D_{c}\), so that we are not bottlenecked due to insufficient data. As discussed in Section 2, increasing \(N\) enhances the capacity of the network to formulate new propositions, and hence formulate new heuristic models. Thus, the effect of increasing \(N\) is to increase the number of nodes on the graph of heuristic models. If a particular node drops out of the network, all edges connecting it must also drop out, even if the dataset \(D\) is infinitely large. Physically, this means that the network is incapable of formulating that particular heuristic model, regardless of how many training examples it sees. Thus, if we consider the configuration space as the graph of _all possible nodes for all_ \(N\), the nodes corresponding to models that can only be formulated for large \(N\) will remain disconnected at small \(N\). With this insight in mind, it is easy to see that emergence with scale corresponds to a percolation transition with increasing \(N\), such that a continuous path to models of low \(\mathcal{F}\) opens up only for sufficiently large \(N\) (Fig. 3). Percolation can also account for the distinct behaviors of per token loss in large language models (LLMs) observed by Michaud et al. (2023).
Even if the network finds a heuristic model of lower \(\mathcal{F}\), particularly for complex tasks such as language prediction, it is likely that the improved model does significantly better on some but not all tokens. Some tokens may require the network to perform several successive jumps to increasingly lower \(\mathcal{F}\) states for them to be predicted accurately. Michaud et al. (2023) refer to these as "polygenic tokens". For complex tasks such as language prediction, the semantic landscape is likely to be rugged, consisting of a large number of minima and saddle points, which provides a natural explanation for the presence of polygenic tokens. ## 4 Explaining neural scaling laws using semantic landscapes Apart from unpredictable emergent phenomena, neural networks also exhibit predictable power law scaling with increasing \(D\), \(N\), as well as training steps \(S\), as mentioned in Section 1. The first important experimental observation to note is that neural network performance can be bottlenecked by \(D\) as well as \(N\), leading to diminishing returns when the other parameter is scaled (Kaplan et al. (2020)). This observation is already anticipated by the semantic landscape paradigm, as seen in the previous section. Specifically, if \(D\) is a limiting factor, performance plateaus because the network is _unable to find paths_ to good heuristic models, even if it has the capacity to formulate them (i.e. \(N\rightarrow\infty\)). If, on the other hand, \(N\) is a limiting factor, performance plateaus because the network is _unable to formulate_ a good heuristic model, even if the paths leading to it are in principle open (i.e. \(D\rightarrow\infty\)). In the following discussion, we will therefore assume that \(D\) as well as \(N\) are both sufficiently large, such that the graph of heuristic models is always connected. Figure 3: **Emergence with scale as a percolation phenomenon:** Left panel: For small \(N\), the network is incapable of formulating the heuristic models \(\mathcal{M}_{1}\), \(\mathcal{M}_{2}\), and \(\mathcal{M}_{3}\) (grey), and hence, the associated paths (dashed grey lines) are forbidden. Right panel: Increasing \(N\) allows the network to formulate model \(\mathcal{M}_{2}\), which opens a path towards low fallibility models (thick blue lines), and enables the network to learn new capabilities. ### The semantic trap model To derive neural scaling laws within the semantic landscape paradigm, we have to make assumptions about the topography of the semantic landscape. In the case of LLMs, on which most of the work on neural scaling has focused, we expect that the semantic landscape is very rugged, much like the energy landscapes of disordered systems such as structural (Debenedetti and Stillinger [2001]) as well as spin glasses (Mezard et al. [1987]). At low temperatures, the ruggedness, or the preponderance of metastable energy minima, causes the dynamics on such landscapes to proceed sluggishly via activated hops in a non-stationary manner, a process known as aging (Berthier and Biroli [2011]). One would expect similar dynamics on the semantic landscape of LLMs, as progress towards models with lower \(\mathcal{F}\) would presumably be impeded by a large number of semantic barriers. Taking inspiration from Bouchaud's trap model for glassy dynamics (Bouchaud [1992]), we define the analogous semantic trap model for LLMs.
We assume that dynamics on the semantic landscape consists of activated hops across large semantic barriers \(\Delta\mathcal{F}_{i}\) to models with lower \(\mathcal{F}\), interspersed with excursions along relatively flat directions that don't affect \(\mathcal{F}\) significantly. We further assume that the fallibility \(\mathcal{F}\) of the network after \(N_{b}\) barrier crossings is inversely proportional to \(N_{b}\). This model is schematically illustrated in Fig. 4. Analogous to the trap model (Bouchaud [1992]), we assume that these large semantic barriers have an exponential distribution

\[\rho(\Delta\mathcal{F})=\beta\rho_{0}\,\mathrm{exp}\left(-\beta\mu\Delta\mathcal{F}\right) \tag{4}\]

where \(\rho_{0}\), \(\beta\), and \(\mu\) are constants. In the trap model, \(\mu=T/T_{g}\), where \(T\) is the temperature, and \(T_{g}\) is the glass transition temperature. An exponential distribution of barrier heights is known to emerge in a number of disordered systems (Monthus and Bouchaud [1996]) and is thought to follow from extreme event statistics (Bouchaud and Mezard [1997]). Together with the activated form for dynamics on the semantic landscape (Eqn. 3), Eqn. 4 implies a waiting time distribution \(\psi(\tau)\) that has a power law tail for long waiting times of the form

\[\psi(\tau)=\psi_{0}\tau_{0}^{\mu}\tau^{-(1+\mu)} \tag{5}\]

where \(\psi_{0}\) is a normalization constant (Bouchaud [1992]). As each semantic barrier is crossed, the waiting time to cross that barrier is sampled independently, and thus, the time taken to cross \(N_{b}\) barriers is given by \(t(N_{b})=\sum_{i=1}^{N_{b}}\tau_{i}\), which results in the scaling \(t(N_{b})\propto\tau_{0}N_{b}^{1/\mu}\) (Bouchaud and Georges [1990]). Assuming that time is measured in the number of training steps \(S\), and using \(\mathcal{F}\propto 1/N_{b}\), we obtain the following scaling form of the fallibility with training time

\[\mathcal{F}(S)=\left(\frac{S_{0}}{S}\right)^{\mu} \tag{6}\]

where \(S_{0}\) is a constant. The inverse proportionality between \(\mathcal{F}\) and \(N_{b}\) marks a crucial difference between Bouchaud's trap model and the semantic trap model introduced here. Bouchaud's model corresponds to a _renewal process_ in which the system loses memory completely following an activated hop, and the energy of the new state is chosen randomly after every hop.

Figure 4: **Schematic illustration of the semantic trap model.**

By contrast, the semantic trap model predictably reduces its fallibility by an amount proportional to \(1/N_{b}^{2}\) for the \((N_{b}+1)^{\rm th}\) activated hop. To obtain the scaling of fallibility with dataset size \(D\) and number of parameters \(N\), first recall our assumption in Section 2 that fallibility, and therefore the semantic barriers themselves, are independent of \(D\) and \(N\). The scaling form is therefore determined by the dependence of \(\tau_{0}\) on \(D\) and \(N\). In the semantic trap model, \(1/\tau_{0}\) is the attempt frequency for activated hops towards lower \(\mathcal{F}\), and \(\tau_{0}\) therefore corresponds to the time over which the network explores the sub-graph associated with flat regions of the semantic landscape between hops (horizontal rows of nodes in Fig. 4). From the discussion in Section 3.1 as well as Fig. 2, \(D\) is proportional to the connectivity of the sub-graph. Thus, the rate \(\gamma\) at which the sub-graph is traversed increases with \(D\). From Section 3.2 and Fig.
3, we see that \(N\) increases the number of heuristic models that the network can formulate. Thus, increasing \(N\) increases the number of paths by which the network can find models with lower \(\mathcal{F}\). Consequently, the typical distance \(\ell\) between nodes on the sub-graph that the network has to traverse before it can hop towards a model with lower \(\mathcal{F}\) decreases with \(N\). For a dynamical process with exponent \(\alpha\) on the sub-graph, therefore, we have \(\ell(N)\propto(\gamma(D)t)^{\alpha}\). The exponent \(\alpha\) characterizes the nature of the process by which the network navigates the graph of heuristic models. Thus, we have \(\alpha=1\) for ballistic dynamics, \(\alpha=0.5\) for diffusive dynamics, \(0<\alpha<0.5\) for sub-diffusive dynamics, and \(\alpha>0.5\) for super-diffusive dynamics. Since the network representation takes a finite time to evolve even in the infinite data limit, we assume \(\gamma(D)=\gamma_{\infty}D/(D+D_{h})\). Similarly, not all models will lead to lower fallibility even for an infinitely large network, and so we have \(\ell(N)=\ell_{0}/N^{1/d_{f}}+\ell_{\infty}\), where \(d_{f}\) is the fractal dimension of the sub-graph. This scaling form is motivated by the intuition that if \(n\) points are randomly distributed within a \(d\)-dimensional volume, the typical distance between them scales as \(n^{-1/d}\). Thus, as \(D\to\infty\), we have \(\tau_{0}\propto N^{-1/(\alpha d_{f})}\). Furthermore, as \(\mathcal{F}\propto 1/N_{b}\), and \(N_{b}\propto\tau_{0}^{-\mu}\), we have the following scaling law as \(D\to\infty\)

\[\mathcal{F}(N)=\left(\frac{N_{0}}{N}\right)^{\mu/(d_{f}\alpha)} \tag{7}\]

where \(N_{0}\) is a constant. Finally, as \(N\to\infty\), we get \(\tau_{0}\propto 1/D\), which yields the scaling law

\[\mathcal{F}(D)=\left(\frac{D_{0}}{D}\right)^{\mu} \tag{8}\]

where \(D_{0}\) is a constant. Eqns. 7 and 8 hold as long as \(D_{c}\ll D\ll D_{h}\), and \(N_{c}\ll N\ll(\ell_{0}/\ell_{\infty})^{d_{f}}\), which is consistent with the range over which typical LLMs operate (Kaplan et al. [2020]). Strikingly, from Eqns. 6 and 8, we see that the semantic trap model predicts that the exponents for dataset scaling and training time scaling should be identical, which is also predicted by the quantization model (Michaud et al. [2023]) despite starting from a different set of assumptions, suggesting intriguing connections between the two frameworks.

### Interpretation of scaling exponents

While the semantic trap model is possibly the simplest choice that one can make to describe learning on a rugged semantic landscape, it is certainly possible to construct more sophisticated models. However, despite its simplicity, the semantic trap model can recover a wide range of empirically observed scaling laws (Villalobos; Michaud et al. [2023]), as the exponents for \(D\) and \(N\) scaling are independent. In the simplest case, if the landscape is smooth and there are no semantic barriers to cross, the scaling laws are independent of \(\mu\). In this case, for ballistic motion \(\alpha=1\) on a regular 2D graph with \(d_{f}=2\), one recovers the variance-limited scaling for deep neural networks (\(\mathcal{F}(D)\propto 1/D\) and \(\mathcal{F}(N)\propto 1/\sqrt{N}\)), while \(\alpha=1\), \(d_{f}=1\) yields the scaling for linear networks (\(\mathcal{F}(D)\propto 1/D\) and \(\mathcal{F}(N)\propto 1/N\)) (Bahri et al. [2021]).
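The power law of Eqn. 6 is straightforward to check numerically. The following sketch is our own illustration, not code from the paper: it draws barriers from the exponential distribution of Eqn. 4, converts them to Arrhenius waiting times, and fits the decay of \(\mathcal{F}\propto 1/N_{b}\) against accumulated training steps. Because \(\mu<1\) puts the waiting times in the heavy-tailed regime, the cumulative time is dominated by rare deep traps, so any single run gives a noisy estimate of the exponent.

```python
import numpy as np

rng = np.random.default_rng(0)
beta, mu, tau0 = 1.0, 0.1, 1.0     # mu ~ 0.1, as fitted to LLM data scaling
n_hops = 5000                      # number of barrier crossings N_b

# Barriers from Eqn. 4 are exponential with rate beta*mu, so the Arrhenius
# waiting times tau = tau0 * exp(beta * dF) have the power-law tail of Eqn. 5.
dF = rng.exponential(scale=1.0 / (beta * mu), size=n_hops)
tau = tau0 * np.exp(beta * dF)
S = np.cumsum(tau)                 # training steps after each crossing
F = 1.0 / np.arange(1, n_hops + 1) # fallibility ~ 1/N_b, by assumption

# log F vs log S should have slope ~ -mu (Eqn. 6)
slope = np.polyfit(np.log(S[100:]), np.log(F[100:]), 1)[0]
print(f"fitted exponent {-slope:.2f} vs mu = {mu}")
```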
We hope that future works will explain the correspondence with these known scaling results by systematically coarse-graining from weights to propositions. Comparing \(D\) scaling with that observed in large language models (Kaplan et al. [2020]), we get \(\mu\approx 0.1\). In the trap model (Bouchaud [1992]), \(\mu<1\) corresponds to temperatures lower than the glass transition temperature, where dynamics are extremely sluggish and the system exhibits aging. Curiously, molecular dynamics simulations of glasses have reported that the lowering of potential energy with time is compatible with a power law decay with a small exponent \(\approx 0.144\) (Kob and Barrat [1997]). Since the scaling exponent for \(D\) and time is the same in the semantic trap model, the similarity between the observed scaling in glassy systems and the semantic trap model is quite suggestive and merits further exploration.

## 5 Discussion

In the preceding sections, we have introduced and formally defined the notions of heuristic models, fallibility, and the semantic landscape. We have demonstrated that a wide range of neural network phenomena including grokking, emergence with scale, and various forms of neural scaling, which have hitherto been explained using disparate theoretical frameworks, find natural and consistent explanations within the semantic landscape paradigm under fairly general assumptions. The main advantage of invoking the semantic landscape paradigm is that neural network phenomena can be recast in terms of well studied problems in statistical physics, such as percolation, spin glasses, and random walks on graphs. By construction, the semantic landscape paradigm interpolates between top-down empirical observations on the high level mechanistic interpretation of neural network function (Nanda et al. [2023]; Chughtai et al. [2023]) obtained via reverse engineering, and bottom-up approaches based on coarse-graining over network parameters (Roberts et al. [2022]). Naturally, at this incipient stage, a discussion of how the semantic landscape paradigm can complement existing approaches, and how it can be developed further, is in order. We therefore conclude this paper with a series of comments that elaborate on these questions and their potential answers.

### How can we tell whether a phenomenon is emergent or not?

Since much of this paper was devoted to explaining emergent phenomena, it is worth addressing an interesting contrarian perspective on emergence that was put forward by Schaeffer et al. [2023]. The authors show that loss curves can show either smooth or abrupt evolution with network size depending on the metric used for quantifying loss. Based on this, they argue that emergence may not be a fundamental property of scaling neural network models. We advocate the viewpoint that emergence must necessarily refer to the emergence of new capabilities, and is therefore completely independent of loss metrics. Some metrics could be sensitive to emergence, and others could be insensitive. By contrast, an abrupt change in particular loss metrics doesn't by itself imply that a new capability has emerged. The only real way to determine whether a new capability has emerged is to look under the hood and figure out what has changed in the network's learned representations. Such _mechanistic interpretability_ (Nanda et al.
[2023]) analysis might be complicated for complex tasks, not least because the network might be acquiring several new capabilities simultaneously in a way that makes the average loss curve appear smooth (Nanda and Lieberum [2022]). However, we strongly believe that real progress on understanding emergence can only be made via deciphering the logic developed within the network's internal representations during learning.

### Semantic landscapes and the quantization model

The quantization model (Michaud et al. [2023]) is a theoretical framework that is capable of explaining emergence as well as scaling in neural networks. It begins with three hypotheses that can briefly be described as follows:

* QH1) Neural networks learn a number of discrete computations called 'quanta' that are instrumental in reducing loss.
* QH2) Quanta are learned in order of their usefulness, with more important quanta being learned first. The priority order for learning quanta can therefore be specified in terms of the 'Q-sequence'.
* QH3) The frequency with which quanta are used decays as a power law in the quanta's index in the Q-sequence, which ultimately explains neural scaling laws.

The semantic trap model has many interesting similarities, and also some interesting differences with the quantization model. Since heuristic models are algorithms in the network's internal formal language, a heuristic model with lower \(\mathcal{F}\) will necessarily contain more quanta. The semantic trap model assumes that \(\mathcal{F}\) scales inversely with the number \(N_{b}\) of hops that reduce \(\mathcal{F}\). This effectively implies that more useful quanta are learned earlier, as hypothesized in QH2. However, we note that for the scaling relations in Eqns. 6-8 to hold, the inverse relationship between \(\mathcal{F}\) and \(N_{b}\) only needs to hold on average, and a weaker version of QH2 that admits occasional disruptions of the Q-sequence should be sufficient. Importantly, the semantic landscape paradigm provides a rigorous definition of the "usefulness" of quanta. The "usefulness" of a quantum \(q\) is exactly equal to the reduction in \(\mathcal{F}\) after \(q\) has been learned, which is in turn proportional to the number of additional tokens that can be predicted with improved accuracy. This automatically explains why less useful quanta are also ones that are less frequently used. In future work, it would be exciting to ask whether the usefulness of a particular quantum depends on the presence or absence of other quanta, akin to epistatic interactions between mutations during evolution (Wolf et al. [2000]). Unlike QH3, the semantic trap model does not explicitly assume power laws. Instead, a power law waiting time distribution follows from the assumption of an exponential distribution of semantic barriers. While this choice is still purely phenomenological, it finds precedent in a variety of disordered systems (Monthus and Bouchaud [1996]), and could potentially be quite prevalent due to its origin in large deviations theory (Bouchaud and Mezard [1997]). Crucially, neural scaling in the semantic landscape paradigm follows from the structure of the semantic landscape, which is at least in principle formally derivable from network weights. Semantic landscapes therefore serve as a consistent conceptual foundation for making controlled hypotheses in different settings, and can be easily generalized to different network architectures and tasks.
### Connecting loss landscapes to semantic landscapes

As noted earlier, the loss landscape picture has proved to be invaluable for our current understanding of neural networks (Mehta et al. (2019); Spigler et al. (2019); Baity-Jesi et al. (2018); Bahri et al. (2021); Roberts et al. (2022)). From a theoretical perspective, loss landscapes will also prove to be extremely valuable in the construction of semantic landscapes, via systematic coarse-graining of neural weights. In particular, such studies will help assess for which architectures and learning algorithms the assumption of Arrhenius-like transition rates remains valid. However, we stress that several predictions of the semantic landscape paradigm, including the existence of a critical dataset size for grokking, the emergence of new capabilities with scale, as well as power law scaling of loss with network size and dataset size, are determined by the topology of the graph of heuristic models, and do not rely on particular choices of transition rates. The Arrhenius form of transition rates is essential primarily for obtaining nontrivial exponents for data scaling. Examining these findings through the lens of the loss landscape will go a considerable way in developing and refining the semantic landscape paradigm in the coming years.

### Mechanistic interpretability and the semantic landscape paradigm

The semantic landscape paradigm provides a consistent and versatile conceptual framework that will prove to be particularly useful for studies that aim to gain mechanistic insights into the learning dynamics of neural networks. Within the last couple of years, a growing number of studies have reverse engineered neural networks to discover circuits that implement a series of computations that enable the network to generalize beyond the training dataset (Nanda and Lieberum (2022); Wang et al. (2022); Hernandez et al. (2022); Chughtai et al. (2023); Lindner et al. (2023); Conmy et al. (2023); Zhong et al. (2023)). As this burgeoning field grows and develops further, it would benefit immensely from a framework that keeps track of the training process not in terms of abstract functions of neural weights, but in terms of well-defined and interpretable emergent algorithms. This is exactly what the semantic landscape paradigm offers, by envisioning the training process as a trajectory on the graph of learned algorithms. Furthermore, as emergent algorithms must ultimately derive from weights, mechanistic interpretability studies will be invaluable in developing the semantic landscape paradigm into a more rigorous theory, as they can simultaneously keep track of the microscopic as well as macroscopic aspects of learning. Studies that combine theoretical ideas contained in the semantic landscape paradigm and empirical approaches employed in mechanistic interpretability studies will prove to be instrumental in elucidating the intricacies of neural network learning, thereby paving the way for developing stronger and more efficient AI systems.

### Is scale everything in AI?

Ganguli et al. (2022) have astutely noted that while the test loss of large generative models improves with increased computing resources and loosely correlates with improved performance on many tasks, specific capabilities cannot be predicted ahead of time. Crucially, while the predictable aspects of performance drive development and deployment of these models, the unpredictable aspects make it difficult to anticipate the consequences of such development and deployment.
One could reasonably ask if this is necessarily such a big problem. Indeed, one could argue that this problem might disappear if we simply scale our neural networks up sufficiently, so that whatever capabilities they are currently lacking will be discovered by their scaled up versions. There is no doubt that _quantitatively_ speaking, networks with more parameters will acquire more capabilities. However, there is no guarantee that the _qualitative_ capabilities that we desire will be found by simply scaling up. In glass physics, the potential energy of glassy states is only slightly higher than that of the global free energy minimum corresponding to the crystalline state (Debenedetti and Stillinger (2001)), and yet glasses differ from crystals significantly in several important properties. Furthermore, it is virtually impossible for glasses to 'find' the crystalline state in the absence of external intervention. If the semantic landscape of large generative models resembles the energy landscape of a glass, they may very well encounter a similar problem. As a corollary, the test loss alone may not be the best indicator of the network's performance. Scaling up the network may not solve this problem because even if the larger network is capable of discovering a solution that the smaller one could not, it may not be able to 'find' that solution during its training. Indeed, one may have to tune hyperparameters, engineer prompts, or implement an entirely different, perhaps brain-like (Liu et al. (2023)) learning algorithm for it to acquire the desired capability. Ultimately, if we do not understand the algorithms that the network is learning to implement, we would not be able to assess how the network's training process should be modified to achieve desired results. And it is perhaps in solving this critical problem that the confluence of mechanistic interpretability studies and the semantic landscape paradigm will find its finest hour.

## Acknowledgements

The author thanks Jeff Gore, Sarah Marzen, Chih-Wei Joshua Liu, Yizhou Liu, Jinyeop Song, Jiliang Hu, and Yu-Chen Chao for illuminating discussions and feedback on the manuscript, as well as Amit Nagarkar and Satyajit Gokhale for a critical reading of the manuscript. The author acknowledges the Gordon and Betty Moore Foundation for support as a Physics of Living Systems Fellow through Grant No. GBMF4513.
2308.06780
Neural Networks at a Fraction with Pruned Quaternions
Contemporary state-of-the-art neural networks have increasingly large numbers of parameters, which prevents their deployment on devices with limited computational power. Pruning is one technique to remove unnecessary weights and reduce resource requirements for training and inference. In addition, for ML tasks where the input data is multi-dimensional, using higher-dimensional data embeddings such as complex numbers or quaternions has been shown to reduce the parameter count while maintaining accuracy. In this work, we conduct pruning on real and quaternion-valued implementations of different architectures on classification tasks. We find that for some architectures, at very high sparsity levels, quaternion models provide higher accuracies than their real counterparts. For example, at the task of image classification on CIFAR-10 using Conv-4, at $3\%$ of the number of parameters as the original model, the pruned quaternion version outperforms the pruned real by more than $10\%$. Experiments on various network architectures and datasets show that for deployment in extremely resource-constrained environments, a sparse quaternion network might be a better candidate than a real sparse model of similar architecture.
Sahel Mohammad Iqbal, Subhankar Mishra
2023-08-13T14:25:54Z
http://arxiv.org/abs/2308.06780v1
# Neural Networks at a Fraction with Pruned Quaternions

###### Abstract

Contemporary state-of-the-art neural networks have increasingly large numbers of parameters, which prevents their deployment on devices with limited computational power. Pruning is one technique to remove unnecessary weights and reduce resource requirements for training and inference. In addition, for ML tasks where the input data is multi-dimensional, using higher-dimensional data embeddings such as complex numbers or quaternions has been shown to reduce the parameter count while maintaining accuracy. In this work, we conduct pruning on real and quaternion-valued implementations of different architectures on classification tasks. We find that for some architectures, at very high sparsity levels, quaternion models provide higher accuracies than their real counterparts. For example, at the task of image classification on CIFAR-10 using Conv-4, at 3% of the number of parameters as the original model, the pruned quaternion version outperforms the pruned real by more than 10%. Experiments on various network architectures and datasets show that for deployment in extremely resource-constrained environments, a sparse quaternion network might be a better candidate than a real sparse model of similar architecture.

## 1 Introduction

A key attribute of any neural network architecture is the number of trainable parameters that it has. In general, the greater the number of model parameters, the greater its demands on computational power, time, and energy to train and perform inference. Contemporary state-of-the-art deep neural networks have model parameters that often run into tens or even hundreds of millions [24, 36, 5], imposing great demands on the hardware needed to train these models. There are several real-world scenarios where we would like to deploy well-performing models to edge devices such as mobile phones. An example would be when dealing with private user data such as images, where sending data to a back-end datacenter for inference would be less than ideal [42]. The best (and biggest) models cannot be run on these resource-constrained computing environments [16]. This leaves us with two options - either use smaller, more specialized architectures (as in MobileNet [16]) or find ways to compress the bigger ones. There are multiple compression methods by which we can reduce the resource consumption of a model, such as parameter pruning and sharing, low-rank factorization, and knowledge distillation [3], but in this work, we focus on pruning. Pruning is a method to reduce the number of parameters in a model by removing redundant weights or neurons [20]. Various studies have shown that pruning can drastically reduce model parameter counts while still maintaining accuracy [13; 23; 2]. These pruned models can also be re-trained [7], helping to reduce resource utilization during the training stage as well. What happens when we prune a model to extreme levels of sparsity, say 90% or more? At this level, we typically see that the accuracy drops off [7; 13], and that the pruned model can no longer match the original model. For this reason, most studies on pruning only go up to the point where the pruned model is no longer on par with the parent [7; 13; 20; 23]. However, we feel that this regime is still attractive because various empirical studies have found that a large-pruned model consistently does better than a small-dense model of equal size [42; 22; 12].
Thus a state-of-the-art model pruned to just 2% of its original size might still provide better accuracy than a miniature model of comparable size, even though the pruned model displays lower accuracy than the original. Recently, another method of reducing model parameters is undergoing a surge in popularity. Using higher-dimensional data embeddings, such as complex numbers or quaternions, has been successfully shown to reduce model parameters while maintaining accuracy [39; 38; 10; 27]. Quaternions are a 4-dimensional extension to the complex numbers introduced by the mathematician William Rowan Hamilton in 1843 [27], and quaternion neural networks have been built for a variety of ML tasks [43; 29; 10; 28; 32; 4]. Converting a real model to quaternion can lead to a 75% reduction in model parameters (which is explained in more detail in Sec. 3.2), making it a suitable method for model compression. In the present work, we employ pruning on quaternion networks to see if they have any advantages over their real counterparts at high levels of model sparsity. To the best of our knowledge, there are no prior studies that explore weight-reduction in neural networks by combining pruning with quaternion-valued neural networks (or any other higher-dimensional data structure). We choose multiple neural networks for image classification on the MNIST [19], CIFAR-10 and CIFAR-100 [18] datasets, build equivalent quaternion representations, and conduct pruning experiments on both real and quaternion implementations. We find that at extreme sparsities (approximately 10% or fewer of the parameters of the real, unpruned model), the quaternion model outperforms the real. Thus for deployment on a resource-constrained device, a quaternion pruned model might provide the best accuracy out of all available options. Specifically, the contributions of this work can be summarized as follows. We conduct pruning experiments on real and quaternion-valued implementations of different network architectures. Through this we show that 1) the lottery ticket hypothesis [7] is valid for quaternion models, meaning that pruned quaternion models can be re-trained from scratch to the same accuracy as the unpruned model, and 2) at very high model sparsities, the quaternion equivalent displays higher accuracy than the real network.

## 2 Related Work

### Pruning

From the early 1990s, we have known that the majority of weights in a trained neural network can be pruned without sacrificing its accuracy [20, 34, 14]. Earlier works were done on simple architectures, but recently pruning has also been shown to work on much more extensive architectures such as VGG [37] and ResNet [15, 7, 2]. These studies showed that modern state-of-the-art architectures are often heavily over-parameterized and that they require far fewer parameters to learn the necessary function representations [7]. In most studies, authors generally attempt to prune after the training process, at the end of which the weights would be ordered based on their contribution to the output. One common example of a heuristic for such ordering is the weight magnitude, where the contribution of individual weights to the output is judged based on their absolute values [13]. However, when training pruned networks from scratch, often they could not match the original accuracy, and the pruned models did much worse [13, 23].
The 'Lottery Ticket Hypothesis' paper [7] showed that these pruned networks could indeed be trained from the beginning, but the pruned model had to be paired with the initial weights of the unpruned network. This work showed that the benefits from pruning could be realized during the training process, which could potentially reduce the resource requirements of training by a huge amount. Pruning is generally of two types. In structured pruning, weights are pruned in groups by removing whole neurons or entire channels [35, 23]. Structured pruning leads to a reduction in the size of the model and improves model inference speeds as the dimensions of weight matrices are reduced [25]. Unstructured pruning, by contrast, is where individual weights of neurons are removed instead of entire neurons or channels [20, 13]. While unstructured pruning reduces the number of parameters, this does not immediately manifest itself through an improvement in inference speeds [30]. This is because unlike structured pruning, here weight matrices retain the same dimensions but are instead simply made sparse, which current hardware technology is not capable of optimizing [30].

Figure 1: Depiction of how using different linear combinations of coefficients in the Hamilton product results in a reduction in the number of weights in a neural network. Image copyright Titouan Parcollet [29], reproduced with permission.

However, empirical studies have shown that unstructured pruning often yields much better results than structured [35]. In addition, with both pruning methods, the resource demands for training are reduced because we now have a far smaller set of weights that we need to optimize. Research to optimize sparse operations on current hardware has shown promising results [6], and hardware that can accelerate sparse-matrix multiplications is being built [33]. This suggests that all the predicted theoretical performance gains from pruning could soon be realized in the near future. An appealing property of pruned networks, and the one that justifies this work, is that in general, a large-sparse (pruned) model performs better than a small-dense (unpruned) one with an equal number of weights [2]. Multiple studies have shown that for a variety of model architectures on different tasks, sparse models consistently outperform their dense counterparts [42, 22, 17, 12]. This implies that given a resource constraint on the size of the models that a user can run, their best bet at achieving the greatest possible accuracy would be to use a large-pruned model rather than a small-dense one. This is applicable even when the pruned model cannot match the accuracy of the original, unpruned model.

### Quaternions

In recent years, there has been a marked increase in works that address the question of whether it is more optimal to use multi-dimensional data embeddings in applications where the input data is multi-dimensional (refer to [27] for a comprehensive review). For example, Trabelsi et al. [38] demonstrated that at the task of music transcription, where the input signal is two-dimensional (consisting of the magnitude and phase of the signal), complex-valued neural networks outperformed comparable real models. Complex numbers, however, are insufficient to represent higher-dimensional inputs such as the three channels of a color image, which is why some studies extended this idea to four-dimensional quaternions. Zhu et al.
[43] compared quaternion and real-valued convolutional neural networks (henceforth referred to as \(Q\) and \(R\) respectively) with similar architectures and the same number of parameters on the CIFAR-10 dataset. They found that \(Q\) achieved faster convergence on the training loss as well as higher classification accuracy on the test set compared to \(R\). Gaudet and Maida [10] made a similar comparison with image classification on the CIFAR-10 and CIFAR-100 datasets and image segmentation on the KITTI Road Segmentation dataset [9], but this time with \(Q\) having a quarter of the number of parameters as \(R\). They reported that on both tasks, quaternion models gave higher accuracy than real and complex networks while having a lower parameter count. Similar advantages for quaternion neural networks over real networks were also found by Parcollet et al. [29] for speech recognition.

## 3 Theory of Quaternions

### Quaternion Algebra

Quaternions are a four-dimensional extension to the complex numbers, and a general quaternion \(q\) may be written as

\[q=r+x\mathbf{i}+y\mathbf{j}+z\mathbf{k} \tag{1}\]

where \(r,x,y,z\in\mathbb{R}\) and \(\mathbf{i}\), \(\mathbf{j}\) and \(\mathbf{k}\) are complex entities which follow the relations

\[\mathbf{i}^{2}=\mathbf{j}^{2}=\mathbf{k}^{2}=\mathbf{ijk}=-1 \tag{2}\]

Given two quaternions \(q_{1}\) and \(q_{2}\), their product (known as the Hamilton product) is given by

\[q_{1}\otimes q_{2}=(r_{1}r_{2}-x_{1}x_{2}-y_{1}y_{2}-z_{1}z_{2})+(r_{1}x_{2}+x_{1}r_{2}+y_{1}z_{2}-z_{1}y_{2})\,\mathbf{i}+(r_{1}y_{2}-x_{1}z_{2}+y_{1}r_{2}+z_{1}x_{2})\,\mathbf{j}+(r_{1}z_{2}+x_{1}y_{2}-y_{1}x_{2}+z_{1}r_{2})\,\mathbf{k} \tag{3}\]

Unlike real and complex multiplication, quaternion multiplication is not commutative. Quaternions can be represented as \(4\times 4\) real matrices such that the matrix multiplication between such representations is consistent with the Hamilton product:

\[q=\begin{bmatrix}r&-x&-y&-z\\ x&r&-z&y\\ y&z&r&-x\\ z&-y&x&r\end{bmatrix} \tag{4}\]

This is the representation that is used to calculate quaternion products in a quaternion-valued neural network.

### How does weight-reduction happen?

Consider a small section of a fully-connected neural network with four input and four output neurons. In the case of a real-valued implementation, this layer would require \(4\times 4=16\) weights. However, if we were to view the above network as consisting of one input quaternion and one output quaternion, then using the Hamilton product, we would only require one quaternion weight, or four real weights, to connect them. Thus, provided the number of neurons in all layers is divisible by 4, we can obtain a 75% reduction in the number of parameters of a network by converting it to a quaternion implementation. This is shown graphically in Fig. 1. This weight-reduction is explained in greater detail in [29, 27].

## 4 Methodology

Our experiments were run on classification tasks on the MNIST [19], CIFAR-10 and CIFAR-100 [18] datasets. We chose to demonstrate our experiments on image classification tasks following the lead of some of the most important works in network compression [7, 8] so that our results may be contrasted with the state of the art. We used the fully-connected Lenet-300-100 architecture from [21], and the Conv-2, Conv-4, and Conv-6 convolutional models from [7]. Complete details about model architectures and hyper-parameters are given in Tab. 1.
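To make the weight-sharing argument of Sec. 3.2 concrete, the following sketch (our own illustration, not code from the paper or the hTorch library) builds the \(4\times 4\) real matrix of Eqn. 4 from a single quaternion weight and applies it to a block of four real activations; the numerical values are arbitrary.

```python
import numpy as np

def hamilton_matrix(r, x, y, z):
    """4x4 real matrix L(q) of Eqn. 4: L(q1) @ [r2, x2, y2, z2] equals the
    Hamilton product q1 (x) q2 of Eqn. 3, written as a column vector."""
    return np.array([[r, -x, -y, -z],
                     [x,  r, -z,  y],
                     [y,  z,  r, -x],
                     [z, -y,  x,  r]])

# One quaternion weight (4 free parameters) maps 4 real inputs to 4 real
# outputs; a dense real layer would need 4 * 4 = 16 weights for the same
# map, hence the 75% parameter reduction.
r1, x1, y1, z1 = 0.5, -0.1, 0.3, 0.2   # arbitrary quaternion weight
q2 = np.array([1.0, 2.0, -1.0, 0.5])   # block of four real activations
out = hamilton_matrix(r1, x1, y1, z1) @ q2

# Sanity check of the i-component against Eqn. 3:
r2, x2, y2, z2 = q2
assert np.isclose(out[1], r1 * x2 + x1 * r2 + y1 * z2 - z1 * y2)
```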
Our goal in this work was to compare pruning on real and quaternion-valued implementations of different architectures. To do this, we first constructed quaternion equivalents of every model with the condition that they both have the same number of real neurons. \(\boldsymbol{Q}\) would thus have one-fourth as many (quaternion) neurons (since four real neurons constitute a quaternion neuron) and one-fourth the number of weights as \(\boldsymbol{R}\) (because of the weight-sharing property of the Hamilton product explained in Sec. 3.2). We used a real output layer for \(\boldsymbol{Q}\) because for the MNIST and CIFAR-10 datasets, the number of output labels (10) is not divisible by 4. An alternative option here was to use 10 quaternion neurons for the output layer and then take their norms. We chose not to do this because this would break the equality condition that we just stated, that the number of real neurons across implementations be equal. All other network layers were replaced by equivalent quaternion implementations. The hyper-parameters used for training are also the same as [7] in order to make direct comparisons.

| Model | Lenet-300-100 | Conv-2 | Conv-4 | Conv-6 |
| --- | --- | --- | --- | --- |
| Datasets | MNIST | CIFAR-10 | CIFAR-10, CIFAR-100 | CIFAR-10, CIFAR-100 |
| Conv Layers | - | \(64*64\), pool | \(64*64\), pool; \(128*128\), pool | \(64*64\), pool; \(128*128\), pool; \(256*256\), pool |
| FC Layers | 300, 100, 10 | 256, 256, 10/100 | 256, 256, 10/100 | 256, 256, 10/100 |
| All/Conv Weights (Real) | 266.6K | 4.30M/38K | 2.42M/260K | 2.26M/1.14M |
| All/Conv Weights (Quat) | 67.7K | 1.08M/9.9K | 609K/65K | 569K/287K |
| Training epochs/Batch size | 40/60 | 40/60 | 40/60 | 60/60 |
| Optimizer/Learning rate | Adam/1.2e-3 | Adam/2e-4 | Adam/3e-4 | Adam/3e-4 |

Table 1: Model architectures, datasets and hyper-parameters tested in this paper. The number of weights for Conv-2, 4, 6 are reported for CIFAR-10 classification. Architectures and hyper-parameters have been kept the same as those used in [7] to facilitate a direct comparison.

For the MNIST dataset, since the images are grayscale and have only one channel, the input image is flattened and each set of four pixels is fed to a quaternion neuron of Lenet-300-100. On the other hand, for color vision tasks such as classification on CIFAR-10, it makes more sense to treat the RGB channels of each pixel as belonging to a single quaternion neuron. To do this, we need to add one more channel to the input images, and the equality condition has to be relaxed for the input channel. A few different options exist, such as a channel with all 0s or using an additional layer to learn the fourth channel [10]. For our implementation, we chose to use the grayscale values of the input image as the additional channel. For pruning experiments, we use iterative pruning, where the model is pruned and then re-trained for a certain number of epochs over multiple iterations. We prune the networks using a global pruning technique with 20% of the weights in the model pruned in each pruning iteration. We chose global pruning over layer-wise pruning because global pruning can find smaller lottery tickets for larger networks [7], and we wanted to keep the methodology consistent across all the architectures that we test.
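For readers who want to reproduce the pruning loop, the following sketch (our own, not taken from the paper's code) shows one iteration of the global unstructured magnitude pruning described above using PyTorch's pruning utilities. The module types included in the parameter list are an assumption; quaternion layers would need their weight tensors registered analogously.

```python
import torch
import torch.nn.utils.prune as prune

def global_prune_step(model, amount=0.2):
    """One pruning iteration: remove the `amount` fraction of the smallest-
    magnitude remaining weights, ranked globally across all listed layers.
    Biases are exempt, as in the paper."""
    params = [(m, "weight") for m in model.modules()
              if isinstance(m, (torch.nn.Linear, torch.nn.Conv2d))]
    prune.global_unstructured(params,
                              pruning_method=prune.L1Unstructured,
                              amount=amount)

# Iterative schedule: after k rounds roughly 0.8**k of the weights survive
# (e.g. 0.8**3 ~ 51%). Between rounds the surviving weights are reset to
# their initial values and the model is re-trained from scratch, following
# the lottery ticket protocol of [7].
```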
We only prune weights and exempt biases, as biases constitute only a tiny proportion of the total parameters in a model. At each level of model sparsity, the pruned model is re-trained from scratch using the initial weights for the same duration as the original model (as done in [7]), and it is the accuracy of these re-trained pruned models that we have reported throughout this paper. Our emphasis is on re-trainable pruned models because our primary concern is with the practical benefits of pruning conferred during training. The models are pruned until their accuracies drop below a threshold (which we chose to be 30%) for two successive pruning iterations. This was done to save computational resources by not pruning the model beyond the point where it is of any practical use. Pruning experiments were conducted using PyTorch [31]. PyTorch implementations of the various operations and building blocks necessary to build quaternion convolutional neural networks have been borrowed from the hTorch library (MIT License) [41]. This library uses a split activation function [1] where ReLU [11] is applied individually to the four different components of each quaternion. The backpropagation algorithm employed for quaternions is a generalization of those for real and complex networks [26, 28, 38]. All experiments were carried out on a single Nvidia RTX 3090 GPU.

## 5 Experimental Results

A few previous studies using quaternions had found that at certain tasks, quaternions can outperform real networks while having an equal or smaller number of parameters, including for computer vision tasks such as the classification we consider in this paper [43, 10]. In our experiments, where \(Q\) has a quarter of the number of parameters as \(R\), we found that \(Q\) cannot, in general, be trained to the same accuracy as \(R\) within a fixed number of iterations. The training results for real and quaternion implementations of the various models are reported in Fig. 2. The relative accuracy difference between real and quaternion implementations differs across models. We also found that this varied depending on the hyper-parameters used, and so it may be possible that for some set of hyper-parameters, \(Q\) could be made as accurate as \(R\), reproducing the results given in the references mentioned earlier. We were unable to find such a set of hyper-parameters in our experimentation. On re-training pruned models from scratch, we found that just like \(R\), pruned \(Q\) are capable of matching or exceeding the accuracy of the original, unpruned model (Fig. 3). For example, in Fig. 3a, a pruned model with 51% weights ends training with higher accuracy than the unpruned model. This result held for all six architecture-dataset pairs that we tested. This shows that the lottery ticket hypothesis is valid for quaternion-valued networks as well.

Figure 2: Training results for various architectures for real and quaternion implementations. Results are the mean over 5 trials, and the error bars are the standard deviation.

Figure 3: Training results for quaternion implementations of models at different sparsities. Pruned models have been re-trained from scratch with the initial weights. Plot labels are the percentage of weights remaining after pruning. Results are the mean over 5 trials, and the error bars are the standard deviation.

The accuracies of pruned \(Q\) and \(R\) at various model sparsities are given in Fig. 4. On examining this figure, certain patterns become evident. Consider Conv-4 on CIFAR-10 (Fig. 4c) as an illustrative example. The curve for \(Q\) starts at the 25% mark as, by construction, \(Q\) only has that many parameters compared to \(R\).
Figure 4: Accuracy vs sparsity results for real and quaternion implementations of various architectures. Quaternion curves start at around the 25% mark because unpruned \(Q\) have only one-fourth (approx.) the number of parameters as the corresponding unpruned \(R\). Models are pruned until their accuracies drop below 30% for two successive pruning iterations.

\(Q\) also starts out with lower accuracy than \(R\), which can also be seen in Fig. 2c. The regime we are interested in is that of high pruning rates, at around 10% of total weights or lower. Though both models start out at different accuracies, as we get to the 12.5% mark, the curves for \(R\) and \(Q\) coincide, meaning that at this sparsity level, both perform equally well. On further pruning, the \(R\) curve drops below the \(Q\) curve, implying that the quaternion outperforms the real model at very high sparsity levels. At about 3.12% sparsity, \(Q\) shows close to 75% accuracy while it is around 62% for \(R\), a difference of more than 10%. This overall pattern is also repeated for all but one of the model-dataset pairs that we tested, the sole exception being Lenet-300-100 on MNIST (Fig. 4a). For Lenet-300-100, however, \(R\) outperforms \(Q\) at every level of sparsity. This model is different from the other three models that we tested in two ways: 1) it is a fully-connected network (no convolutional layers), and 2) it displays around 98% accuracy on its task, which is considerably higher than any of the other model-dataset pairs. To isolate which of these two properties led to the difference in the sparsity-accuracy trend, we ran pruning experiments on a custom Lenet-12 model (which has a single hidden layer with 12 real neurons), which is also a fully-connected model but with lower accuracy. The results for this model on the MNIST dataset are given in Fig. 5. Here the earlier trend reappears, and \(Q\) performs better than \(R\) at high sparsity levels. Thus Lenet-300-100 showed divergent behavior not because it is a fully-connected network but because it has very high accuracy at its task. A possible explanation for this may be that this model is so over-parameterized that the real model can be pruned to a great extent without a significant drop in accuracy. This explanation, however, cannot justify why \(Q\) underperforms \(R\) beyond the 4% sparsity region for Lenet-300-100. On the whole, this analysis shows that, in general, at extreme model sparsities, quaternion models perform better than their real counterparts. Although quaternion models start out at a disadvantage because of their lower initial accuracy, as the models are reduced in size, their relative performance gap diminishes until at a certain point the real model dips below the quaternion, where it remains for the rest of the pruning process.

Figure 5: Training and pruning results for Lenet-12 on MNIST.

### Using Early Stopping

In [7], the authors use an early stopping criterion to stop training. The criterion used is that of the minimum validation loss. We repeated our experiments with the same early stopping criterion (with a patience of 10 iterations), but here we saw that \(Q\) no longer did better than \(R\) at any sparsity level for any of the models that we tested. For example, the pruning results for Conv-2 when run with the early stopping criterion are given in Fig.
6, which should be compared with Fig. 4b.

### Larger Models

In addition to the models considered earlier, we also ran similar experiments on Resnet-18 [15] and VGG-16 [37]. These are deeper neural network architectures that require batch normalization layers. The first implementation of a quaternion batch normalization algorithm was given by [10]. This implementation treats a quaternion as a single entity and requires involved matrix operations to calculate the mean and variance of each layer, and is thus extremely computationally intensive (adding a single batchnorm layer to Conv-2 increased the time taken for each training iteration by approximately 40 times compared to baseline). Hence we had to opt for another implementation of batch normalization, given in [40]. This implementation treats each individual component of a quaternion separately, and is hence much faster. This obviously comes at the cost of compromising the relationships between the individual components of quaternions, but since we are already doing this with our use of the split activation function, it may not lead to any further disadvantage. Neither Resnet nor VGG follows the trend we saw for the smaller models, in that for both of these network architectures \(R\) has greater accuracy than \(Q\) at all sparsity levels. Whether it is the introduction of the batchnorm layer or the depth of the architectures that is causing this reversal in trend is unclear and needs to be investigated further.

Figure 6: Accuracy vs sparsity results for Conv-2 on CIFAR-10 when using the early stopping criterion.

## 6 Conclusions

In this work, we conduct extensive pruning experiments on real and quaternion-valued implementations of different neural network architectures with the objective of checking whether using quaternions provides any advantages in model compression. We first found that pruned quaternion models can be re-trained from scratch to match the original accuracy of the unpruned model, showing that lottery tickets exist for quaternion networks as well. More importantly, our experiments demonstrate that when pruned to high levels of sparsity, quaternion implementations of certain models outperform the corresponding real-valued models of equivalent architecture. Hence for ML tasks with multi-dimensional inputs that need to be run on devices with limited computational power, a pruned quaternion model may be a more suitable option than an analogous real network.

### Limitations and Future Work

As with all empirical studies, the primary limitation of this work lies in the scope of vision tasks, datasets, and models tested. Here we tested six architectures on three different datasets at the task of classification. An extension to this work, and one that could further generalize the conclusions reached, is to consider a larger set of models and datasets, and to test them on additional vision tasks such as semantic segmentation. Investigating how pruned quaternion implementations of deeper architectures that require batch normalization layers can be made to outperform their real counterparts is also identified as future work.
2305.14378
Predicting Stock Market Time-Series Data using CNN-LSTM Neural Network Model
The stock market is important as it represents the ownership claims on businesses. Without sufficient stocks, a company cannot perform well in finance. Predicting the stock market performance of a company is hard because the prices of a company's stock keep changing and are never constant. So, it is complex to determine the stock data. But if the previous performance of a company in the stock market is known, then we can track the data and provide predictions to stockholders in order to wisely take decisions on handling the stocks of a company. To handle this, many machine learning models have been invented, but they didn't succeed due to many reasons like the absence of advanced libraries, inaccuracy of the model when made to train with real-time data, and much more. So, to track the patterns and the features of the data, a CNN-LSTM Neural Network can be made. Recently, CNNs have come into use in Natural Language Processing (NLP) based applications, so by identifying the features from stock data and converting them into tensors, we can obtain the features and then send them to an LSTM neural network to find the patterns and thereby predict the stock market for a given period of time. The accuracy of the CNN-LSTM NN model is found to be high even when allowed to train on real-time stock market data. This paper describes the features of the custom CNN-LSTM model, the experiments we made with the model (like training with stock market datasets and performance comparison with other models), and the end product we obtained at the final stage.
Aadhitya A, Rajapriya R, Vineetha R S, Anurag M Bagde
2023-05-21T08:00:23Z
http://arxiv.org/abs/2305.14378v1
# Predicting Stock Market Time-Series Data using CNN-LSTM Neural Network Model

###### Abstract

The stock market is important as it represents the ownership claims on businesses. Without sufficient stocks, a company cannot perform well in finance. Predicting the stock market performance of a company is hard because the prices of a company's stock keep changing and are never constant. So, it is complex to determine the stock data. But if the previous performance of a company in the stock market is known, then we can track the data and provide predictions to stockholders in order to wisely take decisions on handling the stocks of a company. To handle this, many machine learning models have been invented, but they didn't succeed due to many reasons like the absence of advanced libraries, inaccuracy of the model when made to train with real-time data, and much more. So, to track the patterns and the features of the data, a CNN-LSTM Neural Network can be made. Recently, CNNs have come into use in Natural Language Processing (NLP) based applications, so by identifying the features from stock data and converting them into tensors, we can obtain the features and then send them to an LSTM neural network to find the patterns and thereby predict the stock market for a given period of time. The accuracy of the CNN-LSTM NN model is found to be high even when allowed to train on real-time stock market data. This paper describes the features of the custom CNN-LSTM model, the experiments we made with the model (like training with stock market datasets and performance comparison with other models), and the end product we obtained at the final stage.

CNN-LSTM, Deep Learning, Time-Series Prediction

## I Introduction

The stock market (or share market) is a place where corporate folks or businessmen often look after their "shares", or ownership claims related to a company/organisation. The stock market is considered an important place in the economy, as a country's wealth is decided on it. The hardest task in the stock market is predicting it, because the data is never constant. In earlier days, many people tried to predict stocks using conventional methods but, most of the time, failed. So the probability of predicting stocks correctly is very small. Today, Machine Learning (ML) techniques have been implemented to predict company shares and thereby provide suggestions to stockholders to improve a company's financial growth. But they are not accurate. This paper focuses on some of the works done in predicting stock market data and a new method that follows a CNN-LSTM Neural Network model approach to predict data for a given time series.

## II Related Work

Before the time of writing this paper, many have proposed and implemented various algorithms in order to predict stock market data. We did a literature survey to find some of the proposed algorithms and identified some of the advantages and disadvantages present in those algorithms. Subhadra and Kalyana in [1] conducted an analysis of various machine learning methods to predict stock market data and found that Random Forest performs well compared with Linear Regression and other algorithms; however, the error percentage of the model rises when the input data is not smoothed in the pre-processing stage. Pang, Zhou et al. in [2] made comparisons between RNN and LSTM models and conducted an experimental analysis, in which the LSTM with the Auto-Encoder module enabled (AELSTM) predicted most of the data; but the implementation was done using old libraries, and with real-time stock market data the accuracy of the model was low.
This was improved later, but it revealed an important point about what happens when a model is made to train with real-time data. Uma and Kotrappa in [3] proposed a new method using an LSTM with a Log-Bilinear layer on top of it. The model predicted most of the stock market data and turned out to have high accuracy, but it was only proposed and not tested with real-time data; also, it was meant to predict data only during the time of COVID-19 and not beyond that. Kimoto, Yoda, Takeoka in [4] discussed a buying and selling timing prediction system based on a modular neural network which converts technical indexes and economic indexes into a space pattern to input to the neural networks. During the analysis phase, the neural network model produced a higher correlation coefficient in comparison to multiple regression. The experiment done by Guresen, Kayakutlu and Daim in [5] evaluates the efficiency of dynamic artificial neural networks (DAN2), multi-layer perceptrons (MLP), and hybrid neural networks. The paper concludes by stating that the classic ANN model, the MLP, gives the most reliable results in forecasting time series, while hybrid methods failed to improve the forecast results. Nelson, Pereira and Oliveira in [1] proposed an LSTM network to predict future trends of stock prices in time steps of 15 minutes based on the price history, alongside technical analysis indicators. On average, 55.9% accuracy was achieved in predicting whether the price of a particular stock may increase or not shortly in the future. Selvin, Menon, Soman et al. in [13] experimented on three different deep learning models, namely CNN, RNN, and LSTM, with a sliding window approach. Out of the three, CNN gave more accurate results than the other two models, which is due to the fact that CNN uses only the information in the current window for predicting the stock price. This allows CNN to understand the dynamic changes and patterns occurring in the current window. Conversely, RNN and LSTM use information from previous lags for predicting future instances. Hiransha, Gopalakrishnan and Soman in [12] conducted experiments to compare different deep learning models, viz. ANN, MLP, LSTM, and RNN. ANN captured the pattern at the initial stage, as did RNN, but on reaching a certain time period both failed to identify the pattern. The same was the case with LSTM, but CNN seemed to perform better compared to the other three networks, even though several time periods showed less accuracy for the predicted values. Bansal, Hasija et al. in [14] proposed an intelligent decentralized stock market model using the convergence of machine learning with a DAG-based cryptocurrency. The ML model, which is based on LSTM, achieved an accuracy of 99.71% in prediction. The feature vector of a stock for the company contained 4 parameter values, i.e. 'open', 'close', 'low', and 'high', with a batch size of 50 for 100 epochs.

## III Proposed Work

After we conducted the literature survey, we realised some key points to be considered while designing an ML model:

* The model must be designed in such a way that it should parse real-time stock market data and not just sample data
* The data has to be preprocessed correctly in order to avoid errors during the training and testing phases
* The accuracy of the model depends not only upon its parameters but also on the dataset
* The one point most of the papers didn't mention is deployment, as it's one of the most important modules while making an ML/DL model

Considering the above key points, we focused on making the ML model.
We chose the CNN-LSTM Neural Network (Convolutional Neural Network combined with a Long Short-Term Memory network) approach because the CNN helps track the features of the dataset while the LSTM helps track the patterns, allowing the model to train on both. This approach is not entirely new, as some researchers have already tried CNN-LSTM methods, but we tweaked the parameters, kernel sizes (for the CNN), and layers to experiment and test it on real-time data. Since this is a regression problem trained on time-series data, we used Mean Squared Error as the standard metric rather than accuracy. For the neural network, we first analysed related work and then chose an architecture that maintains novelty. The architecture diagram for the neural network is shown in Fig. 1.

**NOTE: For this project, we mainly used Python and Jupyter Notebooks via Colab and Kaggle to perform the experiments. To see the experiments we did, refer to [https://github.com/Circle-1/Stock-X](https://github.com/Circle-1/Stock-X)**

## IV Experimental Analysis

For the experiment, we used Python for data preprocessing and for building the neural network model. Deployment was kept minimal: we saved the model in HDF5 format using Keras, and in addition we made a Docker image and Helm charts for others to work with. This section describes the major parts of the project and the experiments performed.

### _Data Collection, Analysis and Preprocessing_

Before building a model, the first step is to collect enough datasets to study stock market data. We gathered several datasets from Kaggle (described later) but realised that they were samples, so we searched for real-time sources. We came across several finance APIs, such as Yahoo Finance and Alpha Vantage, that provide stock data for a specific time period. We used the Alpha Vantage API with the "TIME_SERIES_DAILY" function to obtain roughly ten years of a company's stock data. We used "full" mode to collect the complete history rather than "compact" mode (which fetches only the latest 100 rows and is meant for rapid-usage cases), and we were able to collect the data for any company given valid API keys. Some stock data was also gathered via Google Sheets using the "GOOGLEFINANCE" function. We then stored the data in CSV format for the testing phase.

We then performed an Exploratory Data Analysis (EDA) on the dataset to understand stock market data in depth. We also added Moving Average and Daily Return columns to study how a stock market behaves and analysed some of the features present in it. In the preprocessing phase, we first cleansed the data by removing NULL values from the dataset, replacing them with the column mean where necessary using the Pandas library. We then took the four columns of any stock market dataset, namely "Open", "Close", "High", "Low"; these are the columns mainly involved in training, especially the "Close" column (shown in Fig. 2). The graphs are plotted using the matplotlib and seaborn libraries in Python.
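As an illustration of the collection and cleaning steps above, the sketch below pulls the daily series with the "TIME_SERIES_DAILY" function in "full" mode and fills missing values with the column mean. The API key is a placeholder, and the response keys (the "Time Series (Daily)" entry and its column order) are our assumption about the JSON layout and should be verified against the current Alpha Vantage documentation:

```python
import requests
import pandas as pd

API_KEY = "YOUR_API_KEY"  # placeholder: a valid Alpha Vantage key is required
params = {
    "function": "TIME_SERIES_DAILY",   # daily series, as used in the text
    "symbol": "IBM",
    "outputsize": "full",              # "compact" would return only the latest rows
    "apikey": API_KEY,
}
payload = requests.get("https://www.alphavantage.co/query", params=params).json()

# Assumed response layout: {"Time Series (Daily)": {date: {"1. open": ..., ...}}}
frame = pd.DataFrame.from_dict(payload["Time Series (Daily)"], orient="index").astype(float)
frame.columns = ["Open", "High", "Low", "Close", "Volume"]
frame.index = pd.to_datetime(frame.index)
frame = frame.sort_index()

# Cleaning as described above: replace NULL values with the column mean.
frame = frame.fillna(frame.mean(numeric_only=True))
frame.to_csv("stock.csv")
```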
During the preprocessing stage, we realised that a CNN expects 2-dimensional or 3-dimensional arrays from which to select features, whereas our data consists of 1-dimensional arrays. This is one of the reasons CNNs appear mostly in Computer Vision (CV) applications rather than NLP applications. So, for the CNN model to parse the dataset, we wrote a function that converts the 1-D arrays into [100, 1] tensors (precisely, vectors). Tensors are data structures that describe a multilinear relationship between sets of objects in a vector space. For the conversion, every 100 rows are taken, their mean is calculated, and the result is stored in a separate column; this is done over the entire dataset. In our case we applied it to the "Close" column, as it is the main column on which the stock prediction is based. This step yields the tensors for the CNN side of the model to train on. We then split the data 80% for training and 20% for testing, reshaped it, and sent it to the training phase.

### _Training Phase_

After the dataset is processed, the neural network model has to be built; in our case, it is the CNN-LSTM model, which we divided into two parts, CNN and LSTM.

Fig. 1: Architecture for Deep Learning model

Fig. 2: Example of stock market dataset

#### IV-A1 CNN

For the CNN section of the model, we followed a custom layout instead of strictly ascending layer sizes: three convolutional layers of 64, 128, and 64 units with kernel size 3, with MaxPooling layers in between, and finally a Flatten layer at the end of the CNN section to convert the tensors back into a 1-D array. All CNN layers are wrapped in the TimeDistributed function so that every temporal slice of the input is trained, as we are dealing with a time-series problem. The processed data is then passed to the LSTM layers.

#### IV-A2 LSTM

For the LSTM section, we used two Bi-LSTM layers of 100 units each, which learn the features both forward and backward. Dropout layers with a rate of 0.5 are added in between, dropping some activations for stability. Finally, we added a dense layer with a linear activation function. We used the "adam" optimizer ("sgd" also worked in this case, but we found "adam" to be more accurate after analysis), Mean Squared Error (mse) as the loss function, and "mse" and "mae" as metrics. The architecture of the model is shown in Fig. 4.

Fig. 3: The architecture of the CNN-LSTM model

Fig. 4: The summary of the proposed model

Fig. 5: LSTM Module (Ref: LSTM, Wikipedia)
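To make the two sections concrete, the following Keras sketch mirrors the description above: a windowing helper that turns the 1-D "Close" series into [100, 1] windows, a 64-128-64 convolutional stack wrapped in TimeDistributed, two Bi-LSTM layers of 100 units with dropout 0.5, and a linear dense output compiled with "adam" and "mse". The split of each window into four sub-sequences for TimeDistributed is our own choice, since the exact reshaping is not spelled out above:

```python
import numpy as np
from tensorflow.keras import layers, models

def to_windows(close, window=100, subseq=4):
    """Slide a window over the 'Close' series; each window becomes one
    sample (reshaped into sub-sequences) and the next value is its target."""
    X, y = [], []
    for i in range(len(close) - window):
        X.append(close[i:i + window])
        y.append(close[i + window])
    X = np.asarray(X, dtype="float32").reshape(-1, subseq, window // subseq, 1)
    return X, np.asarray(y, dtype="float32")

model = models.Sequential([
    # CNN section: 64-128-64 filters, kernel size 3, pooling in between,
    # each layer wrapped in TimeDistributed to act on every temporal slice.
    layers.TimeDistributed(layers.Conv1D(64, 3, padding="same", activation="relu"),
                           input_shape=(None, 25, 1)),
    layers.TimeDistributed(layers.MaxPooling1D(2)),
    layers.TimeDistributed(layers.Conv1D(128, 3, padding="same", activation="relu")),
    layers.TimeDistributed(layers.MaxPooling1D(2)),
    layers.TimeDistributed(layers.Conv1D(64, 3, padding="same", activation="relu")),
    layers.TimeDistributed(layers.Flatten()),
    # LSTM section: two bidirectional layers of 100 units with dropout 0.5.
    layers.Bidirectional(layers.LSTM(100, return_sequences=True)),
    layers.Dropout(0.5),
    layers.Bidirectional(layers.LSTM(100)),
    layers.Dropout(0.5),
    layers.Dense(1, activation="linear"),
])
model.compile(optimizer="adam", loss="mse", metrics=["mse", "mae"])
```

After training, the model plots the training and validation loss curves; at first the loss proved to be low, and later it varied across datasets (shown in Fig. 6 and 7). The loss and MSE also change when the model is saved and trained again with the saved parameters (this is fully explained in the testing phase). We then reshaped the test dataset back to arrays using the reshape() function and made the model predict on it. The resulting graph showed that the model was able to predict most of the stock data for the given time series, as shown in Fig. 8.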
### _Testing Phase_

After the model was trained and the values noted, we saved the model in HDF5 format using the Keras API in the TensorFlow library. We then loaded the HDF5 file and trained the model again with a different dataset; training worked, but the loss value varied. For example, a loss of 0.055 during the training phase increased to roughly 0.153 after reloading (estimated, not exact). We also found that this behaviour depends on the dataset: for the NIFTY sample dataset the issue did not occur, whereas for the NASDAQ dataset it occurred when loading the saved model. We tested the model with different Kaggle datasets, containing mixed samples from different stock markets [10], NIFTY-50 [11], and NASDAQ and NYSE [12], to see how the model copes with different markets (shown in Fig. 8 and Fig. 9). We also tested on real-time data fetched from the Alpha Vantage API (as in the first step), with the stock data both shuffled and un-shuffled. The model was able to predict most of the stocks, as shown in the figures and in Table III.

### _Deployment_

This part is considered minimal because we used Kaggle notebooks to create and deploy the models, but we realised that some developers do not work with Kaggle and prefer local environments and tools like Conda. So we built a Docker image for them to work with JupyterLab (instead of Jupyter Notebook), with the GitHub repository as the working folder. Initially this was hard because Docker images are large by default owing to OS base images such as Ubuntu: our first image was 250MB (without any Python libraries) and 750MB uncompressed. We also considered pre-built Jupyter images with the libraries already installed, but did not use them because of their size (approximately 3GB). Instead, we added the required libraries on top of a Conda environment (via Miniconda); the image still reached 1GB due to the bundled libraries and scripts, so it is intended only for users who do not want to use Kaggle as a working environment. To publish the image to Docker Hub, we used GitHub Actions, a CI tool from GitHub, writing an action that publishes an image every time a change or release is made, so that users need not build the image locally every time. Docker Inc. already provides a GitHub action [13] for these functions, which we modified to build and push our Docker images. Similarly, we made Helm charts for anyone who needs to run the Docker image in a Kubernetes cluster. After some trial and error, we were able to expose the service to the network and run our Docker image in Kubernetes. The Helm charts are deployed on Artifact Hub ([https://artifacthub.io](https://artifacthub.io)), and anyone can download the templates from the repository.

Fig. 6: The loss graph obtained during training

Fig. 7: MAE obtained during training

Fig. 8: The prediction graph for sample NSE/NASDAQ data (shuffled)

## V End Product

After several iterations and tests, we obtained a well-performing CNN-LSTM Neural Network model that can predict most stocks when the relevant stock market data is provided. To assess its performance, we compared it with the models identified in the literature survey; the results are shown in Table V. The model from [3] is a proposal and has not been tested rigorously. The model from [9] performed well, reporting an average MSE of 0.0003 to 0.0004, owing to the fact that its data is processed and decentralized. In our case, we obtained an MSE of 0.001 (maximum) and 0.035 (average), varying across the datasets we experimented with.
We also compared our model with different models built by our team to see how the CNN-LSTM model performs against them; the results are shown in Table IV. The varying accuracy of the CNN-LSTM model depends on how the data is processed and on the order of the data items (shuffled or un-shuffled). In both cases the model performed well and was able to predict most of the time-series data with minimal error and variance.

Fig. 9: The prediction graph for real stock data [IBM] (un-shuffled)

## VI Conclusion

Designing a customised CNN-LSTM is a bit tricky at first, because many algorithms to predict stock data already exist, though none is fully optimized. We trained and tested the model on different kinds of datasets such as NYSE, NASDAQ, and NIFTY and found that the accuracy of the model varies accordingly. For NIFTY [11], the model performed well, predicting up to 99% of stocks even during the testing stage. For other datasets such as NYSE [12], the accuracy varied during the testing phase but not during training; we believe this may be due to noise present in the dataset during the testing phase and recompilation. Overall, we obtained an MSE of roughly 0.001 to 0.05 during training and roughly 0.002 to 0.1 for validation across the different datasets. This paper has presented how a CNN can be combined with an LSTM to extract features from the tensors built from the dataset and to detect patterns based on those features, and it has demonstrated one way in which the stock market can be predicted with good accuracy.
2306.06238
Understanding the Effect of the Long Tail on Neural Network Compression
Network compression is now a mature sub-field of neural network research: over the last decade, significant progress has been made towards reducing the size of models and speeding up inference, while maintaining the classification accuracy. However, many works have observed that focusing on just the overall accuracy can be misguided. E.g., it has been shown that mismatches between the full and compressed models can be biased towards under-represented classes. This raises the important research question, can we achieve network compression while maintaining "semantic equivalence" with the original network? In this work, we study this question in the context of the "long tail" phenomenon in computer vision datasets observed by Feldman, et al. They argue that memorization of certain inputs (appropriately defined) is essential to achieving good generalization. As compression limits the capacity of a network (and hence also its ability to memorize), we study the question: are mismatches between the full and compressed models correlated with the memorized training data? We present positive evidence in this direction for image classification tasks, by considering different base architectures and compression schemes.
Harvey Dam, Vinu Joseph, Aditya Bhaskara, Ganesh Gopalakrishnan, Saurav Muralidharan, Michael Garland
2023-06-09T20:18:05Z
http://arxiv.org/abs/2306.06238v3
# Understanding the Effect of the Long Tail on Neural Network Compression

###### Abstract

Network compression is now a mature sub-field of neural network research: over the last decade, significant progress has been made towards reducing the size of models and speeding up inference, while maintaining the classification accuracy. However, many works have observed that focusing on just the overall accuracy can be misguided. E.g., it has been shown that mismatches between the full and compressed models can be biased towards under-represented classes. This raises the important research question, _can we achieve network compression while maintaining "semantic equivalence" with the original network?_ In this work, we study this question in the context of the "long tail" phenomenon in computer vision datasets observed by Feldman (2020). They argue that _memorization_ of certain inputs (appropriately defined) is essential to achieving good generalization. As compression limits the capacity of a network (and hence also its ability to memorize), we study the question: are mismatches between the full and compressed models correlated with the memorized training data? We present positive evidence in this direction for image classification tasks, by considering different base architectures and compression schemes.

## 1 Introduction

A large body of research has been devoted to developing methods that can reduce the size of deep neural network (DNN) models considerably without affecting standard metrics such as top-1 accuracy. Despite these advances, there are still _mismatches_ between the models, i.e., inputs that are classified differently by the original and compressed models. Furthermore, there has been evidence that compression can affect the accuracy of certain classes more than others (leading to fairness concerns, e.g., see Hooker et al. (2019); Joseph et al. (2020)). Many works have tried to combat mismatches by using techniques such as reweighting misclassified examples Lin et al. (2017) and using multi-part loss functions Joseph et al. (2020) that incorporate ideas from knowledge distillation Hinton et al. (2015) to induce better _alignment_ between the original and compressed models. Joseph (2021) demonstrated that inducing alignment also improves metrics such as fairness across classes and similarity of attribution maps.

In spite of this progress, all the known techniques for model compression result in a non-negligible number of mismatches. This leads to some natural questions: _can we develop a systematic understanding of these mismatches? Are a certain number of mismatches unavoidable and an inherent consequence of underparameterization?_ These questions are important not only for vision models, but also in other domains such as language models Brown et al. (2020), autonomous driving Bojarski et al. (2016), health-care Ravi et al. (2016) and finance Tran & Tran (2020). Our goal in this work is to address these questions using the idea that real datasets have a "long tail" property, as hypothesized in Feldman (2020); Feldman & Zhang (2020). Informally, their thesis is that real datasets have many examples that are "atypical," and unless their labels are _memorized,_ we incur high test loss. Figure 1 shows some such atypical examples. This thesis has a direct implication for network compression: if the compressed network does not have sufficient capacity to memorize the atypical examples, it must have a significant mismatch with the original model!
Our goal in this work is to explore this connection by asking, _is there a strong correlation between the mismatches that arise after model compression, and the long-tail portion of the data distribution?_ To answer this question, we first compute the _influence_, as defined by Feldman & Zhang (2020), of each training example on the model's accuracy on test examples, as well as on training examples, in the CIFAR-10 dataset Krizhevsky et al. (2009). We then compress models using a variety of compression algorithms and perform statistical analysis on the influence values and mismatched predictions between the reference model and the compressed model. Our experiments are based on the observation that if training and test sets are drawn from the same distribution, then memorizing highly influential, atypical training examples improves accuracy on test examples that are similar to them and also atypical. By observing the connection between these highly influenced test examples and a compressed model's misclassified test examples, we can characterize the misclassified examples in terms of memorization.

Figure 1: CIFAR-10 examples corresponding to extreme influence values. (a, d) Train and test examples at the most negative influence value, (b, e) at the lowest magnitude influence value, and (c, f) at the most positive influence value. In general, we expect positive influence to correspond to _helpful_ examples, near-zero influence to correspond to unremarkable examples, and negative influence to correspond to unhelpful examples.

## 2 Definitions and Methodology

### Measuring Mismatch or Misalignment

Comparing DNN models is a well-known challenge in machine learning (ML) systems. In the context of model compression, it is natural to compare models in terms of their "functional" behavior. This leads to the definition of _compression impacted exemplars_ presented below. We remark that other metrics are also very useful, e.g., understanding whether the models "use the same features" for classification; obtaining concrete metrics that can capture such semantic information is an active direction of research.

The simplest measure of misalignment among models is in terms of their predictions. Hooker et al. (2019) introduced the notion of **compression impacted exemplars (CIEs)**: test examples that the compressed and uncompressed models classify _differently_. Having a near-zero number of CIEs indicates that the models are "functionally" equivalent. We will focus on the CIEs in our experiments, and also divide them into subtypes. Note that CIEs are more important when the reference model makes the _correct_ (agreeing with ground truth) prediction, because this means that the compressed models get those examples wrong. We denote such examples by _CIE-U_. CIEs that are classified correctly by the compressed model are denoted _CIE-C_.
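For concreteness, a minimal PyTorch-style sketch of how CIEs and their subtypes can be extracted from the two models' predictions is given below; the model and data-loader interfaces are our assumptions, not part of the original pipeline:

```python
import torch

@torch.no_grad()
def find_cies(reference, compressed, loader, device="cuda"):
    """Return index lists of CIE-U (reference correct) and CIE-C
    (compressed correct) among test examples the two models label differently."""
    reference.eval(); compressed.eval()
    cie_u, cie_c, offset = [], [], 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        pred_ref = reference(x).argmax(dim=1)
        pred_cmp = compressed(x).argmax(dim=1)
        for i in torch.nonzero(pred_ref != pred_cmp).flatten().tolist():
            if pred_ref[i] == y[i]:
                cie_u.append(offset + i)   # compression broke this example
            elif pred_cmp[i] == y[i]:
                cie_c.append(offset + i)   # compression fixed this example
        offset += y.size(0)
    return cie_u, cie_c
```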
### Influence and Memorization of Training Data

As outlined earlier, Feldman (2020) makes the case that memorization is _necessary_ for achieving close-to-optimal generalization error in real datasets. Specifically, when the data distribution is long-tailed, i.e., when rare and atypical instances make up a significant fraction of the data, memorizing these instances is unavoidable in order to obtain high accuracy. Feldman & Zhang (2020) developed approaches for empirically evaluating this "long tail" theory. The starting point for such evaluation is examining which training examples are memorized and the utility of the memorized examples as a whole.

We recall their definitions of _influence_ and _memorization_: For a training algorithm \(\mathcal{A}\) operating on a training dataset \(S=\left((x_{1},y_{1}),\cdots,(x_{|S|},y_{|S|})\right)\) and test dataset \(T=\left((x^{\prime}_{1},y^{\prime}_{1}),\cdots,(x^{\prime}_{|T|},y^{\prime}_{|T|})\right)\), the amount of influence of \((x_{i},y_{i})\in S\) on \((x^{\prime}_{j},y^{\prime}_{j})\in T\) is the difference between the accuracy of classifying \((x^{\prime}_{j},y^{\prime}_{j})\) after training with and without \((x_{i},y_{i})\). Formally,

\[\mathtt{infl}(\mathcal{A},S,i,j):=\Pr_{h\leftarrow\mathcal{A}(S)}\left[h\left(x^{\prime}_{j}\right)=y^{\prime}_{j}\right]\,-\Pr_{h\leftarrow\mathcal{A}(S^{\setminus i})}\left[h\left(x^{\prime}_{j}\right)=y^{\prime}_{j}\right] \tag{1}\]

where \(S^{\setminus i}\) denotes the dataset \(S\) with \((x_{i},y_{i})\) removed and the probability is taken over the randomness of the algorithm \(\mathcal{A}\), such as random initialization. Memorization is defined as the influence of training examples on training examples, i.e. when \(S=T\) and \((x_{i},y_{i})=(x^{\prime}_{j},y^{\prime}_{j})\). This definition captures and quantifies the intuition that an algorithm memorizes the label \(y_{i}\) if its prediction at \(x_{i}\) based on the rest of the dataset changes significantly once \((x_{i},y_{i})\) is added to the dataset.

Using the definition directly, calculating the influence requires training \(\mathcal{A}(S^{\setminus i})\) on the order of \(1/\sigma^{2}\) times for every example, \(\sigma^{2}\) being the variance of the estimation. As a result, this approach requires \(\Omega(|S|/\sigma^{2})\) training runs, which translates into millions of training runs needed to achieve \(\sigma\leq 0.1\) on a dataset with \(|S|=50,000\) examples. To avoid this blow-up, Feldman & Zhang (2020) provide an estimation algorithm that uses only \(O(1/\sigma^{2})\) training runs in total. We adopt their algorithm in our experiments.

Figure 2: The estimated influence of each training example on each test example is collected in a matrix \(I\), shaped (number of test examples \(\times\) number of training examples). The \((i,j)\)th element is the estimated influence of training example \(j\) on test example \(i\).

## 3 Experiments

Although individual influence values are bounded in \([-1,1]\), influence values on examples from natural data tend to be roughly normally distributed and close to zero (Figure 3). Furthermore, there are usually far fewer CIEs than non-CIEs in a test set. Thus, using the Student's t-test to compare the influences of different sets of examples is reasonable. Using the estimated influence and memorization of CIFAR-10 training examples from Feldman (2020), along with the CIEs that resulted from compression, we compared the mean influence on CIEs versus that on examples that were not CIEs. A t-test was done for each loss function and CIE type. These experiments are summarized in Algorithm 1, in which \(\odot\) is element-wise multiplication and \(\oslash\) is element-wise division. We implemented Algorithm 1 in Python using PyTorch 1.13 and estimated the influence and memorization of CIFAR-10 examples in DenseNet, ResNet-20, ResNet-56, and ResNeXt-20 over 100 trials each. For each trial, we masked out a random 30% of the training set and trained for 300 epochs, after which we evaluated correctness on the model using the parameters that achieved the highest test accuracy.
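A minimal NumPy sketch of this subsampling-based estimator is shown below, assuming the per-trial inclusion masks and per-trial test correctness have already been collected from the training runs described above (the array layout is our choice):

```python
import numpy as np

def estimate_influence(masks, correct):
    """Estimate the influence matrix I (test x train) from subsampled runs,
    in the spirit of Feldman & Zhang (2020).

    masks:   (T, n_train) bool; masks[t, j] is True if training example j
             was kept in trial t (here, a random 70% of the training set).
    correct: (T, n_test) bool; correct[t, i] is True if the model of trial t
             classified test example i correctly.
    """
    m = masks.astype(float)
    c = correct.astype(float)
    n_in = m.sum(axis=0)                      # trials that contain example j
    n_out = m.shape[0] - n_in                 # trials that exclude example j
    acc_in = c.T @ m / np.maximum(n_in, 1)    # accuracy on i, given j was used
    acc_out = c.T @ (1 - m) / np.maximum(n_out, 1)
    return acc_in - acc_out                   # I[i, j] ~ infl(A, S, j, i)
```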
We compressed ResNet-56 using parameterized loss functions introduced in Joseph et al. (2020) and identified the CIEs that resulted from each loss function. Our main results are in Figure 4, which shows the test statistics that resulted from performing the t-test using the baseline loss function in Group Sparsity and using the parameterized loss functions in Joseph et al. (2020). (We report CIE counts in Figure 6.) Compressing while using any loss function tested resulted in some positive test statistic, with two loss functions resulting in p-value \(\leq 0.05\). One of those, SoftAdapt Heydari et al. (2019), also happened to achieve the lowest number of CIEs. In general, this indicates that CIEs tend to be unusually highly influenced, and are expected to be similar to highly influential, atypical training examples.

Figure 3: Histograms of the estimated mean influence on CIFAR-10 test examples across training examples, i.e. the distribution of \((\sum_{j}I_{i,j})\oslash|S|\). Note the logarithmic vertical scale. Almost all training examples have close to 0 influence across the test set. The minority that influences the test set at all does not affect accuracy by more than about \(10^{-4}\).

## 4 Conclusions

This paper has presented a novel method for understanding the residual impact of label-preserving neural network compression. We have discovered that the label mismatches of the compressed model occurred where the classification accuracy of data points was most affected by the presence or absence of training examples. With this initial understanding in place, we envision a number of directions to expand on this idea: confirming this behavior on larger datasets such as ImageNet Deng et al. (2009), and extending it to large language models' OpenAI (2021) memorization of irrelevant text data such as social security numbers. We intend to extend this analysis to help debug compression (sparsity and quantization) induced classification mismatches in the privacy-preserving inference frameworks that we have recently developed Gouert et al. (2023b;a).

Figure 4: T-test results for the difference in mean influence between CIEs and non-CIEs. Tests that were significant with p-value \(\leq 0.05\) are marked with *. The t-statistic reflects the number of CIEs that resulted from compression, with the loss functions that resulted in fewer CIEs exhibiting less influence magnitude among them.
2304.11883
Recurrent neural network based parameter estimation of Hawkes model on high-frequency financial data
This study examines the use of a recurrent neural network for estimating the parameters of a Hawkes model based on high-frequency financial data, and subsequently, for computing volatility. Neural networks have shown promising results in various fields, and interest in finance is also growing. Our approach demonstrates significantly faster computational performance compared to traditional maximum likelihood estimation methods while yielding comparable accuracy in both simulation and empirical studies. Furthermore, we demonstrate the application of this method for real-time volatility measurement, enabling the continuous estimation of financial volatility as new price data keeps coming from the market.
Kyungsub Lee
2023-04-24T07:51:11Z
http://arxiv.org/abs/2304.11883v1
# Recurrent neural network based parameter estimation of Hawkes model on high-frequency financial data

###### Abstract

This study examines the use of a recurrent neural network for estimating the parameters of a Hawkes model based on high-frequency financial data, and subsequently, for computing volatility. Neural networks have shown promising results in various fields, and interest in finance is also growing. Our approach demonstrates significantly faster computational performance compared to traditional maximum likelihood estimation methods while yielding comparable accuracy in both simulation and empirical studies. Furthermore, we demonstrate the application of this method for real-time volatility measurement, enabling the continuous estimation of financial volatility as new price data keeps coming from the market.

## 1 Introduction

We propose a real-time estimation and volatility measurement scheme. Specifically, we continuously estimate the financial model parameters and volatility in real time as new price observations become available during market operation. Our method uses a recurrent neural network to estimate the parameters of a Hawkes model based on high-frequency financial data, and these estimates are subsequently used to compute volatility. This approach exhibits significantly faster computational performance compared to the traditional maximum likelihood estimation (MLE) method.

The Hawkes process introduced by Hawkes (1971) is a type of point process used to model the occurrence of events over time. It is frequently utilized in finance to model the arrival of trades or other financial events in the market; here, we use it to describe the fluctuations of the price in the tick structure. We employ a two-dimensional Hawkes process with a symmetric kernel, characterized by two key features: self-excitation and mutual excitation (Bacry et al., 2013). Self-excitation means that the occurrence of an event increases the likelihood of the same type of event occurring, while mutual excitation implies that the occurrence of one event can increase the likelihood of other types of events occurring.

For the estimation, we use a long short-term memory (LSTM) network introduced by Hochreiter and Schmidhuber (1997). This is a type of recurrent neural network that is capable of learning long-term dependencies in data. LSTMs can be used for a variety of tasks that involve sequential data, including language modeling, machine translation, speech recognition, time-series forecasting based on historical data, and financial analysis (Zhang et al., 2021; Ghosh et al., 2022). They are particularly useful for tasks where capturing long-term dependencies is important, as the gating mechanism allows the network to retain important earlier information in the sequence while discarding irrelevant or outdated information.

Thus, our method uses a neural network for parameter estimation. Similar attempts have been made in recent years in various fields (Wlas et al., 2008; Wang et al., 2022; Wei and Jiang, 2022), but there is still limited research on using neural networks for estimation in financial time series. Specifically, we use a direct parameter estimation approach in which the neural network is trained to directly predict the model parameters based on the observed data. This method is one instance of network-based parameter estimation and is expected to be applicable to more complex models.
## 2 Method

### Price model

First, we explain the stochastic model used to describe high-frequency stock price movements. We model the up and down movements of the tick-level price process as a marked Hawkes process, which captures both the timing and size of the movements. This process is defined by the random measures

\[\mathbf{M}(\mathrm{d}u\times\mathrm{d}z)=\begin{bmatrix}M_{1}(\mathrm{d}u\times\mathrm{d}z_{1})\\ M_{2}(\mathrm{d}u\times\mathrm{d}z_{2})\end{bmatrix} \tag{1}\]

in the product space of time and jump size, \(\mathbb{R}\times E_{i}\), for \(i=1,2\), where \(E_{i}=\mathbb{N}\times\{i\}\) denotes the space of mark (jump) sizes for up and down price movements, respectively. Each measure in Eq. (1) is associated with a sequence of \(E_{i}\)-valued random variables \(\{Z_{i,n}\}\) in addition to the sequence of random times \(\{\tau_{i,n}\}\) for each \(i\). That is,

\[M_{i}(\mathrm{d}u\times\mathrm{d}z_{i})=\sum_{n}\delta_{\tau_{i,n},Z_{i,n}}(\mathrm{d}u\times\mathrm{d}z_{i})\]

with the Dirac measure \(\delta\), which is defined as follows: for any time interval \(I\) and \(A_{i}\subset E_{i}\),

\[\delta_{\tau_{i,n},Z_{i,n}}(I\times A_{i})=\left\{\begin{array}{ll}1,&\text{if }\tau_{i,n}\in I\text{ and }Z_{i,n}\in A_{i},\\ 0,&\text{otherwise.}\end{array}\right.\]

A vector of cadlag counting processes is defined by

\[\mathbf{N}_{t}=\begin{bmatrix}N_{1}(t)\\ N_{2}(t)\end{bmatrix}=\int_{(0,t]\times E}\mathrm{Dg}(z)\mathbf{M}(\mathrm{d}u\times\mathrm{d}z),\quad\mathrm{Dg}(z)=\begin{bmatrix}z_{1}&0\\ 0&z_{2}\end{bmatrix},\quad E=E_{1}\cup E_{2},\]

which counts the number of events weighted by their size, that is,

\[N_{i}(t)=N_{i}((0,t])=\sum_{n}Z_{i,n}\mathbbm{1}_{\{0<\tau_{i,n}\leq t\}},\quad\text{for }i=1,2,\]

i.e., the number of \(\tau_{i,n}\in(0,t]\), weighted by the jump sizes \(Z_{i,n}\).

**Assumption 1**.: The stochastic intensity \(\lambda_{i}\) for \(N_{i}\) is represented by the following:

\[\mathbf{\lambda}_{t}=\begin{bmatrix}\lambda_{1}(t)\\ \lambda_{2}(t)\end{bmatrix}=\mathbf{\mu}+\int_{(-\infty,t]\times E}\mathbf{\alpha}\circ\mathbf{b}(t-u)\mathbf{M}(\mathrm{d}u\times\mathrm{d}z) \tag{2}\]

where \(\mathbf{\mu}\) is a \(2\times 1\) positive constant base intensity vector, \(\mathbf{\alpha}\) is a positive \(2\times 2\) constant matrix, \(\mathbf{b}\) is a decay function matrix, and \(\circ\) denotes the element-wise product.

From the definition in Eq. (2), for simplicity, we assume that the future impact of an event on the intensities is independent of the jump size, as the integrand of the equation does not contain the jump variable \(z\). In addition, for further parsimony, we assume that:

\[\mathbf{\mu}=\begin{bmatrix}\mu\\ \mu\end{bmatrix},\quad\mathbf{\alpha}=\begin{bmatrix}\alpha_{1}&\alpha_{2}\\ \alpha_{2}&\alpha_{1}\end{bmatrix},\quad\mathbf{b}(t)=\begin{bmatrix}\mathrm{e}^{-\beta t}&\mathrm{e}^{-\beta t}\\ \mathrm{e}^{-\beta t}&\mathrm{e}^{-\beta t}\end{bmatrix}. \tag{3}\]

Hence, the set of parameters to be estimated is \(\{\mu,\alpha_{1},\alpha_{2},\beta\}\). We also assume that the mark \(Z_{i}\) at time \(t\) is independent of the \(\sigma\)-algebra generated by \((N_{j}(s),\lambda_{j}(s))_{s<t}\) for \(j=1,2\). The intensity process is assumed to be stationary, and the spectral radius of \(\left|\int_{0}^{\infty}\mathbf{\alpha}\circ\mathbf{b}(t)\mathrm{d}t\right|\) is assumed to be less than \(1\).
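Under the exponential parametrization in Eq. (3), sample paths of this bivariate process can be generated with Ogata's thinning algorithm. The following is a minimal sketch with unit marks (the function and variable names are ours):

```python
import numpy as np

def simulate_hawkes(mu, a1, a2, beta, horizon, seed=0):
    """Ogata thinning for the bivariate model of Assumption 1 with unit marks:
    lambda_i(t) = mu + sum over past events k of alpha[i, type_k] e^{-beta (t - t_k)}."""
    rng = np.random.default_rng(seed)
    alpha = np.array([[a1, a2], [a2, a1]])
    lam = np.array([mu, mu])   # intensities just after the most recent event
    t, t_last, events = 0.0, 0.0, []
    while True:
        lam_bar = lam.sum()    # valid upper bound: intensities only decay between events
        t += rng.exponential(1.0 / lam_bar)
        if t >= horizon:
            return events      # list of (time, type) with type 0 = up, 1 = down
        lam_t = mu + (lam - mu) * np.exp(-beta * (t - t_last))
        if rng.uniform() * lam_bar < lam_t.sum():        # accept the candidate time
            i = rng.choice(2, p=lam_t / lam_t.sum())     # draw the event type
            events.append((t, i))
            lam, t_last = lam_t + alpha[:, i], t         # self/mutual excitation jump
```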
Under this assumption, the Hawkes volatility of price movements - the standard deviation of the total net up and down movements - is represented by

\[\mathrm{SD}(N_{1}(t)-N_{2}(t))=\sqrt{\mathbf{u}^{\top}\left[\mathcal{T}\left\{\overline{\mathbf{Z}}\circ\mathbf{B}\right\}+\overline{\mathbf{Z}}^{(2)}\circ\mathrm{Dg}(\mathbb{E}[\mathbf{\lambda}_{t}])\right]\mathbf{u}t},\quad\mathbf{u}=\begin{bmatrix}1\\ -1\end{bmatrix} \tag{4}\]

where \(\mathcal{T}\) is an operator such that \(\mathcal{T}(\mathbf{M})=\mathbf{M}+\mathbf{M}^{\top}\) for a square matrix \(\mathbf{M}\), and \(\mathrm{Dg}(\cdot)\) denotes a diagonal matrix whose diagonal entries are composed of the argument. Furthermore,

\[\mathbb{E}[\mathbf{\lambda}_{t}]=(\mathbf{\beta}-\mathbf{\alpha})^{-1}\mathbf{\beta}\mathbf{\mu},\quad\mathbf{\beta}=\begin{bmatrix}\beta&0\\ 0&\beta\end{bmatrix} \tag{5}\]

and

\[\mathbb{E}[\mathbf{\lambda}_{t}\mathbf{\lambda}_{t}^{\top}]=(\mathbf{\beta}-\mathbf{\alpha})^{-1}\left(\frac{1}{2}\mathbf{\alpha}\mathrm{Dg}(\mathbb{E}[\mathbf{\lambda}_{t}])\mathbf{\alpha}+\mathbf{\beta}\mathbf{\mu}\mathbb{E}[\mathbf{\lambda}_{t}^{\top}]\right) \tag{6}\]

and

\[\mathbf{B}=\left\{\overline{\mathbf{Z}}^{\top}\circ\mathbb{E}[\mathbf{\lambda}_{t}\mathbf{\lambda}_{t}^{\top}]+\mathrm{Dg}(\mathbb{E}[\mathbf{\lambda}_{t}])\left(\mathbf{\alpha}\circ\overline{\mathbf{Z}}\right)^{\top}-\mathrm{Dg}(\overline{\mathbf{Z}})\mathbb{E}[\mathbf{\lambda}_{t}]\mathbb{E}[\mathbf{\lambda}_{t}]^{\top}\right\}(\mathbf{\beta}-\mathbf{\alpha})^{-1} \tag{7}\]

and, by the mark independence assumption,

\[\overline{\mathbf{Z}}=\begin{bmatrix}\mathbb{E}[Z_{1}]&\mathbb{E}[Z_{2}]\\ \mathbb{E}[Z_{1}]&\mathbb{E}[Z_{2}]\end{bmatrix},\quad\overline{\mathbf{Z}}^{(2)}=\begin{bmatrix}\mathbb{E}[Z_{1}^{2}]&\mathbb{E}[Z_{2}^{2}]\\ \mathbb{E}[Z_{1}^{2}]&\mathbb{E}[Z_{2}^{2}]\end{bmatrix}.\]

To calculate the volatility of price changes, rather than of the number of movements, we multiply Eq. (4) by the minimum tick size. Further details can be found in Lee and Seo (2017) and Lee (2022).

### Network model

Next, we construct a recurrent neural network for parameter estimation. The traditional method of estimating the parameters of a Hawkes process is MLE, which involves maximizing the log-likelihood function of the model,

\[L(T,\mathbf{\theta})=\sum_{i=1}^{2}\left(\int_{(0,T]\times E}\log\lambda_{i}(u)M_{i}(\mathrm{d}u\times\mathrm{d}z_{i})-\int_{0}^{T}\lambda_{i}(u)\mathrm{d}u\right),\]

to find the parameters most likely to generate the observed data.
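With the exponential kernel of Eq. (3), this log-likelihood can be evaluated in linear time via the standard recursion for exponentially decaying intensities. A minimal sketch for unit marks, with event types coded 0/1 for up/down moves (the notation is ours):

```python
import numpy as np

def hawkes_loglik(params, times, types, horizon):
    """L(T, theta) for the bivariate exponential-kernel model of Eq. (3)."""
    mu, a1, a2, beta = params
    alpha = np.array([[a1, a2], [a2, a1]])
    R = np.zeros(2)            # R[i] = excitation part of lambda_i just before an event
    ll, t_prev = 0.0, 0.0
    for t, j in zip(times, types):
        R *= np.exp(-beta * (t - t_prev))   # decay since the previous event
        ll += np.log(mu + R[j])             # log lambda_j at the event time
        R += alpha[:, j]                    # self- and cross-excitation jump
        t_prev = t
    # Compensator: sum over i of the integral of lambda_i over (0, T].
    comp = 2.0 * mu * horizon
    for t, j in zip(times, types):
        comp += alpha[:, j].sum() * (1.0 - np.exp(-beta * (horizon - t))) / beta
    return ll - comp
```

A numerical optimizer (e.g., `scipy.optimize.minimize` on the negative of this function) then yields the maximum likelihood estimates.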
In contrast, neural network based parameter estimation involves training a neural network to predict the parameters of a Hawkes process based on input data. To do this, numerous sample paths of inter-arrival times and movement types (up or down) are generated, where the parameters of each path are determined randomly. These sample paths are then used as feature variables, and the associated true parameter values are used as the target variables. The neural network is trained on these data. Once trained, it can be used to predict the parameter values of a new sample path of Hawkes process data. These predicted parameter values can then be used to compute further complicated formulae, such as the Hawkes volatility in Eq. (4), which is a measure of the variability of the process over time. Here, we use an LSTM model with three layers as the neural network.

The LSTM network is known for its capability to retain information over a long duration, making it appropriate for tasks that require context comprehension or state preservation. The gate mechanism in the LSTM architecture regulates the flow of information between the memory cells and the network, enabling the network to choose which information should be preserved or discarded. We also tested gated recurrent unit networks (Cho et al., 2014); however, the LSTM performed slightly better on our problem. A thorough account of the network's implementation is presented in the following section.

## 3 Simulation result

In this simulation study, we generate a set of paths of Hawkes processes to create a training dataset for the neural network. The dataset comprises a sufficient quantity of synthetic data, which is utilized to train the network to predict the real parameters of the Hawkes process. The true parameters for each Hawkes process are randomly selected to cover the entire range of possible values, with each path having distinct parameters. For the ranges of the parameters, we use the ranges of the estimates obtained by fitting past intraday price processes of various stocks to the Hawkes model. Approximately 30 stock symbols, including AAPL, AMZN, C, FB, GOOG, IBM, MCD, MSFT, NVDA, and XOM, from 2018 to 2019 are used.

For the estimation, we focus on high frequency rather than ultra-high frequency. More precisely, the raw data are filtered as follows. We observe the mid-price at intervals of \(\Delta t=0.1\) seconds, noting any changes from the previously observed price. If a change is detected, we record the exact time of the change and the new price. If the price remains the same, we move on to the next interval of 0.1 seconds and repeat the process. This method allows us to filter out unnecessary movements, commonly referred to as microstructure noise, observed at ultra-high frequencies. Once the set of estimates is obtained, we generate Hawkes process paths of 2,000 time steps. These paths, together with the estimates as target variables, are then used for neural network training. This method yields tens of thousands of datasets, which are sufficient to construct a comprehensive training set.

The implementation of the LSTM network model is as follows. The first layer consists of 12 units and handles the sequential data: a two-dimensional input time series comprising inter-arrival times and movement types (up or down), with up and down movements encoded as 1 and 2, respectively. The output of this layer is a sequence of vectors of length 12, where each vector is the output of the first layer at a given time step. Thus, if the original time series has 2,000 time steps, the output of the first layer is a \(2,000\times 12\) matrix (time steps \(\times\) number of units). The second layer has 12 units and produces a single vector (not a sequence) of length 12. The final layer is a dense (fully connected) layer with four units, which produces outputs representing the parameters of the Hawkes model: \(\mu,\alpha_{1},\alpha_{2}\) and \(\beta\). If we extend the model for more complexity, the number of units in the last dense layer is adjusted accordingly, since each unit in the last layer represents one parameter.
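Sketched in Keras (our choice of framework, which is not specified above), the three-layer architecture reads as follows; the 2,000-step input length and the four-unit output match the description:

```python
from tensorflow.keras import layers, models

# Input: sequences of (inter-arrival time, movement type) pairs, 2,000 steps long.
model = models.Sequential([
    layers.LSTM(12, return_sequences=True, input_shape=(2000, 2)),  # 2,000 x 12 output
    layers.LSTM(12),          # single length-12 summary vector
    layers.Dense(4),          # one unit per parameter: mu, alpha_1, alpha_2, beta
])
model.compile(optimizer="adam", loss="mse")
# model.fit(paths, true_params, epochs=300)  # trained on the simulated paths
```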
As pointed out by Mei and Eisner (2017), a natural extension of the LSTM is the deep LSTM, especially for complex structured data such as high-frequency financial data, where the effectiveness of multi-layering seems promising. However, we utilized a relatively parsimonious Hawkes model and achieved sufficient performance without employing a large number of layers, so we used the LSTM model proposed above, which is relatively fast to train. If the data structure and model become more complex, an extension to a deep LSTM would be helpful.

For training, we use 75,000 sample data points; hence, the dataset for the neural network's input has a shape of \(75,000\times 1,000\times 2\). The Adam (Kingma and Ba, 2014) optimizer is used for training, generally with more than 300 epochs. For testing, we use 15,000 data points that were not used for training, evaluating the model's predictions on these unseen data in terms of mean squared error (MSE). We then compare the results with the traditional MLE. The computation times were measured on a typical commercial PC. The results show that MLE has slightly better performance in terms of MSE; however, the neural network also gives reasonable results. Meanwhile, the usual numerical method for MLE requires many iterations and is time consuming, whereas a well-trained neural network computes an estimate very quickly, in less than a hundredth of the time required by MLE.

| | Neural network | MLE |
| --- | --- | --- |
| MSE | 0.0513 | 0.0417 |
| Time (sec) | 0.0120 | 1.763 |

To understand the basic properties of the neural network estimator, we investigate its sampling distribution. To this end, we generate 100,000 paths of length 2,000 using the fixed parameter values \(\mu=0.3\), \(\alpha_{1}=0.4\), \(\alpha_{2}=0.7\), \(\beta=1.5\), and compare the sampling distributions obtained using the neural network and MLE methods. Overall, MLE outperforms the neural network slightly; even so, the general performance of the neural network is also quite good.

| Parameter | True | Neural network mean | Neural network S.D. | MLE mean | MLE S.D. |
| --- | --- | --- | --- | --- | --- |
| \(\mu\) | 0.3000 | 0.3151 | 0.0358 | 0.3036 | 0.0314 |
| \(\alpha_{1}\) | 0.4000 | 0.4429 | 0.0661 | 0.3988 | 0.0500 |
| \(\alpha_{2}\) | 0.7000 | 0.5733 | 0.0697 | 0.7024 | 0.0608 |
| \(\beta\) | 1.5000 | 1.5736 | 0.1364 | 1.5078 | 0.1145 |

Specifically, the above example compares the performance of the numerical optimizer (used in MLE) and the neural network. Owing to the nature of simulation studies, the numerical optimizer has several advantages. Its outcome is often influenced by the initial value; in the example above, because the true parameter value is known, it was used directly as the initial value, which may have improved the performance compared to a random initialization. As finding a good initial value is sometimes challenging, a numerical optimizer may perform worse in real-world problems. In addition, if the numerical optimizer exhibits unexpected behavior, such as failure to converge, human intervention may be necessary, such as adjusting the initial value or retrying the procedure. As the complexity of the model and the level of noise in the empirical data increase, the advantages of the numerical optimizer may diminish. In such cases, further research may be required to determine whether the numerical optimizer still outperforms the neural network.

## 4 Empirical result

The approach in the previous section can be directly applied to empirical data.
However, we need to consider whether robust estimation can be made in situations where the empirical data do not completely follow the Hawkes process. For example, in filtered high-frequency price process data, a subduing (inhibitory) effect, where an event reduces the intensity, can sometimes be observed. This can result in negative estimates for the parameter \(\alpha\), which violates the definition of the Hawkes model used here. To address this, a more complex model could be used; as that falls outside the scope of this study, an alternative is to use a softplus activation function, \(\log(1+\exp(x))\), for the last layer of the neural network. This approach resembles constrained optimization in that it ensures positive estimates. Furthermore, instead of predicting \(\beta\) directly, we trained on and predicted \(\beta-\alpha_{1}-\alpha_{2}\). Combined with the softplus function, this ensures that the branching ratio condition of the Hawkes model is met.
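A minimal sketch of this constrained output head (again in Keras, our choice) is given below: all four outputs pass through a softplus, the fourth output is interpreted as \(\beta-\alpha_{1}-\alpha_{2}\), and \(\beta\) is recovered by addition, which keeps \(\beta>\alpha_{1}+\alpha_{2}\) by construction:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

inputs = tf.keras.Input(shape=(2000, 2))
h = layers.LSTM(12, return_sequences=True)(inputs)
h = layers.LSTM(12)(h)
# Outputs: (mu, alpha_1, alpha_2, delta) with delta = beta - alpha_1 - alpha_2.
out = layers.Dense(4, activation="softplus")(h)   # softplus keeps all estimates positive
model = models.Model(inputs, out)

def recover_parameters(pred):
    """Map network outputs back to (mu, alpha_1, alpha_2, beta). Since delta > 0,
    beta > alpha_1 + alpha_2, i.e. the branching ratio condition holds."""
    mu, a1, a2, delta = (pred[..., k] for k in range(4))
    return mu, a1, a2, delta + a1 + a2
```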
To further increase robustness, a combination of empirical data and its maximum likelihood estimates can be used as training data, rather than relying solely on simulation data. This approach accounts for the possibility of model mis-specification, for instance, when the observed data do not perfectly align with the Hawkes process. By incorporating the MLE into the training data, the neural network can better mimic the MLE of the Hawkes model. Thus, if the goal is to construct a neural network that closely approximates the MLE even under possible model mis-specification, this method can be effective. The procedure is as follows: we select segments of observed intraday data of inter-arrival times and movement types, each segment consisting of 2,000 time steps. These selected paths are used to fit the Hawkes model by MLE. The resulting dataset is then used to train the neural network, with the inter-arrival times and movement types of the real data serving as feature variables and the maximum likelihood estimates as the target variables.

Figure 1 illustrates the intraday dynamics of the estimates of the Hawkes model on a specific date. The data used for this illustration are out-of-sample data not used for training. Specifically, the model was estimated on segments of data corresponding to every 2,000 time steps, a time horizon of approximately 10-20 minutes. To create a more continuous graph, the estimation windows were moved forward gradually with sufficient overlap. The neural network shows results that are very consistent with MLE.

Figure 1: Intraday estimates for the NBBO of AAPL using MLE and neural network

Figure 2 presents the instantaneous intraday annualized Hawkes volatility calculated using MLE, the neural network, and, as a benchmark, the nonparametric realized volatility of Andersen et al. (2003). The realized volatility is calculated from the price process observed at 1-second intervals over the period. All three measures show a similar trend throughout the day. Although it is not possible to present all the results examined, in some cases MLE showed unstable volatility dynamics. This is likely because the 2,000-step length used for estimation is a relatively small sample size for estimating the parameters of the Hawkes model.

## 5 Conclusion

This study shows that a neural network can accurately estimate time-series parameters, with an accuracy similar to MLE and much faster computation. While the example used here concerns the calculation of Hawkes volatility, our proposed method can be applied in various fields. It can be particularly useful in cases where the model is complex and traditional estimation procedures are challenging, such as modeling the entire limit order book. Further research in this area is expected to be ongoing and diverse.

## Acknowledgements

This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. NRF-2021R1C1C1007692).
2306.02738
A Large-Scale Study of Probabilistic Calibration in Neural Network Regression
Accurate probabilistic predictions are essential for optimal decision making. While neural network miscalibration has been studied primarily in classification, we investigate this in the less-explored domain of regression. We conduct the largest empirical study to date to assess the probabilistic calibration of neural networks. We also analyze the performance of recalibration, conformal, and regularization methods to enhance probabilistic calibration. Additionally, we introduce novel differentiable recalibration and regularization methods, uncovering new insights into their effectiveness. Our findings reveal that regularization methods offer a favorable tradeoff between calibration and sharpness. Post-hoc methods exhibit superior probabilistic calibration, which we attribute to the finite-sample coverage guarantee of conformal prediction. Furthermore, we demonstrate that quantile recalibration can be considered as a specific case of conformal prediction. Our study is fully reproducible and implemented in a common code base for fair comparisons.
Victor Dheur, Souhaib Ben Taieb
2023-06-05T09:33:39Z
http://arxiv.org/abs/2306.02738v2
# A Large-Scale Study of Probabilistic Calibration in Neural Network Regression

###### Abstract

Accurate probabilistic predictions are essential for optimal decision making. While neural network miscalibration has been studied primarily in classification, we investigate this in the less-explored domain of regression. We conduct the largest empirical study to date to assess the probabilistic calibration of neural networks. We also analyze the performance of recalibration, conformal, and regularization methods to enhance probabilistic calibration. Additionally, we introduce novel differentiable recalibration and regularization methods, uncovering new insights into their effectiveness. Our findings reveal that regularization methods offer a favorable tradeoff between calibration and sharpness. Post-hoc methods exhibit superior probabilistic calibration, which we attribute to the finite-sample coverage guarantee of conformal prediction. Furthermore, we demonstrate that quantile recalibration can be considered as a specific case of conformal prediction. Our study is fully reproducible and implemented in a common code base for fair comparisons.

Machine Learning, Probabilistic Calibration

## 1 Introduction

Neural network predictions affect critical decisions in many applications, including medical diagnostics and autonomous driving (Gulshan et al., 2016; Guizilini et al., 2020). However, effective decision making often requires accurate probabilistic predictions (Gawlikowski et al., 2021; Abdar et al., 2021). For example, consider a probabilistic regression model that produces 90% prediction intervals. An important property would be that 90% of these prediction intervals contain the realizations. For models that output a predictive distribution, _probabilistic calibration_ is an important property that states that all quantiles must be calibrated, i.e., the frequency of realizations below these quantiles must match the corresponding quantile level. Additionally, predictive distributions should be sufficiently sharp (i.e., concentrated around the realizations) and leverage the information in the inputs.

In the classification setting, Guo et al. (2017) found that common neural architectures trained on image and text data were miscalibrated, sparking increased interest in neural network calibration. In a follow-up study, Minderer et al. (2021) showed that more recent neural architectures demonstrate improved calibration. However, there has been less research on calibration for neural probabilistic regression models compared to classification. Therefore, it remains uncertain whether the same results apply to the regression setting. This paper addresses this gap by conducting a comprehensive study on probabilistic calibration for regression using tabular data. We explore various calibration methods, including quantile recalibration (Kuleshov, Fenner, et al., 2018) and conformalized quantile regression (Romano, Patterson, et al., 2019). We also consider regularization methods, which have been shown to perform well in the classification setting (Karandikar et al., 2021; Popordanoska et al., 2022; Yoon et al., 2023).

We make the following main contributions:

1. We conduct the largest empirical study to date on probabilistic calibration of neural regression models using 57 tabular datasets (Sections 4 and 6).
We consider multiple state-of-the-art calibration methods (Section 5), including post-hoc recalibration, conformal prediction, and regularization methods, with various scoring rules and predictive models.
2. Building on quantile recalibration, we propose a new differentiable calibration map using kernel density estimation, which provides improved negative log-likelihood compared to baselines. We also introduce two new regularization objectives based on the probabilistic calibration error (Section 5).
3. We show that quantile recalibration is a special case of conformal prediction, providing an explanation for their superior performance in terms of probabilistic calibration (Section 6).

## 2 Background

We consider a univariate regression problem where the target variable \(Y\in\mathcal{Y}\) depends on an input variable \(X\in\mathcal{X}\), with \(\mathcal{Y}=\mathbb{R}\) representing the target space and \(\mathcal{X}\) representing the input space. Our objective is to approximate the conditional distribution \(P_{Y|X}\) using training data \(\mathcal{D}=\{\,(X_{i},Y_{i})\,\}_{i=1}^{N}\) where \((X_{i},Y_{i})\stackrel{{\text{i.i.d.}}}{{\sim}}P\equiv P_{X}\times P_{Y|X}\). A probabilistic predictor \(F_{\theta}:\mathcal{X}\rightarrow\mathcal{F}\) is a function parametrized by \(\theta\in\Theta\) that maps an input \(x\in\mathcal{X}\) to a predictive cumulative distribution function (CDF) \(F_{\theta}(\cdot\mid x)\) in the space \(\mathcal{F}\) of distributions over \(\mathbb{R}\). Additionally, given \(x\in\mathcal{X}\), we denote the predictive quantile function (QF) by \(Q_{\theta}(\cdot\mid x)\), and the probability density function (PDF) by \(f_{\theta}(\cdot\mid x)\). Similarly, the marginal CDF, QF, or PDF of a random variable \(R\) is denoted by \(F_{R}\), \(Q_{R}\), or \(f_{R}\), respectively.

**Probabilistic calibration.** Given an input \(x\in\mathcal{X}\), the model \(F_{\theta}\) is ideal if it precisely matches the conditional distribution \(P_{Y|X}\). However, learning the ideal model based on finite data is not possible without additional (strong) assumptions (Foygel Barber et al., 2021). To avoid additional assumptions, we can instead enforce certain desirable properties that are attainable in practice and that a good or ideal forecaster should exhibit. One such property is probabilistic calibration. Let \(Z=F_{\theta}(Y\mid X)\in[0,1]\) denote the probability integral transform (PIT) of \(Y\) conditional on \(X\). The model \(F_{\theta}\) is _probabilistically calibrated_ (also known as PIT-calibrated) if \(\forall\alpha\in[0,1]\),

\[F_{Z}(\alpha)\doteq\text{Pr}(Z\leq\alpha)=\alpha. \tag{1}\]

Let \(U\in[0,1]\) be a uniform random variable independent of \(Z\). The left and right hand sides of (1) can be interpreted as the CDF of \(Z\) and \(U\), respectively, as a function of \(\alpha\). This shows that the uniformity of the PIT is equivalent to probabilistic calibration (Dawid, 1984). Since the ideal forecaster is probabilistically calibrated, we can require this property from any competent forecaster. However, probabilistic calibration, though necessary, is not sufficient for making accurate probabilistic predictions. Additionally, as discussed by Gneiting and Resin (2021), probabilistic calibration primarily addresses unconditional aspects of predictive performance and is implied by more robust conditional notions of calibration, such as auto-calibration.
**Probabilistic calibration error.** The most common approach for evaluating probabilistic calibration is to consider distances of the form \(\int_{0}^{1}|F_{Z}(\alpha)-F_{U}(\alpha)|^{p}d\alpha\) where \(p>0\). The particular cases of \(p=1\) and \(p=2\) are known as the 1-Wasserstein distance and the Cramér-von Mises distance, respectively. We denote the empirical CDF of the PIT as \(\hat{F}_{Z}(\alpha)=\frac{1}{N}\sum_{i=1}^{N}\mathbb{1}(Z_{i}\leq\alpha)\), where \(Z_{i}=F_{\theta}(Y_{i}\mid X_{i})\) are PIT realizations. A common approach to assess probabilistic calibration using Monte Carlo estimation is to evaluate it at equidistant values \(\alpha_{1}<\dots<\alpha_{M}\) as follows:

\[\text{PCE}_{p}(F_{\theta},\mathcal{D})=\frac{1}{M}\sum_{j=1}^{M}\left|\alpha_{j}-\hat{F}_{Z}(\alpha_{j})\right|^{p}. \tag{2}\]

This metric has been employed previously in the literature, e.g., by Zhao et al. (2020) and Zhou et al. (2021) with \(p=1\), and by Kuleshov, Fenner, et al. (2018) and Utpala and Rai (2020) with \(p=2\). It is important to note that, unlike the classical definition of the \(p\)-norm, we do not raise the sum to the power \(\frac{1}{p}\) in (2), to maintain consistency with prior literature. In the subsequent sections, we focus our analysis on \(\text{PCE}_{1}\) and use the abbreviation PCE for brevity.

One limitation of scalar metrics like PCE is their inability to provide detailed information regarding calibration errors at individual quantile levels \(\alpha_{1},\dots,\alpha_{M}\). Instead, PIT reliability diagrams offer a visual assessment of probabilistic calibration across all quantile levels by plotting the empirical CDF of the PIT \(Z\). These diagrams display the right side of (1) against its left side, with a perfectly calibrated model represented by a diagonal line (in the asymptotic case). Figure 2 provides examples of such reliability diagrams, which have been employed in studies by Pinson and Hagedorn (2012) and Kuleshov, Fenner, et al. (2018).
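For illustration, the estimator in (2) can be computed in a few lines; the following NumPy sketch takes the PIT realizations \(Z_{i}=F_{\theta}(Y_{i}\mid X_{i})\) as input (the function name and the default choice of \(M\) are ours):

```python
import numpy as np

def pce(pit_values, M=100, p=1):
    """Monte Carlo estimate of PCE_p in Eq. (2): the mean of
    |alpha_j - F_hat_Z(alpha_j)|^p over M equidistant quantile levels."""
    alphas = np.arange(1, M + 1) / (M + 1)                    # equidistant levels in (0, 1)
    z = np.sort(np.asarray(pit_values))
    ecdf = np.searchsorted(z, alphas, side="right") / len(z)  # empirical CDF of the PIT
    return np.mean(np.abs(alphas - ecdf) ** p)
```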
To our knowledge, the only regularization objective specifically targeting probabilistic calibration is quantile regularization (Utpala and Rai, 2020). Other types of uncertainty quantification methods include ensembling (Lakshminarayanan, Pritzel, et al., 2017) and Bayesian methods (Jospin et al., 2022). ## 4 Are Neural Regression Models Probabilistically Calibrated? We conduct an extensive empirical study to evaluate the probabilistic calibration of neural regression models. To this end, we calculate the _probabilistic calibration error_ defined in (2) for various state-of-the-art models across multiple benchmark datasets. **Benchmark datasets**. We analyze a total of 57 datasets, including 27 from the OpenML curated benchmark (Grinsztajn et al., 2022), 18 from the AutoML Repository (Gijsbers et al., 2019), and 12 from the UCI Machine Learning Repository (Dua and Graff, 2017). These datasets are widely used in the evaluation of deep probabilistic models and uncertainty quantification, as evidenced by previous studies such as Fakoor et al. (2021), Chung et al. (2021), Zhou et al. (2021), Utpala and Rai (2020), and Gal and Ghahramani (2016). Figure 1 provides an overview of the utilization of these datasets in previous studies. To the best of our knowledge, our study represents the most comprehensive assessment of probabilistic calibration for neural regression models published to date. **Neural probabilistic regression models**. We consider three state-of-the-art neural probabilistic regression models. The first model predicts a parametric distribution, where the parameters are obtained as outputs of a hypernetwork. Previous studies have often focused on the Gaussian distribution (Lakshminarayanan, Pritzel, et al., 2017; Utpala and Rai, 2020; Zhao et al., 2020). To introduce more flexibility, we consider a mixture of \(K\) Gaussian distributions. Given an input \(x\in\mathcal{X}\), the hypernetwork parametrizes the means \(\mu_{k}(x)\), standard deviations \(\sigma_{k}(x)\), and weights \(w_{k}(x)\) for each component \(k=1,...,K\). To ensure positive standard deviations and that the mixture weights form a discrete probability distribution, we use the Softplus and Softmax activations, respectively (a short sketch of such a head is given below). We have two variants of this model depending on the scoring rule used for training: the negative log-likelihood (NLL) or the continuous ranked probability score (CRPS). These models are denoted as MIX-NLL and MIX-CRPS, respectively. It is worth noting that the CRPS of a mixture of Gaussians has a closed-form expression (Grimit et al., 2006). The second model predicts quantiles of the distribution (Tagasovska and Lopez-Paz, 2019; Chung et al., 2021; Feldman et al., 2021). Specifically, given an input \(x\in\mathcal{X}\) and a quantile level \(\alpha\in[0,1]\), the model outputs a quantile \(Q_{\theta}(\alpha\mid x)\). The full quantile function can be obtained by evaluating the model at multiple quantile levels. The model is trained by minimizing the quantile score at multiple levels, which is asymptotically equivalent to minimizing the CRPS (Bracher et al., 2021). We denote this model as SQR-CRPS, where SQR stands for simultaneous quantile regression (Tagasovska and Lopez-Paz, 2019). **Experimental setup**. We adopt the large-sized regime introduced by Grinsztajn et al. (2022), which involves truncating the datasets to a maximum of 50,000 examples. Among the 57 datasets, the number of examples ranges from 135 to 50,000, and the number of features ranges from 3 to 3,611.
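For concreteness, the mixture head described above admits a compact implementation; the following is a minimal sketch assuming PyTorch, with illustrative layer sizes rather than the exact architecture used in the experiments:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianMixtureHead(nn.Module):
    """Hypernetwork head emitting the parameters of a K-component mixture."""

    def __init__(self, in_dim: int, hidden: int = 64, K: int = 3):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, K)      # component means mu_k(x)
        self.scale = nn.Linear(hidden, K)   # pre-activation scales
        self.logit = nn.Linear(hidden, K)   # pre-activation weights

    def forward(self, x):
        h = self.body(x)
        return (self.mu(h),
                F.softplus(self.scale(h)) + 1e-6,   # sigma_k(x) > 0
                F.softmax(self.logit(h), dim=-1))   # w_k(x) sum to 1

def mixture_nll(mu, sigma, w, y):
    """Negative log-likelihood of y under the predicted mixture (MIX-NLL)."""
    comp = torch.distributions.Normal(mu, sigma)
    log_p = comp.log_prob(y.unsqueeze(-1)) + torch.log(w)
    return -torch.logsumexp(log_p, dim=-1).mean()
```

The Softplus and Softmax activations enforce positive scales and normalized weights, matching the parameterization described above; the NLL objective corresponds to the MIX-NLL variant.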
Each of the 57 datasets is divided into four sets: training (65%), validation (10%), calibration (15%), and test (10%). We normalize the input \(X\) and target \(Y\) using the mean and standard deviation from the training split. The final predictions are then transformed back to the original scale. For our neural network models, we employ the same fully-connected architecture as previous studies conducted by Kuleshov, Fenner, et al. (2018), Chung et al. (2021), and Fakoor et al. (2021). Further details regarding the model hyperparameters can be found in Appendix C.
Figure 1: Multiple regression benchmark datasets with references. Datasets inside parentheses have not been considered in this study. Full dataset names are available in Table 1.
**Results**. In Figure 2, the first row displays the PCE (averaged over five random train-validation-test splits) for MIX-NLL in blue on each of the 57 datasets. For comparison, the PCE of a perfectly calibrated model, i.e. with uniformly distributed PITs, computed using \(5\times 10^{4}\) simulated values is shown in orange. The second row presents reliability diagrams for five datasets, with 90% consistency bands as in Gneiting, Wolffram, et al. (2023). Similar information is provided for MIX-CRPS and SQR-CRPS in Figures 12 and 13 in Appendix B.4, respectively. Additionally, reliability diagrams for all datasets can be found in Figure 15 in Appendix B.6. The analysis reveals that the (average) PCE is generally high across many datasets, although there are significant variations between datasets. To test the statistical significance of these results, \(10^{4}\) samples were generated from the sampling distribution of the average PCE under the null hypothesis of probabilistic calibration. The resulting sampling distribution for all datasets is presented in Appendix B.5. By computing the p-value associated with a one-sided test in the upper tail of the distribution (as illustrated in Appendix B.5), it was observed that most datasets have a p-value of zero. This indicates that the average PCE obtained for the considered model is higher than all the simulated average PCEs of the probabilistically calibrated model. Applying a threshold of 0.01 and a Holm correction for the 57 hypothesis tests, the null hypothesis is rejected for 11 datasets out of the 57. Overall, the results indicate that the neural models considered in this study are generally probabilistically mis-calibrated on a significant number of benchmark tabular datasets. In Section 6, we will further explore how calibration methods can substantially improve the PCE of neural models.
Figure 2: The top row shows the PCE for different datasets with one standard error (error bar). The bottom row gives examples of PIT reliability diagrams for five datasets.
## 5 Calibration Methods We begin by discussing the three main approaches to calibration: quantile recalibration, conformal prediction, and regularization-based calibration. Following that, we introduce two novel variants of regularization-based calibration. Quantile recalibration and conformal prediction are post-hoc methods, meaning they are applied after model training. These approaches utilize a separate calibration dataset \(\mathcal{D}^{\prime}=\{\,(X_{i}^{\prime},Y_{i}^{\prime})\,\}_{i=1}^{N^{\prime}}\), where \((X_{i}^{\prime},Y_{i}^{\prime})\stackrel{{\text{i.i.d.}}}{{\sim}}P_{X,Y}\). On the other hand, regularization-based calibration operates directly during training and relies solely on the training data \(\mathcal{D}\).
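The post-hoc recipe developed in the next subsection reduces to two steps: compute PIT values on the calibration set and compose the predictive CDF with an estimate of their CDF. A minimal sketch of that pipeline, assuming NumPy and the simple empirical calibration map that anticipates the estimator in (6) below (all names are illustrative):

```python
import numpy as np

def empirical_map(z_cal):
    """phi(.) = empirical CDF of the PIT values from the calibration set."""
    z_sorted = np.sort(np.asarray(z_cal))
    def phi(alpha):
        return np.searchsorted(z_sorted, alpha, side="right") / len(z_sorted)
    return phi

def recalibrate(F_theta, phi):
    """Compose a fitted predictive CDF with the calibration map: phi o F_theta."""
    return lambda y, x: phi(F_theta(y, x))

# Usage: phi = empirical_map(pit_values)        # PITs on the calibration set
#        F_recal = recalibrate(F_theta, phi)    # recalibrated predictive CDF
```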
### Quantile Recalibration Quantile recalibration aims to transform a potentially mis-calibrated CDF \(F_{\theta}\) into a probabilistically calibrated CDF \(F_{\theta}^{\prime}=F_{Z}\circ F_{\theta}\), using the calibration map \(F_{Z}\), which represents the CDF of the PITs for \(F_{\theta}\). For a given quantile level \(\alpha\in[0,1]\), the recalibrated CDF \(F^{\prime}_{\theta}\) satisfies: \[\text{Pr}(F^{\prime}_{\theta}(Y\mid X)\leq\alpha) =\text{Pr}(F_{\theta}(Y\mid X)\leq Q_{Z}(\alpha)) \tag{3}\] \[=F_{Z}(Q_{Z}(\alpha)) \tag{4}\] \[=\alpha. \tag{5}\] In practice, \(F_{Z}\) is not directly available and needs to be estimated from data. Kuleshov, Fenner, et al. (2018) proposed estimating it using isotonic regression, while Utpala and Rai (2020) showed that computing the empirical CDF is an equivalent and simpler method. Specifically, given a set of PIT values \(Z^{\prime}_{i}=F_{\theta}(Y^{\prime}_{i}\mid X^{\prime}_{i}),i=1,\ldots,N^{\prime}\), the calibration map \(\phi^{\text{EMP}}\) is computed as: \[\phi^{\text{EMP}}(\alpha;\{\,Z^{\prime}_{i}\}_{i=1}^{N^{\prime}})=\frac{1}{N^{\prime}}\sum_{i=1}^{N^{\prime}}\mathbbm{1}(Z^{\prime}_{i}\leq\alpha), \tag{6}\] where \(\alpha\in[0,1]\). Similarly to Utpala and Rai (2020), we also consider a linear calibration map \(\phi^{\text{LIN}}\), which is continuous, and corresponds to a linear interpolation between the points \(\Big{\{}\;(0,0),(Z^{\prime}_{(1)},\nicefrac{{1}}{{N^{\prime}+1}}),\ldots,(Z^{\prime}_{(N^{\prime})},\nicefrac{{N^{\prime}}}{{N^{\prime}+1}}),(1,1)\;\Big{\}}\), where \(Z^{\prime}_{(k)}\) is the \(k\)th order statistic of \(Z^{\prime}_{1},\ldots,Z^{\prime}_{N^{\prime}}\). In addition, we propose a calibration map based on kernel density estimation (KDE), denoted as \(\phi^{\text{KDE}}\). This calibration map offers the advantage of being differentiable and can lead to improved NLL performance. The key idea is to use a relaxed approximation of the indicator function, which allows us to make the PIT CDF (6) differentiable. Specifically, we compute \[\mathbbm{1}_{\tau}(a\leq b)=\sigma(\tau(b-a))\approx\mathbbm{1}(a\leq b),\] where \(\tau>0\) is a hyperparameter and \(\sigma(x)=\frac{1}{1+e^{-x}}\) denotes the sigmoid function. The resulting smoothed empirical CDF is given by \[\phi^{\text{KDE}}(\alpha;\{\,Z^{\prime}_{i}\,\}_{i=1}^{N^{\prime}})=\frac{1}{N^{\prime}}\sum_{i=1}^{N^{\prime}}\mathbbm{1}_{\tau}(Z^{\prime}_{i}\leq\alpha). \tag{7}\] This corresponds to estimating the CDF \(F_{Z}\) using KDE based on \(N^{\prime}\) realizations of \(Z\) (\(\{\,Z^{\prime}_{i}\,\}_{i=1}^{N^{\prime}}\)). Since \(\sigma\) is the CDF of the logistic distribution, we use the PDF of the logistic distribution as the kernel in the KDE. Algorithm 1 summarises this method. ``` Input: Predictive CDF \(F_{\theta}\) and \(\mathcal{D}^{\prime}=\{\;(X^{\prime}_{i},Y^{\prime}_{i})\;\}_{i=1}^{N^{\prime}}\). Compute \(Z^{\prime}_{i}=F_{\theta}(Y^{\prime}_{i}\mid X^{\prime}_{i})\quad(i=1,\ldots,N^{\prime})\) Compute the calibration map \(\phi\), either \(\phi^{\text{EMP}}\), \(\phi^{\text{LIN}}\), or \(\phi^{\text{KDE}}\) Return: Recalibrated CDF \(F^{\prime}_{\theta}=\phi\circ F_{\theta}\). ``` **Algorithm 1** Quantile recalibration ### Conformal Prediction Let us assume the realizations of our calibration dataset \(\mathcal{D}^{\prime}\) are drawn exchangeably from \(P_{X,Y}\) (this is implied by the common i.i.d. assumption).
Given a predictive model \(M_{\theta}\) and a coverage level \(\alpha\in[0,1]\), (inductive) conformal prediction allows us to construct a prediction set \(C_{\alpha}(X)\subseteq\mathcal{Y}\) for any input \(X\), satisfying the property: \[\text{Pr}(Y\in C_{\alpha}(X)) =\frac{\lceil(N^{\prime}+1)\alpha\rceil}{N^{\prime}+1} \tag{8}\] \[\approx\alpha. \tag{9}\] Conformal prediction achieves this by utilizing a conformity score \(s_{\theta}(Y\mid X)\), which intuitively quantifies the similarity between new samples and previously observed samples. When the conformity score increases with \(Y\), an interval \(C_{\alpha}(X)=(-\infty,s_{\theta}^{-1}(\alpha\mid X)]\) can be constructed, ensuring the conformal guarantee (8) at level \(\alpha\). Let \(Q^{\prime}_{\theta}(\alpha\mid X)=s_{\theta}^{-1}(\alpha\mid X)\) represent the (revised) model obtained through conformal prediction from \(Q_{\theta}(\alpha\mid X)\). Under the assumption that \(Q^{\prime}_{\theta}\) is continuous and strictly increasing, the conformal guarantee implies that \(\text{Pr}(Y\leq Q^{\prime}_{\theta}(\alpha\mid X))\approx\alpha\), which indicates approximate probabilistic calibration at level \(\alpha\). Conformalized Quantile Regression (Romano et al., 2019) is an example of a conformal procedure, where the conformity score is defined as \(s_{\theta}(Y\mid X)=Y-Q_{\theta}(\alpha\mid X)\), representing the quantile residual. Another example is Distributional Conformal Prediction (Izbicki et al., 2020; Chernozhukov et al., 2021), which employs the conformity score \(s_{\theta}(Y\mid X)=F_{\theta}(Y\mid X)\), referring to the PIT. Algorithm 2 provides a summary of how to compute calibrated quantiles using inductive conformal prediction. ``` Input: Trained model \(M_{\theta}\), \(\mathcal{D}^{\prime}=\{\;(X^{\prime}_{i},Y^{\prime}_{i})\;\}_{i=1}^{N^{\prime}}\), strictly increasing conformity score \(s\), quantile level \(\alpha\in[0,1]\), input \(X\). Compute \(S_{i}=s_{\theta}(Y^{\prime}_{i}\mid X^{\prime}_{i})\quad(i=1,\ldots,N^{\prime})\) Compute \(\hat{q}=S_{(\lceil(N^{\prime}+1)\alpha\rceil)}\) where \(S_{(k)}\) denotes the \(k\)th smallest value among \(\{\;S_{1},\ldots,S_{N^{\prime}},+\infty\;\}\) Return: Calibrated quantile \(Q^{\prime}_{\theta}(\alpha\mid X)=s_{\theta}^{-1}(\hat{q}\mid X)\) ``` **Algorithm 2** Calibrated quantiles with conformal prediction ### Regularization-based Calibration Regularization-based calibration methods aim to enhance model calibration by incorporating a regularization term into the training objective. Although widely used in classification, there are relatively fewer methods specifically designed for regression problems. In this section, we discuss two approaches: quantile regularization (Utpala and Rai, 2020) and the truncation method (Chung et al., 2021). The main steps of regularization-based calibration are summarized in Algorithm 3. ``` Input: Model \(M_{\theta}\), calibration regularizer \(\mathcal{R}(\theta)\) and tuning parameter \(\lambda\geq 0\). Compute \(\theta^{*}=\arg\min_{\theta\in\Theta}\mathcal{L}^{\prime}(\theta;\mathcal{D})\) where \(\mathcal{L}^{\prime}(\theta;\mathcal{D})=\nicefrac{{1}}{{N}}\sum_{i=1}^{N}\mathcal{L}(M_{\theta}(\cdot\mid X_{i}),Y_{i})+\lambda\mathcal{R}(\theta;\mathcal{D})\) Return: Regularized model \(M_{\theta}\).
``` **Algorithm 3** Regularization-based calibration #### 5.3.1 Quantile Regularization The regularizer proposed by Utpala and Rai (2020) aims to measure the deviation of the PIT variable \(Z\) from a uniform distribution, which is characteristic of a probabilistically calibrated model. This regularization penalty encourages the selection of calibrated models during training. The authors observed that the KL divergence between \(Z\) and a uniform random variable is equivalent to the negative differential entropy of \(Z\), denoted as \(H(Z)\). To approximate \(H(Z)\), they employed sample-spacing entropy estimation (Vasicek, 1976), resulting in the following regularizer: \[\mathcal{R}_{\text{QR}}(\theta;\mathcal{D}) \tag{10}\] \[=\frac{1}{N-k}\sum_{i=1}^{N-k}\log\left[\frac{N+1}{k}(Z_{(i+k)}-Z_{(i)})\right] \tag{11}\] \[\approx H(Z), \tag{12}\] where \(k\) is a hyperparameter satisfying \(1\leq k\leq N\), and \(Z_{(i)}\) represents the \(i\)th order statistic of \(Z\). To ensure differentiability during optimization, the authors employed a differentiable relaxation technique called NeuralSort (Grover et al., 2019), as sorting is a non-differentiable operation. #### 5.3.2 Truncation-based Calibration The regularization approach introduced by Chung et al. (2021), which we denote Trunc, involves truncating the predictive distribution based on the current level of calibration. Given a quantile model \(Q_{\theta}\), let \(\hat{F}_{Z}(\alpha)=\frac{1}{N}\sum_{i=1}^{N}\mathbbm{1}(Y_{i}\leq Q_{\theta}(\alpha\mid X_{i}))\) be the estimated PIT CDF evaluated at \(\alpha\) and \(\rho(x,y)=(y-x)\mathbbm{1}(x<y)\). The regularization objective for level \(\alpha\) is defined as follows: \[\mathcal{R}_{\text{Trunc}}(\theta;\mathcal{D},\alpha) \tag{13}\] \[=\begin{cases}\frac{1}{N}\sum_{i=1}^{N}\rho(Q_{\theta}(\alpha\mid X_{i}),Y_{i})&\text{if }\hat{F}_{Z}(\alpha)<\alpha\\ \frac{1}{N}\sum_{i=1}^{N}\rho(Y_{i},Q_{\theta}(\alpha\mid X_{i}))&\text{otherwise}\end{cases} \tag{14}\] This regularization objective adjusts \(\hat{F}_{Z}(\alpha)\) to match \(\alpha\) by increasing it when \(\hat{F}_{Z}(\alpha)<\alpha\), and vice versa. The final regularization objective is computed by averaging \(\mathcal{R}_{\text{Trunc}}(\theta;\mathcal{D},\alpha)\) over multiple quantile levels \(\{\,\alpha_{j}\,\}_{j=1}^{M}\): \[\mathcal{R}_{\text{Trunc}}(\theta;\mathcal{D})=\frac{1}{M}\sum_{j=1}^{M}\mathcal{R}_{\text{Trunc}}(\theta;\mathcal{D},\alpha_{j}). \tag{15}\] It is worth noting that Chung et al. (2021) combine the previous regularization objective with a sharpness objective that penalizes the path between the quantile predictions, given by \(\frac{1}{M}\sum_{j=1}^{M}\frac{1}{N}\sum_{i=1}^{N}|Q_{\theta}(\alpha_{j}\mid X_{i})-Q_{\theta}(1-\alpha_{j}\mid X_{i})|\). Instead, we combine it with a strictly proper scoring rule. ### New Regularization-based Calibration Methods Building upon the quantile regularization method discussed in Section 5.3.1, we propose two new regularization objectives which compute a differentiable PCE\({}_{p}\) using alternative statistical distances. The first approach, named PCE-KDE, leverages the differentiable calibration map \(\phi^{\text{KDE}}\) (7) based on KDE. Given a set of quantile levels \(\{\,\alpha_{j}\,\}_{j=1}^{M}\), the regularization objective is given by \[\mathcal{R}_{\text{PCE-KDE}}(\theta;\mathcal{D})=\frac{1}{M}\sum_{j=1}^{M}\left|\alpha_{j}-\phi^{\text{KDE}}(\alpha_{j};\{\,Z_{i}\,\}_{i=1}^{N})\right|^{p}, \tag{16}\] where \(p>0\).
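A minimal sketch of this regularizer, assuming PyTorch and PIT values computed through a differentiable predictive CDF (the grid size, \(\tau\), and all names are illustrative):

```python
import torch

def pce_kde_regularizer(z, M: int = 64, tau: float = 100.0, p: int = 1):
    """Differentiable PCE via the sigmoid-smoothed empirical CDF, cf. (16).

    z: PIT values F_theta(y_i | x_i) computed with gradients enabled.
    """
    alphas = torch.linspace(0.0, 1.0, M + 2, device=z.device)[1:-1]
    # Relaxed indicator 1(z_i <= alpha) ~ sigmoid(tau * (alpha - z_i)).
    F_z = torch.sigmoid(tau * (alphas.unsqueeze(0) - z.unsqueeze(1))).mean(dim=0)
    return (alphas - F_z).abs().pow(p).mean()
```

During training, this term is scaled by a factor \(\lambda\) and added to the scoring-rule loss, as in Algorithm 3.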
Note that \(\mathcal{R}_{\text{PCE-KDE}}\) reduces to PCE\({}_{p}\) in (2) when \(\tau\) in (7) goes to \(\infty\). The second approach considers distances of the form \(\int_{0}^{1}|Q_{Z}(\alpha)-Q_{U}(\alpha)|^{p}d\alpha\), where \(Q_{Z}\) and \(Q_{U}\) denote the quantile functions of the true and uniform distributions, respectively. When \(p=1\), this distance reduces to the 1-Wasserstein distance, equivalent to \(\int_{0}^{1}|F_{Z}(\alpha)-F_{U}(\alpha)|d\alpha\), which aligns with PCE (see Proposition 1 in Appendix A.1). By exploiting the fact that \(\mathbb{E}\left[F_{Z}(Z_{(i)})\right]=\nicefrac{{i}}{{N+1}}\), we approximate \(Q_{Z}(\nicefrac{{i}}{{N+1}})\) using the \(i\)-th order statistic \(Z_{(i)}\). The regularization objective is given by \[\mathcal{R}_{\text{PCE-Sort}}(\theta;\mathcal{D})=\frac{1}{N}\sum_{i=1}^{N}\left|Z_{(i)}-\frac{i}{N+1}\right|^{p}, \tag{17}\] where \(p>0\). Differentiable relaxations to sorting, such as those proposed by Blondel et al. (2020) and Cuturi et al. (2019), can be employed to obtain the order statistics. ## 6 A Comparative Study of Probabilistic Calibration Methods In continuation of the empirical study described in Section 4, we now proceed to evaluate the performance of the probabilistic calibration methods outlined in the previous section. Specifically, we apply eight distinct calibration methods to the three neural regression models introduced in Section 4. These methods are evenly divided into two categories: post-hoc methods and regularization-based methods. To assess the effectiveness of these calibration methods, we employ four different evaluation metrics. The evaluation is conducted on a set of 57 datasets, utilizing the same experimental setup detailed in Section 4. To ensure a fair and consistent comparison, all the methods have been implemented within a unified codebase3. Footnote 3: The code can be accessed at the following GitHub repository: [https://github.com/Vekteur/probabilistic-calibration-study](https://github.com/Vekteur/probabilistic-calibration-study) ### Experimental Setup **Base probabilistic models and calibration methods**. We consider the three probabilistic models presented in Section 4, namely MIX-NLL, MIX-CRPS, and SQR-CRPS. For the MIX models, when applying quantile recalibration, we transform the CDF using the empirical CDF estimator (Rec-EMP), the linear estimator (Rec-LIN), or the KDE estimator (Rec-KDE). For SQR-CRPS, we transform multiple quantiles using conformalized quantile regression (CQR). For the three models, we consider the four regularization objectives presented in Sections 5.3 and 5.4 (with \(p=1\)), namely \(\mathcal{R}_{\text{PCE-KDE}}\) (PCE-KDE), \(\mathcal{R}_{\text{PCE-Sort}}\) (PCE-Sort), \(\mathcal{R}_{\text{QR}}\) (QR), and \(\mathcal{R}_{\text{Trunc}}\) (Trunc). PCE-Sort is only shown in the Appendix because it performs similarly to PCE-KDE. **Metrics**. We measure the accuracy of the probabilistic predictions using NLL and CRPS. For the SQR model, we estimate CRPS by averaging the quantile score at 64 equidistant quantile levels. Probabilistic calibration is measured using PCE, defined in (2). Finally, we measure sharpness using the mean standard deviation of the predictive distributions, denoted by STD. **Hyperparameters**. In our experiments, MIX-NLL and MIX-CRPS output a mixture of 3 Gaussians, and SQR-CRPS outputs 64 quantiles. We justify the choice of these hyperparameters in Appendix C.
The hyperparameter \(\tau\) of Rec-KDE and PCE-KDE is fixed at 100, which was found to perform well empirically. For regularization methods, an important hyperparameter is the regularization factor \(\lambda\). As previously observed in classification (Karandikar et al., 2021), we found that higher values of \(\lambda\) tend to improve calibration but worsen NLL, CRPS, and STD. Karandikar et al. (2021) proposed limiting the loss in accuracy to a maximum of 1%. We adopt a similar strategy by selecting the \(\lambda\) which minimizes PCE with a maximum increase in CRPS of 10% on the validation set. For each dataset, we select \(\lambda\) in the set \(\{0,0.01,0.05,0.2,1,5\}\), which corresponds to various degrees of calibration regularization. **Comparison of multiple models over many datasets**. As in Karandikar et al. (2021), and since NLL, CRPS and STD have different scales across datasets, we report Cohen's d, which is a standardized effect size comparing the mean of one method (over 5 runs, in our case) against a baseline. Values of \(-0.8\) and \(-2\) are considered large and huge, respectively. Due to the heterogeneity of the datasets that we consider, the performance of our models can vary widely across datasets. To visualize the results, we show the distribution of Cohen's d using letter-value plots (Hofmann et al., 2011), which indicate the quantiles at levels \(\nicefrac{{1}}{{8}}\), \(\nicefrac{{1}}{{4}}\), \(\nicefrac{{1}}{{2}}\), \(\nicefrac{{3}}{{4}}\) and \(\nicefrac{{7}}{{8}}\), as well as outliers. A median value below zero indicates that the model improved the metric on more than half the datasets. In order to assess whether significant differences exist between different methods, we follow the recommendations of Ismail Fawaz et al. (2019), which are based on Demsar (2006). First, we test for a significant difference among model performances using the Friedman test (Friedman, 1940). Then, we use the pairwise post-hoc analysis recommended by Benavoli et al. (2016) using a Wilcoxon signed-rank test (Wilcoxon, 1945) with Holm's alpha correction (Holm, 1979). The results of this procedure are shown using a critical difference diagram (Demsar, 2006). The lower the rank (further to the right), the better the performance of a model. A thick horizontal line shows a group of models whose performance is not significantly different, at a significance level of 0.05. ### Results Figure 3 shows the letter-value plots for the Cohen's d of PCE (top panel) as well as the associated critical diagram (bottom panel), for all methods and datasets. The reference model is MIX-NLL. The results with other models as reference are available in Appendix B.1. Blue, green, and red colors are used for the post-hoc methods, the regularization-based methods, and the base models, respectively. The same information is given in Figures 4 and 5 for the CRPS and the NLL, respectively. **Comparison of PCE**. As expected, Figure 3 shows that the PCE of calibration methods is improved compared to the base models. Furthermore, independently of the base model, we can see that post-hoc methods achieve significantly better PCE than regularization methods. When comparing PCE-KDE with QR, we can see that there is a significantly larger decrease in PCE with the MIX-CRPS base model compared to MIX-NLL. Finally, both PCE-KDE and Trunc decrease PCE for SQR-CRPS, without a significant difference between them.
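For reference, the effect size used in these comparisons can be computed as follows; a minimal sketch assuming NumPy and the common pooled-standard-deviation convention (the paper's exact estimator may differ in detail):

```python
import numpy as np

def cohens_d(method_runs, baseline_runs):
    """Standardized effect size of a method against a baseline over runs.

    Negative values mean the method lowers (improves) the metric.
    """
    m, b = np.asarray(method_runs, float), np.asarray(baseline_runs, float)
    pooled = ((len(m) - 1) * m.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) \
             / (len(m) + len(b) - 2)
    return (m.mean() - b.mean()) / np.sqrt(pooled)
```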
Figure 3: Comparison of PCE with multiple base losses and calibration methods.
Figure 4: Comparison of CRPS with multiple base losses and calibration methods.
**Comparison of CRPS**. While post-hoc methods outperform regularization methods in terms of PCE, Figure 4 shows they have a higher CRPS (except for the SQR base model). This can be explained by the fact that regularization methods prevent the CRPS from increasing excessively due to the selection criterion for \(\lambda\). **Comparison of NLL**. Figure 5 shows the importance of the calibration map. In fact, quantile recalibration with a linear map significantly increases the NLL, while smooth interpolation decreases PCE without a large increase in NLL. Note that we only consider MIX models since we cannot compute the NLL for SQR. **On the choice of a calibration method**. If probabilistic calibration is critical to the application, our experiments suggest that post-hoc methods such as quantile recalibration and conformal prediction should be preferred. However, when we also want to control the CRPS or the NLL, regularization methods can offer a better trade-off in terms of calibration and sharpness. In fact, as shown in Figure 6 in Appendix B.1, when the base model is MIX-NLL, all regularization methods provide a significant improvement in probabilistic calibration without deteriorating the CRPS, NLL or STD. For the MIX-CRPS model, Figure 7 shows that QR has limited impact on CRPS and NLL, while providing better calibration. For the SQR-CRPS base model, Figure 8 shows that the SQR-CRPS + CQR conformal method significantly outperforms the Trunc and PCE-KDE regularization methods both in terms of PCE and CRPS. Overall, Appendix B.1 suggests that MIX-NLL + PCE-KDE, MIX-CRPS + QR and SQR-CRPS + CQR are good choices for practitioners aiming to improve PCE without significantly impacting other aspects of the conditional distribution. Finally, since both regularization and post-hoc methods are able to improve calibration, we investigate whether a combination of these two methods can lead to better performance. Figure 9 in Appendix B.2 shows that such a combination does not significantly improve probabilistic calibration, while increasing CRPS and NLL. This indicates that practitioners should exercise caution when applying regularization to a model that is already well-calibrated. ### Link between Quantile Recalibration and Conformal Prediction Conformal prediction methods are well-known for their finite-sample coverage guarantee. Interestingly, a specific implementation of quantile recalibration can be considered a special case of conformal prediction. This implies that quantile recalibration can also provide a finite-sample coverage guarantee. This observation could potentially explain why both methods, conformal prediction and quantile recalibration, are effective in improving probabilistic calibration. **Theorem 1**.: _Quantile recalibration is equivalent to Distributional Conformal Prediction (DCP) of left intervals at each coverage level \(\alpha\in[0,1]\). The equivalence is obtained_
_when the calibration map is estimated by a slightly different estimator than the conventional one in (6), namely \(\phi_{\text{DCP}}(\alpha)=\frac{1}{N^{\prime}+1}\sum_{i=1}^{N^{\prime}}\mathbbm{1}\left(Z_{i}^{\prime}\leq\alpha\right)\)._ Proof.: Given a predictive distribution \(F_{\theta}\) learned from a training dataset \(\mathcal{D}=\{\left(X_{i},Y_{i}\right)\}_{i=1}^{N}\) where \(\left(X_{i},Y_{i}\right)\stackrel{{\text{i.i.d.}}}{{\sim}}P_{X,Y}\), let \(Z_{i}^{\prime}=F_{\theta}(Y_{i}^{\prime}\mid X_{i}^{\prime})\) represent the PIT values computed on a separate calibration dataset \(\mathcal{D}^{\prime}=\{\left(X_{i}^{\prime},Y_{i}^{\prime}\right)\}_{i=1}^{N^{\prime}}\), where \(\left(X_{i}^{\prime},Y_{i}^{\prime}\right)\stackrel{{\text{i.i.d.}}}{{\sim}}P_{X,Y}\). In the DCP approach, as outlined in Algorithm 2, the conformal scores are given by the PIT values \(Z_{i}^{\prime}\). DCP first computes the \(\alpha\) empirical quantile of the scores as \(\hat{q}=Z_{(\lceil(N^{\prime}+1)\alpha\rceil)}^{\prime}\), where \(Z_{(k)}^{\prime}\) represents the \(k\)th smallest value among \(\{\left.Z_{1}^{\prime},\ldots,Z_{N^{\prime}}^{\prime},+\infty\right.\}\). Then, the conformalized quantile is computed as \(Q_{\theta}^{\prime}(\alpha\mid X)=Q_{\theta}(\hat{q}\mid X)\), which corresponds to conformal prediction with coverage \(\alpha\) for the left interval \((-\infty,Q_{\theta}^{\prime}(\alpha\mid X)]\). Let us consider quantile recalibration with the calibration map \(\phi\) in Algorithm 1 given by \(\phi_{\text{DCP}}(\alpha)=\frac{1}{N^{\prime}+1}\sum_{i=1}^{N^{\prime}}\mathbbm{1}(Z_{i}^{\prime}\leq\alpha)\). It computes a recalibrated CDF \(F_{\theta}^{\prime}\) by composing the original CDF \(F_{\theta}\) with \(\phi_{\text{DCP}}\), yielding \(F_{\theta}^{\prime}(y\mid X)=\phi_{\text{DCP}}(F_{\theta}(y\mid X))\). We observe that \(\phi_{\text{DCP}}\) is the CDF of a discrete random variable, with \(\phi_{\text{DCP}}^{-1}(\alpha)=Z_{(\lceil(N^{\prime}+1)\alpha\rceil)}^{\prime}\) representing its empirical quantile function. Furthermore, the composition \(\phi_{\text{DCP}}\circ F_{\theta}(\cdot\mid X)\) acts as the inverse function of \(Q_{\theta}(\cdot\mid X)\circ\phi_{\text{DCP}}^{-1}\). As a result, both the DCP approach and quantile recalibration yield QFs and CDFs that correspond to the same underlying distribution. Quantile recalibration with other recalibration maps (e.g., \(\phi^{\text{EMP}}\), \(\phi^{\text{LIN}}\), or \(\phi^{\text{KDE}}\)) would correspond to DCP where the empirical quantile \(\hat{q}\) is selected using other strategies that do not provide the exact conformal guarantee (8). ## 7 Conclusion The observation that neural network classifiers tend to be miscalibrated (Guo et al., 2017) has prompted the development of various approaches for calibrating these models. In this paper, we present the largest empirical study conducted to date on the probabilistic calibration of neural regression models. Our study provides valuable insights into their performance and the selection of calibration methods. Notably, we introduce a novel differentiable calibration map based on kernel density estimation for quantile recalibration, as well as two novel regularization objectives derived from the PCE. Our study reveals that regularization methods can provide a favorable tradeoff between calibration and sharpness. However, post-hoc methods demonstrate superior performance in terms of PCE.
We attribute this finding to the finite-sample coverage guarantee offered by conformal prediction and demonstrate that quantile recalibration can be viewed as a specific case of conformal prediction. Future investigations may extend the study of probabilistic calibration to other models, such as tree-based models, and explore alternative notions of calibration (Gneiting and Resin, 2021). Notably, distribution calibration represents a promising direction, as it has inspired the development of calibration methods (Song et al., 2019; Kuleshov and Deshpande, 2022). ## Acknowledgement This work was supported by the Fonds de la Recherche Scientifique - FNRS under Grants T.0011.21 and J.0011.20.
2309.02195
Sparse Function-space Representation of Neural Networks
Deep neural networks (NNs) are known to lack uncertainty estimates and struggle to incorporate new data. We present a method that mitigates these issues by converting NNs from weight space to function space, via a dual parameterization. Importantly, the dual parameterization enables us to formulate a sparse representation that captures information from the entire data set. This offers a compact and principled way of capturing uncertainty and enables us to incorporate new data without retraining whilst retaining predictive performance. We provide proof-of-concept demonstrations with the proposed approach for quantifying uncertainty in supervised learning on UCI benchmark tasks.
Aidan Scannell, Riccardo Mereu, Paul Chang, Ella Tamir, Joni Pajarinen, Arno Solin
2023-09-05T12:56:35Z
http://arxiv.org/abs/2309.02195v1
# Sparse Function-space Representation of Neural Networks ###### Abstract Deep neural networks (NNs) are known to lack uncertainty estimates and struggle to incorporate new data. We present a method that mitigates these issues by converting NNs from weight space to function space, via a dual parameterization. Importantly, the dual parameterization enables us to formulate a sparse representation that captures information from the entire data set. This offers a compact and principled way of capturing uncertainty and enables us to incorporate new data without retraining whilst maintaining predictive performance. We provide proof-of-concept demonstrations with the proposed approach for quantifying uncertainty in supervised learning on UCI benchmark tasks. Machine Learning, ICML 2023 Workshop on Duality for Modern Machine Learning, Honolulu, Hawaii, USA. Copyright 2023 by the author(s). ## 1 Introduction Deep learning (Goodfellow et al., 2016) has become the cornerstone of contemporary artificial intelligence, proving remarkably effective in tackling supervised and unsupervised learning tasks in the _large data_, _offline_, and _gradient-based training_ regime. Despite its success, gradient-based learning techniques exhibit limitations. Firstly, how can we efficiently quantify uncertainty without resorting to expensive and hard-to-interpret sampling in the model's weight space? Secondly, how can we update the weights of an already trained model with new batches of data without compromising the performance on past data? These questions become central in sequential learning settings, such as continual learning (CL, Parisi et al., 2019; De Lange et al., 2021), Bayesian optimization (BO, Garnett, 2023) and reinforcement learning (RL, Sutton and Barto, 2018). Recent techniques (_e.g._, Ritter et al., 2018; Khan et al., 2019; Daxberger et al., 2021; Fortuin et al., 2021; Immer et al., 2021a) apply a Laplace generalized Gauss-Newton (GGN) approximation to convert trained NNs into Bayesian neural networks (BNNs) that can provide uncertainty without dedicating additional resources to training (Foong et al., 2019). Furthermore, the resultant weight-space posterior can be converted to function space as shown in Khan et al. (2019); Immer et al. (2021b). The function-space representation allows for a principled mathematical approach for analyzing the model's behaviour (Cho and Saul, 2009; Meronen et al., 2020), performing probabilistic inference (Khan et al., 2019), and quantifying uncertainty (Foong et al., 2019). These methods rely on linearizing the NN and the resultant neural tangent kernel (NTK, Jacot et al., 2018). The NN is characterized in function space by its first two moment functions, a mean function and a covariance function (or kernel)--defining a Gaussian process (GP, Rasmussen and Williams, 2006). GPs provide a widely used probabilistic toolkit with principled uncertainty estimates. Given an approximate inference technique, we demonstrate that NNs emit 'dual' parameters which are the building blocks of a GP (Csato and Opper, 2002; Adam et al., 2021; Chang et al., 2023). In contrast to previous work that utilizes subsets of training data (Immer et al., 2021a), our parameterization captures information from _all_ data points in a sparse representation, essential for scaling to deep learning data sets.
Importantly, the dual parameterization can be used to _(i)_ sparsify the GP without requiring further optimization (_e.g._, variational inference) whilst capturing information from all data points, and _(ii)_ incorporate new data without retraining, by conditioning on the new data in a computationally efficient manner. The dual parameterization establishes a connection between NNs, GPs, and sparse approximations similar to sparse variational GPs (Titsias, 2009). We refer to our method as Sparse Function-space Representation (sfr)--a sparse GP derived from a trained NN; see Fig. 1 for an example.
Figure 1: **Regression example on an MLP with two hidden layers.** Left: Predictions from the trained neural network. Right: Our approach (sfr) equips trained NNs with uncertainty estimates.
## 2 Background We consider supervised learning with inputs \(\mathbf{x}_{i}\in\mathbb{R}^{D}\) and outputs \(\mathbf{y}_{i}\in\mathbb{R}^{C}\) (_e.g._, regression) or \(\mathbf{y}_{i}\in\{0,1\}^{C}\) (_e.g._, classification), giving a data set \(\mathcal{D}=\{(\mathbf{x}_{i},\mathbf{y}_{i})\}_{i=1}^{N}\). We introduce a NN \(f_{\mathbf{w}}:\mathbb{R}^{D}\to\mathbb{R}^{C}\) with weights \(\mathbf{w}\in\mathbb{R}^{P}\) and use a likelihood function \(p(\mathbf{y}_{i}\,|\,f_{\mathbf{w}}(\mathbf{x}_{i}))\) to link the function values to the output \(\mathbf{y}_{i}\) (_e.g._, categorical for classification). For notational conciseness, we stick to scalar outputs \(y_{i}\). We denote the set of all inputs as \(\mathbf{X}=\{\mathbf{x}_{i}\}_{i=1}^{N}\) and the set of all outputs as \(\mathbf{y}=\{y_{i}\}_{i=1}^{N}\). **BNNs** In Bayesian deep learning, we place a prior over the weights \(p(\mathbf{w})\) and aim to calculate the posterior over the weights given the data \(p(\mathbf{w}\,|\,\mathcal{D})\). Given the weight posterior \(p(\mathbf{w}\,|\,\mathcal{D})\), we can make probabilistic predictions with \[p_{\text{BNN}}(y_{i}\,|\,\mathbf{x}_{i},\mathcal{D})=\mathbb{E}_{p(\mathbf{w}\,|\,\mathcal{D})}\left[p(y_{i}\,|\,f_{\mathbf{w}}(\mathbf{x}_{i}))\right]. \tag{1}\] The posterior \(p(\mathbf{w}\,|\,\mathcal{D})\propto p(\mathbf{y}\,|\,f_{\mathbf{w}}(\mathbf{X}))\,p(\mathbf{w})\) is generally not available in closed form, so we resort to approximations. **MAP** It is common to train NN weights \(\mathbf{w}\) to minimize the (regularized) empirical risk, \[\mathbf{w}^{*}=\arg\min_{\mathbf{w}}\mathcal{L}(\mathcal{D},\mathbf{w})=\arg\min_{\mathbf{w}}\sum_{i=1}^{N}\underbrace{\ell(f_{\mathbf{w}}(\mathbf{x}_{i}),y_{i})}_{-\log p(y_{i}\,|\,f_{\mathbf{w}}(\mathbf{x}_{i}))}+\underbrace{\delta\mathcal{R}(\mathbf{w})}_{-\log p(\mathbf{w})}. \tag{2}\] This objective corresponds to the negative log-joint distribution \(\mathcal{L}(\mathcal{D},\mathbf{w})=-\log p(\mathcal{D},\mathbf{w})\), as the loss can be interpreted as a negative log-likelihood \(\ell(f_{\mathbf{w}}(\mathbf{x}_{i}),y_{i})=-\log p(y_{i}\,|\,f_{\mathbf{w}}(\mathbf{x}_{i}))\) and the regularizer corresponds to a negative log prior over the weights \(\delta\mathcal{R}(\mathbf{w})=-\log p(\mathbf{w})\). For example, a weight decay regularizer \(\mathcal{R}(\mathbf{w})=\frac{1}{2}\|\mathbf{w}\|_{2}^{2}\) corresponds to a Gaussian prior over the weights \(p(\mathbf{w})=\mathcal{N}(\mathbf{0},\delta^{-1}\mathbf{I})\), with prior precision \(\delta\). As such, we can view Eq. (2) as the maximum _a posteriori_ (MAP) solution. **Laplace approximation** The Laplace approximation (MacKay, 1992; Daxberger et al., 2021)
builds upon this and approximates the weight posterior \(p(\mathbf{w}\,|\,\mathcal{D})\approx q(\mathbf{w})=\mathcal{N}(\mathbf{w}^{*},\mathbf{\Sigma})\) around the MAP weights (\(\mathbf{w}^{*}\)) by setting the covariance to the negative inverse Hessian of the log-posterior, \[\mathbf{\Sigma}=-\left[\nabla_{\mathbf{w}\mathbf{w}}^{2}\log p(\mathbf{w}\,|\,\mathcal{D})|_{\mathbf{w}=\mathbf{w}^{*}}\right]^{-1}. \tag{3}\] Computing Eq. (3) requires calculating the Hessian of the log-likelihood \(\nabla_{\mathbf{w}\mathbf{w}}^{2}\log p(\mathbf{y}\,|\,f_{\mathbf{w}}(\mathbf{X}))\) from Eq. (2). As highlighted in Immer et al. (2021), computing this Hessian is often infeasible and in practice it is common to adopt the generalized Gauss-Newton (GGN) approximation, \[\nabla_{\mathbf{w}\mathbf{w}}^{2}\log p(\mathbf{y}\,|\,f_{\mathbf{w}}(\mathbf{X}))\approx\mathcal{J}_{\mathbf{w}}(\mathbf{X})^{\top}\nabla_{\mathbf{f}\mathbf{f}}^{2}\log p(\mathbf{y}\,|\,\mathbf{f})\,\mathcal{J}_{\mathbf{w}}(\mathbf{X}),\] where the Jacobian is given by \(\mathcal{J}_{\mathbf{w}}(\mathbf{x})\coloneqq\left[\nabla_{\mathbf{w}}f_{\mathbf{w}}(\mathbf{x})\right]^{\top}\in\mathbb{R}^{C\times P}\). Immer et al. (2021) highlighted that the GGN approximation corresponds to a local linearization of the NN, \(f_{\mathbf{w}}^{\text{lin}}(\mathbf{x})=f_{\mathbf{w}^{*}}(\mathbf{x})+\mathcal{J}_{\mathbf{w}^{*}}(\mathbf{x})(\mathbf{w}-\mathbf{w}^{*})\). This suggests that predictions should be made with a generalized linear model (GLM), given by \[p_{\text{GLM}}(y_{i}\,|\,\mathbf{x}_{i},\mathcal{D})=\mathbb{E}_{q(\mathbf{w})}\left[p(y_{i}\,|\,f_{\mathbf{w}}^{\text{lin}}(\mathbf{x}_{i}))\right]. \tag{4}\] **Gaussian processes** As Gaussian distributions remain tractable under linear transformations, we can convert the linear model from weight space to function space (see Ch. 2.1 in Rasmussen and Williams, 2006). As such (and as shown in Immer et al. (2021)), the Bayesian GLM in Eq. (4) has an equivalent GP formulation, \[p_{\text{GP}}(y_{i}\,|\,\mathbf{x}_{i},\mathcal{D})=\mathbb{E}_{q(f_{i})}\left[p(y_{i}\,|\,f_{i})\right], \tag{5}\] where \(q(f_{i})=\mathcal{N}\left(f_{\mathbf{w}^{*}}(\mathbf{x}_{i}),\sigma_{i}^{2}\right)\) is the GP's predictive posterior with variance \[\sigma_{i}^{2}=k_{ii}-\mathbf{k}_{\mathbf{x}i}^{\top}(\mathbf{K}_{\mathbf{x}\mathbf{x}}+\mathbf{\Lambda}^{-1})^{-1}\mathbf{k}_{\mathbf{x}i}, \tag{6}\] where the kernel \(\kappa(\mathbf{x},\mathbf{x}^{\prime})=\frac{1}{\delta}\mathcal{J}_{\mathbf{w}^{*}}(\mathbf{x})\,\mathcal{J}_{\mathbf{w}^{*}}^{\top}(\mathbf{x}^{\prime})\) is the Neural Tangent Kernel (NTK, Jacot et al., 2018) and \(f_{i}=f_{\mathbf{w}}(\mathbf{x}_{i})\) is the function output at \(\mathbf{x}_{i}\). The \(ij^{\text{th}}\) entry of the matrix \(\mathbf{K}_{\mathbf{x}\mathbf{x}}\in\mathbb{R}^{N\times N}\) is \(\kappa(\mathbf{x}_{i},\mathbf{x}_{j})\), \(\mathbf{k}_{\mathbf{x}i}\) is a vector whose \(j^{\text{th}}\) element is \(\kappa(\mathbf{x}_{i},\mathbf{x}_{j})\), \(k_{ii}=\kappa(\mathbf{x}_{i},\mathbf{x}_{i})\), and \(\mathbf{\Lambda}=-\nabla_{\mathbf{f}\mathbf{f}}^{2}\log p(\mathbf{y}\,|\,\mathbf{f})\) can be interpreted as per-input noise.
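A minimal sketch of the GP predictive variance in Eq. (6), assuming the Jacobians have been precomputed (e.g., with automatic differentiation) and using NumPy; all names are illustrative:

```python
import numpy as np

def glm_predictive_variance(J_train, J_test, lam, delta):
    """Predictive variance of the linearized-NN GP, following Eq. (6).

    J_train: (N, P) Jacobians of the NN output w.r.t. the weights at the data.
    J_test:  (T, P) Jacobians at the query inputs.
    lam:     (N,) per-input noise terms Lambda (negative log-lik. Hessians).
    delta:   prior precision of the Gaussian weight prior.
    """
    K_xx = J_train @ J_train.T / delta              # NTK Gram matrix
    K_tx = J_test @ J_train.T / delta               # cross-covariances k_xi
    k_tt = np.einsum("tp,tp->t", J_test, J_test) / delta
    A = K_xx + np.diag(1.0 / lam)                   # K_xx + Lambda^{-1}
    W = np.linalg.solve(A, K_tx.T)                  # A^{-1} k_xi per query
    return k_tt - np.einsum("tn,nt->t", K_tx, W)
```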
Sparse GPs reduce the computational complexity by representing the GP as a low-rank approximation at a set of inducing inputs \(\mathbf{Z}=\{\mathbf{z}_{j}\}_{j=1}^{M}\in\mathbb{R}^{M\times D}\) with corresponding inducing variables \(\mathbf{u}=f(\mathbf{Z})\) (see Quinonero-Candela and Rasmussen, 2005, for an early overview). The approach by Titsias (2009) (also used in the DTC approximation) defines the marginal predictive distribution as \(q_{\mathbf{u}}(f_{i})=\int p(f_{i}\,|\,\mathbf{u})\,q(\mathbf{u})\,\mathrm{d}\mathbf{u}\), where the variational distribution is parameterized as \(q(\mathbf{u})=\mathcal{N}\left(\mathbf{u}\,|\,\mathbf{m},\mathbf{S}\right)\) (as in Titsias (2009); Hensman et al. (2013)). Assuming a zero mean function, the sparse GP predictive posterior is \[\mathbb{E}_{q_{\mathbf{u}}(f_{i})}[f_{i}]=\mathbf{k}_{\mathbf{z}i}^{\top}\mathbf{K}_{\mathbf{z}\mathbf{z}}^{-1}\mathbf{m}, \tag{7a}\] \[\mathrm{Var}_{q_{\mathbf{u}}(f_{i})}[f_{i}]=k_{ii}-\mathbf{k}_{\mathbf{z}i}^{\top}(\mathbf{K}_{\mathbf{z}\mathbf{z}}^{-1}-\mathbf{K}_{\mathbf{z}\mathbf{z}}^{-1}\mathbf{S}\mathbf{K}_{\mathbf{z}\mathbf{z}}^{-1})\mathbf{k}_{\mathbf{z}i}, \tag{7b}\] where \(\mathbf{K}_{\mathbf{z}\mathbf{z}}\) and \(\mathbf{k}_{\mathbf{z}i}\) are defined analogously to \(\mathbf{K}_{\mathbf{x}\mathbf{x}}\) and \(\mathbf{k}_{\mathbf{x}i}\). The variational posterior parameters (\(\mathbf{m}\) and \(\mathbf{S}\)) are usually obtained via variational inference, which requires further optimization. Not only does the GP formulation from Immer et al. (2021) (shown in Eq. (5)) struggle to scale to large data sets, but it also cannot incorporate new data by conditioning on it. This is because its posterior mean is the NN \(f_{\mathbf{w}^{*}}(\mathbf{x})\). Our method overcomes both of these limitations via a dual parameterization. Importantly, our dual parameterization enables us to _(i)_ sparsify the GP without further optimization, and _(ii)_ incorporate new data without retraining. ## 3 Sfr: Sparse function-space representation of NNs In this section, we present our method, named sfr, which converts a trained NN into a GP. sfr is built upon a dual parameterization of the GP posterior. Early work by Csato & Opper (2002) showed that Expectation Propagation gives rise to a dual parameterization. More recent work by Adam et al. (2021); Chang et al. (2023) showed a different dual parameterization arising from the evidence lower bound for sparse variational GPs (Hensman et al., 2013; Titsias, 2009). The dual parameterization from Adam et al. (2021); Chang et al. (2023) consists of parameters \(\boldsymbol{\alpha}\) and \(\boldsymbol{\beta}\), which give rise to the predictive posterior, \[\mathbb{E}_{p(f_{i}\mid\mathbf{y})}[f_{i}]=\mathbf{k}_{\mathbf{x}i}^{\top}\boldsymbol{\alpha}, \tag{8a}\] \[\mathrm{Var}_{p(f_{i}\mid\mathbf{y})}[f_{i}]=k_{ii}-\mathbf{k}_{\mathbf{x}i}^{\top}(\mathbf{K}_{\mathbf{x}\mathbf{x}}+\text{diag}(\boldsymbol{\beta})^{-1})^{-1}\mathbf{k}_{\mathbf{x}i}. \tag{8b}\] Eq. (8) states that the first two moments of the resultant posterior process (which may not be a GP) can be parameterized via the dual parameters \(\boldsymbol{\alpha},\boldsymbol{\beta}\in\mathbb{R}^{N}\), defined as \[\alpha_{i}\coloneqq\mathbb{E}_{p(\mathbf{w}\mid\mathbf{y})}[\nabla_{f}\log p(y_{i}\,|\,f)|_{f=f_{i}}], \tag{9a}\] \[\beta_{i}\coloneqq-\mathbb{E}_{p(\mathbf{w}\mid\mathbf{y})}[\nabla_{ff}^{2}\log p(y_{i}\,|\,f)|_{f=f_{i}}]. \tag{9b}\]
Eq. (9) holds for generic likelihoods and involves no approximations since the expectation is under the exact posterior, given that the model can be expressed in a kernel formulation. Eq. (8) and Eq. (9) highlight that the approximate inference technique, usually viewed as a posterior approximation, can alternatively be interpreted as an approximation of the expectation of loss (likelihood) gradients. **Dual parameters from NN** Given that we use a Laplace approximation of the NN, we remove the expectation over the posterior (see Ch. 3.4.1 in Rasmussen & Williams, 2006, for derivation) and our dual parameters are given by \[\hat{\alpha}_{i}\coloneqq\nabla_{f}\log p(y_{i}\,|\,f)|_{f=f_{i}}, \tag{10a}\] \[\hat{\beta}_{i}\coloneqq-\nabla_{ff}^{2}\log p(y_{i}\,|\,f)|_{f=f_{i}}. \tag{10b}\] Substituting Eq. (10) into Eq. (8), we obtain our GP based on the trained NN. Making predictions with Eq. (8) costs \(O(N^{3})\), which limits its applicability to large data sets. **Sparsification via dual parameters** Given that we have computed the dual parameters derived from our NN predictions and a kernel function, we could essentially employ any sparsification method (Quinonero-Candela & Rasmussen, 2005). In this work, we opt for the approach suggested by Titsias (2009); Hensman et al. (2013), which is shown in Eq. (7). However, instead of parameterizing the variational distribution as \(q(\mathbf{u})=\mathcal{N}\left(\mathbf{u}\,|\,\mathbf{m},\mathbf{S}\right)\), we follow insights from Adam et al. (2021) that the posterior under this model bears a structure akin to Eq. (8). As such, we project the dual parameters onto the inducing points, giving sparse dual parameters. Using this sparse definition of the dual parameters, our sparse GP posterior is given by \[\mathbb{E}_{q_{\mathbf{u}}(f_{i})}[f_{i}]=\mathbf{k}_{\mathbf{z}i}^{\top}\mathbf{K}_{\mathbf{z}\mathbf{z}}^{-1}\boldsymbol{\alpha}_{\mathbf{u}}, \tag{11a}\] \[\mathrm{Var}_{q_{\mathbf{u}}(f_{i})}[f_{i}]=k_{ii}-\mathbf{k}_{\mathbf{z}i}^{\top}[\mathbf{K}_{\mathbf{z}\mathbf{z}}^{-1}-(\mathbf{K}_{\mathbf{z}\mathbf{z}}+\boldsymbol{B}_{\mathbf{u}})^{-1}]\mathbf{k}_{\mathbf{z}i}, \tag{11b}\] with sparse dual parameters, \[\boldsymbol{\alpha}_{\mathbf{u}}=\sum_{i=1}^{N}\mathbf{k}_{\mathbf{z}i}\,\hat{\alpha}_{i}\quad\text{and}\quad\boldsymbol{B}_{\mathbf{u}}=\sum_{i=1}^{N}\mathbf{k}_{\mathbf{z}i}\,\hat{\beta}_{i}\,\mathbf{k}_{\mathbf{z}i}^{\top}. \tag{12}\] Note that the sparse dual parameters are now a sum over _all data points_, with \(\boldsymbol{\alpha}_{\mathbf{u}}\in\mathbb{R}^{M}\) and \(\boldsymbol{B}_{\mathbf{u}}\in\mathbb{R}^{M\times M}\). Contrasting Eq. (11) and Eq. (8), we can see that the computational complexity went from \(\mathcal{O}(N^{3})\) to \(\mathcal{O}(M^{3})\), with \(M\ll N\). Crucially, given the structure of our probabilistic model, our sparse dual parameters Eq. (12) are a compact representation of the full model projected using the kernel. We highlight that sfr differs from the GP subset from Immer et al. (2021), which results in several advantages. First, sfr leverages a dual parameterization to construct a sparse GP, which captures information from the entire data set. In contrast, Immer et al. (2021) utilize a data subset which ignores information from the rest of the data set. Second, sfr makes predictions with the GP mean, whereas Immer et al. (2021) center predictions around NN predictions \(f_{\mathbf{w}^{*}}(\cdot)\). As such, sfr can incorporate new data without retraining whilst the GP subset cannot.
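The projection in Eq. (12) and the predictions in Eq. (11) reduce to a few matrix operations once the kernel and the per-point dual parameters are available; a minimal sketch assuming NumPy, with illustrative names:

```python
import numpy as np

def sparse_dual_parameters(K_zx, alpha_hat, beta_hat):
    """Project the per-point dual parameters onto the inducing points, Eq. (12).

    K_zx:      (M, N) kernel between inducing inputs and all training inputs.
    alpha_hat: (N,) log-likelihood gradients at f = f_i, Eq. (10a).
    beta_hat:  (N,) negative log-likelihood Hessians at f = f_i, Eq. (10b).
    """
    alpha_u = K_zx @ alpha_hat            # sum_i k_zi * alpha_i
    B_u = (K_zx * beta_hat) @ K_zx.T      # sum_i k_zi * beta_i * k_zi^T
    return alpha_u, B_u

def sfr_predict(k_zi, k_ii, K_zz, alpha_u, B_u):
    """Predictive mean and variance of the sparse dual GP, Eq. (11)."""
    mean = k_zi @ np.linalg.solve(K_zz, alpha_u)
    var = k_ii - k_zi @ (np.linalg.solve(K_zz, k_zi)
                         - np.linalg.solve(K_zz + B_u, k_zi))
    return mean, var
```

Because \(\boldsymbol{\alpha}_{\mathbf{u}}\) and \(\boldsymbol{B}_{\mathbf{u}}\) are sums over data points, a new batch can be incorporated by adding its contributions to the same \(M\)-dimensional quantities, which is the mechanism behind incorporating new data without retraining.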
To the best of our knowledge, we are the first to formulate a dual GP from a NN. Our dual parameterization has two main benefits: _(i)_ it enables us to construct a sparse GP without requiring us to perform further optimization, and _(ii)_ it enables us to incorporate new data without retraining by conditioning on new data (using Eq. (12)). ## 4 Experiments We evaluate sfr on eight UCI (Dua & Graff, 2017) binary and multi-class classification tasks. Following Immer et al. (2021), we used a two-layer MLP with width 50, tanh activation functions and a \(70\%\) (train) \(:15\%\) (validation) \(:15\%\) (test) data split. We trained the NN using Adam (Kingma & Ba, 2015) with a learning rate of \(10^{-4}\) and a batch size of \(128\). Training was stopped when the validation negative log predictive density (NLPD) had not decreased for 1000 steps. The checkpoint with the lowest NLPD was taken as the NN MAP. Each experiment was run for \(5\) seeds. We first compare sfr to the Laplace approximation when making predictions with _(i)_ the nonlinear NN (BNN) and _(ii)_ the generalised linear model (GLM) in Eq. (4). We use the Laplace PyTorch library (Daxberger et al., 2021) with the full Hessian. It is common practice to tune the prior precision \(\delta\) after training the NN. That is, to find a prior precision \(\delta_{\text{tune}}\) which has a better NLPD on the validation set than the \(\delta\) used to train the NN. Note that this leads to a different prior precision being used for training and inference. We highlight that technically this invalidates the Laplace/sfr methods, as the NN weights are no longer the MAP weights. Nevertheless, we report results when the prior precision \(\delta\) is tuned (Table 1, right) and when it is not tuned (Table 1, left). When tuning the prior precision \(\delta\), sfr matches the performance of the Laplace methods (BNN/GLM). Interestingly, sfr also performs well without tuning the prior precision \(\delta\), unlike the Laplace methods. As sfr captures information from all of the training data in the inducing points, we compare sfr to making GP predictions with a subset of the training data (GP subset). To make the comparison fair, we use sfr's inducing inputs \(\mathbf{z}\) as the subset. Table 1 shows that sfr is able to summarize the full data set more effectively than the GP subset method, as it maintains predictive performance whilst using fewer inducing points. This is further illustrated in Fig. 2, which shows that as the number of inducing points is lowered from \(M=100\%\) of \(N\) to \(M=1\%\) of \(N\), sfr is able to maintain a much better NLPD. This demonstrates that sfr captures information from the entire data set and as a result can use fewer inducing points than the GP subset. ## 5 Discussion and conclusion We introduced sfr, a novel approach for representing NNs in function space. It leverages a dual parameterization to obtain a low-rank (sparse) approximation that captures information from the entire data set. We showcased sfr's uncertainty quantification in UCI benchmark classification tasks (Table 1). Unlike the Laplace approximation, sfr achieves good predictive performance without tuning the prior precision \(\delta\). This is interesting and we believe it stays more true to Bayesian inference. In practical terms, sfr serves a role similar to a sparse GP.
However, unlike a conventional GP, it does not provide a straightforward method for specifying the prior. This limitation can be addressed indirectly: the architecture of the NN and the choice of activation functions can be used to implicitly specify the prior assumptions. It is important to note that sfr linearizes the NN around the MAP weights \(\mathbf{w}^{*}\), resulting in the function-space prior (and posterior) being a locally linear approximation of the NN. \begin{table} \begin{tabular}{l c c c c|c c c c|c c c c} \hline & & & & & \multicolumn{4}{c}{No \(\delta\) tuning} & \multicolumn{4}{c}{\(\delta\) tuning} \\ & \(N\) & \(D\) & \(C\) & NN MAP & bnn & GLM & GP subset & sfr & bnn & GLM & GP subset & sfr \\ \hline Australian & 690 & 14 & 2 & **0.35**\(\pm\)0.06 & 0.71\(\pm\)0.03 & 0.43\(\pm\)0.04 & **0.39**\(\pm\)0.03 & **0.35**\(\pm\)0.04 & **0.34**\(\pm\)0.05 & **0.35**\(\pm\)0.05 & 0.41\(\pm\)0.04 & **0.35**\(\pm\)0.04 \\ Breast cancer & 683 & 10 & 2 & **0.09**\(\pm\)0.05 & 0.72\(\pm\)0.06 & 0.47\(\pm\)0.09 & 0.23\(\pm\)0.02 & 0.18\(\pm\)0.02 & **0.09**\(\pm\)0.05 & **0.09**\(\pm\)0.05 & **0.13**\(\pm\)0.03 & **0.08**\(\pm\)0.04 \\ Dorts & 351 & 34 & 2 & **0.07**\(\pm\)0.04 & 2.35\(\pm\)0.01 & 3.11\(\pm\)0.15 & **1.10**\(\pm\)0.02 & **1.07**\(\pm\)0.03 & **0.07**\(\pm\)0.03 & **0.07**\(\pm\)0.04 & 0.16\(\pm\)0.04 & **0.08**\(\pm\)0.03 \\ Glass & 214 & 9 & 6 & **1.02**\(\pm\)0.41 & 1.82\(\pm\)0.06 & 1.77\(\pm\)0.07 & 1.14\(\pm\)0.07 & **0.93**\(\pm\)0.08 & **0.87**\(\pm\)0.28 & **0.82**\(\pm\)0.27 & 1.19\(\pm\)0.08 & **0.92**\(\pm\)0.11 \\ Ionosphere & 846 & 18 & 4 & **0.38**\(\pm\)0.05 & 0.70\(\pm\)0.03 & **0.37**\(\pm\)0.04 & **0.48**\(\pm\)0.03 & **0.39**\(\pm\)0.03 & **0.38**\(\pm\)0.05 & **0.37**\(\pm\)0.05 & 0.44\(\pm\)0.03 & **0.39**\(\pm\)0.04 \\ Satellite & 1000 & 21 & 3 & **0.24**\(\pm\)0.02 & 1.83\(\pm\)0.02 & 0.78\(\pm\)0.04 & **0.32**\(\pm\)0.01 & **0.26**\(\pm\)0.02 & **0.24**\(\pm\)0.02 & **0.24**\(\pm\)0.02 & **0.43**\(\pm\)0.05 & 0.31\(\pm\)0.03 \\ Vehicle & 1797 & 64 & 10 & **0.40**\(\pm\)0.06 & 1.40\(\pm\)0.02 & 1.55\(\pm\)0.01 & **0.88**\(\pm\)0.02 & **0.85**\(\pm\)0.04 & **0.38**\(\pm\)0.06 & **0.37**\(\pm\)0.04 & 0.61\(\pm\)0.06 & **0.43**\(\pm\)0.02 \\ Waveform & 6435 & 35 & 6 & 0.40\(\pm\)0.05 & 1.10\(\pm\)0.01 & 1.00\(\pm\)0.02 & 0.44\(\pm\)0.03 & 0.38\(\pm\)0.02 & **0.35**\(\pm\)0.04 & **0.36**\(\pm\)0.03 & **0.36**\(\pm\)0.03 & **0.32**\(\pm\)0.03 \\ \hline \end{tabular} \end{table} Table 1: Comparisons on UCI data with negative log predictive density (NLPD\(\pm\)std, lower better). sfr (with \(M=20\%\) of \(N\)) is on par with the Laplace approximation (BNN/GLM) and outperforms the GP subset when the prior precision (\(\delta\)) is tuned (right). Interestingly, when the prior precision (\(\delta\)) is not tuned (left), sfr outperforms all other methods. The GP subset uses the same inducing points as sfr. Figure 2: Comparison of convergence in terms of number of inducing points \(M\) in NLPD (mean\(\pm\)std over 5 seeds) on UCI classification tasks: sfr vs. GP subset. Our sfr converges fast for all cases, showing clear benefits of its ability to summarize all the data onto a sparse set of inducing points. The number of inducing points \(M\) is specified as a percentage of the number of training data \(N\).
2306.04361
Microdisk modulator-assisted optical nonlinear activation functions for photonic neural networks
On-chip implementation of optical nonlinear activation functions (NAFs) is essential for realizing large-scale photonic neural chips. To implement different neural processing and machine learning tasks with optimal performance, different NAFs are explored with the use of different devices. From the perspective of on-chip integration and reconfigurability of photonic neural networks (PNNs), it is highly preferred that a single compact device can fulfill multiple NAFs. Here, we propose and experimentally demonstrate a compact high-speed microdisk modulator that realizes multiple NAFs. The fabricated microdisk modulator has an add-drop configuration in which a lateral PN junction is incorporated for tuning. Based on the high-speed nonlinear electrical-optical (E-O) effect, multiple NAFs are realized by electrically controlling free-carrier injection. Thanks to the strong optical confinement of the disk cavity, the all-optical thermo-optic (TO) nonlinear effect can also be leveraged to realize four other NAFs, which are difficult to realize with the electrical-optical effect. Using the realized NAFs, a convolutional neural network (CNN) is studied to perform a handwritten digit classification task, and an accuracy as high as 98% is demonstrated, which verifies the effectiveness of using the high-speed microdisk modulator to realize NAFs. Thanks to its compact footprint and strong electrical-optical and all-optical effects, the microdisk modulator features multiple NAFs and could serve as a flexible nonlinear unit for large-scale PNNs.
Bin Wang, Weizhen Yu, Jinpeng Duan, Shuwen Yang, Zhenyu Zhao, Shuang Zheng, Weifeng Zhang
2023-06-07T11:47:00Z
http://arxiv.org/abs/2306.04361v1
# Microdisk modulator-assisted optical nonlinear activation functions for photonic neural networks ###### Abstract On-chip implementation of optical nonlinear activation functions (NAFs) is essential for realizing large-scale photonic neural chips. To implement different neural processing and machine learning tasks with optimal performance, different NAFs are explored with the use of different devices. From the perspective of on-chip integration and reconfigurability of photonic neural networks (PNNs), it is highly preferred that a single compact device can fulfill multiple NAFs. Here, we propose and experimentally demonstrate a compact high-speed microdisk modulator that realizes multiple NAFs. The fabricated microdisk modulator has an add-drop configuration in which a lateral PN junction is incorporated for tuning. Based on the high-speed nonlinear electrical-optical (E-O) effect, multiple NAFs are realized by electrically controlling free-carrier injection. Thanks to the strong optical confinement of the disk cavity, the all-optical thermo-optic (TO) nonlinear effect can also be leveraged to realize four other NAFs, which are difficult to realize with the E-O effect. Using one of the realized NAFs, a convolutional neural network (CNN) is studied on a handwritten digit classification task, and an accuracy as high as 98% is demonstrated, which verifies the effectiveness of the high-speed microdisk modulator for realizing NAFs. Thanks to its compact footprint and strong electrical-optical and all-optical effects, the microdisk modulator features multiple NAFs and could serve as a flexible nonlinear unit for large-scale PNNs. optical nonlinear activation function, photonic neural network, microdisk modulator. ## 1 Introduction Over the past few years, artificial neural networks (ANNs) have revolutionized many technical foundations of emerging applications, such as autonomous driving, natural language classification, and medical diagnosis [1-4]. In turn, the computational requirements have escalated rapidly and posed great challenges to the traditional von Neumann computing architecture due to intrinsic bottlenecks in bandwidth and energy efficiency [5-7]. To overcome these limitations, photonic processors have emerged as a promising technology for computing accelerators, capable of providing high bandwidths, high parallelism, low latencies, and low crosstalk [8-15]. In particular, photonic integrated technologies provide a new platform for the implementation of photonic neural networks (PNNs) [16-18]. Recently, various photonic chips have been proposed to implement linear matrix-vector multiplication (MVM) by using microring resonator (MRR) and Mach-Zehnder interferometer (MZI) arrays [8, 10, 12]. Beyond the MVM section, one of the remaining challenges in PNNs is the implementation of the nonlinear activation function. In ANNs, nonlinear activation functions (NAFs) are essential because they allow modeling of target variables that vary nonlinearly with their explanatory variables, enabling ANNs to learn and perform more complex tasks. Also, different NAFs are required for different neural processing and machine learning tasks to achieve optimal performance. However, in most reported PNNs, the optical NAFs are executed off-chip due to the absence of integrated optical nonlinear units. To tackle this problem, some integrated approaches have been proposed and demonstrated recently [19-36].
For instance, photodetector-modulator systems have been demonstrated to exhibit a variety of electrical-optical (E-O) nonlinear transfer functions, which can be employed for different neural-processing tasks [23, 24]. Cavity-loaded MZI structures based on free-carrier dispersion (FCD) and the Kerr effect in MRRs have also been used to realize programmable all-optical NAFs [29, 30]. An MZI mesh-based linear transformer has also been proposed to implement multiple types of NAFs [35]. However, the MZI structures can only implement a few NAFs, and their large footprints and required power supplies may lead to increased operating costs. In addition, devices based on germanium and silicon hybrid integration can implement three nonlinear responses [31, 32], although this may increase fabrication complexity. Therefore, from the perspective of on-chip integration and reconfigurability of photonic neural networks (PNNs), it is highly preferred that a single compact device can fulfill multiple NAFs. Recently, we reported an add-drop silicon microring to implement all-optical NAFs based on the thermo-optic (TO) effect; however, only a few types of NAFs were realized [36]. In this paper, based on our previously reported work, we propose, fabricate, and experimentally demonstrate an ultra-compact add-drop microdisk modulator to achieve multiple NAFs. Compared with MZI modulators, a microcavity modulator has a smaller footprint and a lower optical nonlinear threshold due to resonance-induced energy accumulation. Thus, microcavity modulators offer a great opportunity to realize E-O and all-optical NAFs and to implement more nonlinear responses [23, 32]. In the experiment, a variety of high-speed E-O NAFs are implemented by electrically controlling free-carrier injection. Moreover, various all-optical NAFs are realized by employing the TO nonlinear effect of the fabricated add-drop microdisk resonator. The nonlinear responses at both the through and drop ports can be tuned by controlling the wavelength detuning. The demonstrated nonlinear photonic integration unit may lay the foundation for developing fully on-chip PNNs. ## 2 Operation Principle As shown in Fig. 1(a), the nonlinear activation function is implemented by an add-drop silicon photonic microdisk modulator with a PN junction. The microdisk modulator presented here consists of a microdisk resonator coupled to two neighboring bus waveguides. The cross-section view of the microdisk modulator is shown in Fig. 1(b). Figure 1: (a) Schematic of the silicon photonic microdisk modulator with a PN junction. (b) Cross-section view of the microdisk modulator. In our design, a slab waveguide is introduced to move part of the sidewall further away from the fundamental whispering gallery mode (WGM), which weakens the scattering loss from sidewall roughness and helps alleviate resonance splitting. Furthermore, the introduction of the slab waveguide also gives sufficient area to form a PN junction. Reverse-biased PN diodes and forward-biased PIN diodes have been demonstrated to realize high-speed modulation through the plasma dispersion effect [37, 38, 39, 40]. By tuning the effective index of the waveguide, the resulting resonance shift induces a strong modulation of the transmitted signal. The schematic of the O-E-O nonlinear activation function using a silicon photonic microdisk modulator with a PN junction is shown in Fig. 2(a).
The O-E-O methods convert optical power into an electrical current and then back into the optical signal pathway, and the nonlinear responses usually occur during the E-O conversion process. Thanks to the Lorentzian-shaped transmission spectrum of the microdisk resonator, the nonlinear responses can be derived from the nonlinear E-O transfer function of the modulator. Because reverse-biased PN diodes require a much larger driving voltage than forward-biased ones, and the depletion effect is too weak to produce the desired responses, we use injection modulation to change the refractive index of the guided mode, which leads to a much larger resonance shift. As shown in Fig. 2(b), with increasing forward driving voltage, the resonance of the resonator blueshifts due to the accumulation of free carriers in the waveguide and the corresponding decrease of the refractive index of the silicon. Fig. 2: (a) Schematic of the O-E-O nonlinear activation function using a silicon photonic microdisk modulator with a PN junction. (b) Transmission spectra shift and resulting nonlinear responses due to E-O modulation. (c)(e) Schematics of the all-optical nonlinear responses at the through port and drop port, respectively. (d)(f) Transmission spectra shift and resulting nonlinear responses due to the TO effect. The red arrows indicate the direction of the wavelength shift. At the through output port, different nonlinear responses can be implemented by tuning the driving voltage. For example, when the input wavelength is fixed at the initial resonant wavelength (point A) of the microdisk modulator, the output optical power at the through port will gradually increase as the applied driving voltage increases. When the resonant peak shifts far away from the input optical wavelength, the output optical power will remain unchanged. Thus, a sigmoid-like nonlinear activation function can be realized. Similarly, other relevant nonlinear transfer functions such as the radial basis function and the negative ReLU can also be implemented by bias tuning [23]. Different from the realization of O-E-O NAFs, all-optical nonlinear responses depend on the nonlinear effects of the silicon microdisk resonator. The schematic diagram of realizing all-optical NAFs at the through port is shown in Fig. 2(c). The strong light-confinement nature of the silicon microdisk resonator can induce a nonlinear optical response at low input power [41, 42]. In the microdisk resonator, the optical energy absorbed by two-photon absorption (TPA) and free-carrier absorption (FCA) is mainly lost as heat, which leads to a redshift of the resonance due to the TO effect. To implement a stable all-optical nonlinear activation function, the wavelength of the input light is fixed at or close to the initial resonant wavelength of the microdisk modulator. In Fig. 2(d), when the input wavelength is set at point A, the output power of the through port first increases linearly with the input power, and there is almost no resonant shift when the input power is relatively weak. When the input power increases, the resonant peak starts to redshift, and the output power increases rapidly due to resonance detuning. As a result, the output power has a Softplus-like nonlinear relationship with the input power. When the input wavelength is set at point B, the output power first increases linearly with low input power. Beyond the threshold, the resonant wavelength starts to redshift and the output power drops due to the notch filtering.
As the input power further increases, the output power rises rapidly, finally forming the RBF-like nonlinear curve in Fig. 2(d). The schematic diagram of realizing all-optical NAFs at the drop port is shown in Fig. 2(e). In Fig. 2(f), the transmission spectrum of the drop port redshifts due to the TO effect. When the input wavelength is set at point A, the output power first increases linearly with low input power, since there is almost no resonant shift. When the input power continues to increase, the resonant peak starts to redshift, and the output power increases slowly due to resonance detuning and nonlinear loss. As a result, the output power has a saturated nonlinear relationship with the input power in Fig. 2(f). When the input wavelength is set at point B, the output power first increases linearly, followed by a rapid rise as the input power increases. When the input power continues to increase, the resonant peak deviates from the input wavelength, which makes the output power increase slowly, resulting in the nonlinear relationship in Fig. 2(f). Notably, since the input light wavelength is aligned with or close to the initial resonant peak, the optical bistability can be neglected in the proposed approach and stable nonlinear responses can be implemented [32, 41, 50]. ## 3 Device Fabrication and Characterization Figure 3(a) shows the microscope image of the fabricated add-drop microdisk modulator used in the experiment. The device is fabricated on a standard 220-nm silicon-on-insulator (SOI) platform. The microdisk resonator has a shallow-etched slab waveguide surrounding the disk and the lateral sides of the bus waveguides. A lateral PN junction is incorporated to achieve the electrical tunability of the microdisk modulator. To optimize the tuning efficiency, the overlap between the PN junction and the fundamental WGM is maximized by slightly shifting the center of the PN junction inward by 500 nm from the edge of the microdisk. Most of the disk is designed to be p-type doped, while the edge of the disk and the slab waveguide are n-type doped, since the plasma dispersion effect is more sensitive to changes of the free-hole concentration. In addition, there is no doping along an arc with an angle of 90° near the coupling region, to ensure that the doping does not deteriorate the optical coupling between the bus waveguide and the disk. To satisfy the phase-matching condition for optical coupling between the bus waveguide and the microdisk, the effective refractive index of the fundamental TE\({}_{0}\) mode supported by the bus waveguide is required to be equal to that of the fundamental WGM supported by the microdisk resonator. The bus waveguide is designed to have a width of 600 nm, and the microdisk has a radius of 10 \(\upmu\)m. The coupling gap has a width of 200 nm. As shown in Fig. 3(a), four TE\({}_{0}\)-mode grating couplers with a center-to-center spacing of 127 \(\upmu\)m are used to couple light between the chip and the input/output fibers. The grating coupler has a period of 655 nm. Figure 3(b) shows the zoom-in view of the microdisk resonator. The performance of the fabricated microdisk is first evaluated using an optical vector analyzer (LUNA OVA). Figure 3(c) shows the measured transmission spectrum at zero bias voltage. The free spectral range (FSR) is estimated to be 10.5 nm. Figure 3(d) shows the zoom-in view of a single resonance, with a resonant wavelength of 1548.71 nm.
The resonance has a 3-dB bandwidth of 70 pm, a Q-factor of 22000, and an extinction ratio of 11 dB. The insertion loss of the device is measured to be \(\sim\)13 dB around 1550 nm, most of which is caused by the fiber-to-fiber I/O coupling loss. As shown in Fig. 4(a), we further evaluate the performance of the fabricated microdisk modulator when the PN junction is reverse biased. With increasing reverse-bias voltage, more free carriers are extracted and the depletion region is widened. As a result, the effective refractive index of the waveguide mode increases, which leads to a redshift of the resonance. Meanwhile, the decrease in the number of free carriers reduces the free-carrier-induced absorption loss, which further enhances the Q-factor of the microdisk; the extinction ratio is also improved since the coupling approaches the critical condition. When the reverse-bias voltage is 8 V, the resonant wavelength is redshifted by 25 pm, and the extinction ratio is increased by 0.8 dB. Figure 4(c) shows the tuning of the transmission spectra, which gives a wavelength shift rate of 2.4 pm/V. The measurement results for the PN junction under forward bias are shown in Fig. 4(b). When the forward-bias voltage increases, the refractive index of the waveguide mode decreases due to free-carrier injection. Thus, the excess absorption loss becomes larger, which degrades the performance of the fabricated microdisk modulator. Figure 3: (a) Microscope image of the silicon microdisk modulator used in the experiment. (b) Details of the fabricated microdisk modulator. (c) The normalized transmission spectrum of the microdisk resonator. (d) Zoom-in view of a single resonance around 1548.71 nm. As can be seen in Fig. 4(b), the resonance blueshifts with both the Q-factor and the extinction ratio significantly reduced. For example, when the forward-bias voltage is 1 V, the resonant wavelength is blueshifted by 202 pm, the Q-factor is reduced to 18400, and the extinction ratio is decreased by 4.5 dB. Figure 4(d) shows the tuning of the transmission spectra, which gives a wavelength shift rate of \(-1.74\) nm/V after the PN junction is turned on. Apparently, reverse-biased PN diodes require a much larger driving voltage than forward-biased ones. ## 4 Experiment and discussion We carry out an experiment to measure the E-O NAFs based on the fabricated microdisk modulator. In the experiment, we use an external voltage source to drive the fabricated microdisk modulator and measure the output temporal signal. The fabricated device is mounted on a micro-positioning stage, and the temperature is kept at approximately 26 \({}^{\circ}\)C by a temperature controller (TEC). The RF electrical signal from an arbitrary waveform generator (AWG) is loaded onto the fabricated device through a high-speed microwave probe. The output light is detected by a high-speed photodetector (PD) and then sent into an oscilloscope to monitor the real-time waveforms. To implement various NAFs, the input light is modulated by a triangle waveform at 2.5 MHz, derived from an AWG. The type of nonlinear response depends on the bias condition and the input light wavelength. As shown in Fig. 5, three typical nonlinear response shapes are realized under different forward bias conditions, each relevant in different areas of neural processing and machine learning [43, 44, 45]. As shown in Figs. 4(b) and 4(d), the forward-biased PN junction turns on at about 0.7 V.
As the driving voltage continues to increase, the resonance blueshifts with both the Q-factor and the extinction ratio reduced. In Fig. 5(a), the input electrical signal applied to the fabricated modulator has an amplitude of 0.6 V and an offset of 1 V. The input wavelength is located at the resonant peak (\(\sim\)1548.71 nm). With increasing applied voltage, the resonant peak gradually moves away from the input light wavelength. Thus, the output optical power at the through port gradually increases and then remains constant. The output electrical signal from the PD is recorded by an oscilloscope, which shows a periodic nonlinear curve. The corresponding input and output voltage response curves are presented in Fig. 5(b). Such a sigmoid-like function is commonly used in recurrent Hopfield networks for nonlinear optimization [43]. Figure 4: (a)(b) Measured transmission spectra when the PN junction is reverse and forward biased. The red arrows indicate the direction of the wavelength shift. (c)(d) Measured wavelength shift as the reverse and forward bias voltages change. In Fig. 5(c), the input electrical signal applied to the fabricated modulator has an amplitude of 1 V and an offset of 1.1 V. The input wavelength is set to 1548.61 nm, which is blue-detuned. As the driving voltage increases, the output optical power at the through port gradually reduces to a minimum value and then increases due to the blueshift of the resonance. As a result, the radial basis function is obtained in Fig. 5(d), which is commonly used for ML based on support-vector machines [44]. In Fig. 5(e), the electrical signal applied to the fabricated modulator has an amplitude of 1 V and an offset of 0.94 V. The input wavelength is set to 1547.8 nm, which is far away from the resonance. As the driving voltage increases, the output optical power at the through port remains the same at the beginning and then decreases due to the blueshift of the resonance. As a result, a negative ReLU shape is obtained in Fig. 5(f). Figure 5: A variety of relevant nonlinear transfer functions from the modulator, taken at different forward bias conditions: (a)(b) sigmoid function, (c)(d) radial basis function, (e)(f) negative ReLU function. The frequency of the input signal is 2.5 MHz. Then, we measure the high-speed E-O nonlinear responses. Here, the initial resonant peak of the microdisk resonator is around 1549.39 nm. As shown in Fig. 6(a), the frequency of the input triangle waveform is set to 500 MHz. The bias voltage is 0.6 V, and the input electrical signal applied to the fabricated modulator has an amplitude of 1 V. With changing input wavelength, different waveforms are measured. For example, when the input wavelength is set to 1549.39 nm, the black curve in Fig. 6(a) is obtained, which shows a periodic sigmoid-like nonlinear curve. When the input optical wavelength changes to 1549.29 nm and 1549.19 nm, the radial basis and ReLU functions are implemented. Similar results at a frequency of 1 GHz are shown in Fig. 6(b). It is apparent that the different nonlinear responses correspond to different pieces of the microdisk resonator's Lorentzian shape, and more response shapes can be implemented by tuning the bias voltage and wavelength. The implemented nonlinear E-O transfer functions are relevant for a wide variety of neural-processing tasks. Since photodetectors' O-E response is linear, a high-speed photodetector can be further integrated with the modulator to realize chip-scale reconfigurable O-E-O NAFs [46-49].
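The correspondence between detuning and activation shape noted above can be illustrated with a minimal numerical sketch (illustrative only, not the authors' model): an ideal Lorentzian through-port dip with the 70-pm linewidth and 11-dB extinction ratio measured in Section 3, blueshifted linearly with drive voltage at the \(-1.74\) nm/V rate from Fig. 4(d). The laser detunings, turn-on voltage, and sweep range below are assumptions chosen to reproduce the three shapes of Fig. 5.

```python
import numpy as np

def through_port(detuning_pm, fwhm_pm=70.0, extinction_db=11.0):
    """Idealized Lorentzian through-port transmission of the add-drop resonator."""
    t_min = 10.0 ** (-extinction_db / 10.0)          # transmission at resonance
    lorentz = 1.0 / (1.0 + (2.0 * detuning_pm / fwhm_pm) ** 2)
    return 1.0 - (1.0 - t_min) * lorentz

def e_o_naf(v_drive, laser_detuning_pm, shift_rate_pm_per_v=-1740.0, v_on=0.7):
    """Drive-voltage-to-transmission map: the resonance blueshifts linearly with
    forward voltage above turn-on, sweeping the Lorentzian past the fixed laser."""
    resonance_shift = shift_rate_pm_per_v * np.maximum(v_drive - v_on, 0.0)
    return through_port(laser_detuning_pm - resonance_shift)

v = np.linspace(0.7, 1.7, 500)
sigmoid_like = e_o_naf(v, laser_detuning_pm=0.0)       # laser on resonance
rbf_like = e_o_naf(v, laser_detuning_pm=-100.0)        # slightly blue-detuned laser
neg_relu_like = e_o_naf(v, laser_detuning_pm=-1740.0)  # far-detuned: flat, then falling
```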
Figure 6: Measured output waveforms at different wavelengths. (a) The frequency of the input signal is 500 MHz, and the bias voltage is 0.6 V. (b) The frequency of the input signal is 1 GHz, and the bias voltage is 0.6 V. In addition, the fabricated microdisk modulator can be used to realize different all-optical NAFs via the TO nonlinear effect. A microdisk resonator accumulates optical energy at its resonant wavelength. With high input power at the resonant wavelength, the power density inside the microdisk resonator is amplified substantially because of its resonant enhancement property and ultra-small geometric dimensions, which induces the TO effect in the silicon waveguide and causes a red resonance shift. The optical nonlinear effects in the silicon microdisk include the Kerr effect, TPA, FCA, free-carrier dispersion (FCD), and the carrier-related TO effect. With continuous light input, the TO effect significantly dominates over the other nonlinear effects. We measure the transmission spectra and nonlinear transfer functions at the through and drop ports of the fabricated microdisk modulator. As can be seen in Figs. 7(a) and 7(b), the resonant peak redshifts with increasing input optical power. We keep the input light wavelength fixed at the initial resonant wavelength and measure the output power at the through port as the input power changes. As shown in Fig. 7(c), the output optical power of the through port first increases linearly with the input optical power and then rises rapidly. In Fig. 7(c), the input and output powers are plotted in milliwatts, and the inset shows the trend for input optical powers below 0.1 mW. The nonlinear threshold is measured to be 0.082 mW. We further demonstrate a radial basis function by tuning the input light wavelength slightly above the initial resonant peak. As shown in Fig. 7(d), different nonlinear curves are measured by adjusting the wavelength detuning. It is noteworthy that the bistability only occurs at wavelengths far away from the resonant wavelength. When the wavelength detuning is small, the bistability has no impact on the reliability of the measured nonlinear functions. As shown in Figs. 7(e) and 7(f), different NAFs are realized at the drop port by tuning the wavelength. It is noteworthy that although our measured NAFs are not exactly the commonly used NAFs, they can be regarded as approximations, and they also perform well in neural networks. We then measure the response speed of the implemented all-optical NAFs. The input light beam is first modulated by an intensity modulator loaded with a sawtooth or square electrical signal before entering the device. The output light from the device is detected by a PD and then captured by an oscilloscope. As shown in Fig. 8(a), we send a sawtooth signal into the device and measure the output signal of the through port, which agrees well with the measured nonlinear response in Fig. 7(c). Furthermore, to measure the accurate response time, we send a square-wave signal into the device and obtain the output signal shown in Fig. 8(b). We obtain a rise and fall time of \(\sim\)12 \(\upmu\)s, which is the time the output signal takes to become stable when the input signal changes its level. Figure 8(c) shows the nonlinear response at the through port with a wavelength detuning of 30 pm, which is in good agreement with the result in Fig. 7(d). In Fig. 8(d), the measured nonlinear response of the drop port also agrees well with the result in Fig. 7(e) when sending a sawtooth signal into the device.
To verify the effectiveness of the implemented NAFs, we perform a simulation of CNN-based MNIST handwritten digit classification. As shown in Fig. 9(a), the input image of a handwritten digit is a 2D matrix comprising 28\(\times\)28 pixels. The experimentally measured all-optical nonlinear activation function (clamped ReLU) in Fig. 7(e) is used. Figure 8: (a)(b) Measured waveforms at the through port after MZM sawtooth-signal and square-wave-signal modulation. (c) Measured waveform of the through port with a wavelength detuning of 30 pm. (d) Measured waveform of the drop port after MZM sawtooth modulation. The input undergoes convolutions, experimental ReLU operations, and pooling (maxpool), followed by two fully connected layers. We also adopt the standard rectified linear unit (ReLU) as a comparison to the optical activation function. The MNIST dataset is divided into training and test samples, with batch sizes of 32 and 2000, respectively. The simulated model is built in PyTorch and trained with the Adam optimizer at the default learning rate of 0.001. Figure 9(b) shows the negative log-likelihood loss as a function of the number of samples fed into the network during the training and test stages. The network is found to converge within 1 epoch to negligible loss and 98% accuracy during testing for both NAFs. As can be seen, the network converges faster when using the all-optical nonlinear activation function. Figure 9(c) shows a sample of handwritten digit inputs, with labels correctly predicted by the network. ## 5 Discussion Table 1 shows a detailed comparison of different approaches based on integrated platforms. The add-drop microdisk modulator has a low all-optical nonlinear threshold, which provides an opportunity for implementing multiple E-O and all-optical NAFs. In the experiment, we demonstrate three typical E-O NAFs, and three other types of nonlinear responses could also be implemented by further tuning the device [23]. The dynamic evolution of the nonlinear responses is also illustrated in the experiment. The proposed add-drop silicon microdisk modulator has a nonlinear E-O response speed of 1 GHz due to free-carrier injection modulation. Four different all-optical NAFs are also realized. The all-optical nonlinear response speed is less than 100 kHz due to the TO effect, which could be improved to the GHz range by using the FCD or Kerr effect [29, 30]. With further improvements, a microheater could be integrated on the microdisk modulator to precisely control the resonance peak. Thus, both E-O and all-optical NAFs could be made programmable and reconfigurable [29, 30]. Figure 9: (a) Schematic of the MNIST handwritten digit classification CNN, comprising convolution, maxpool, the experimental nonlinear function, and two fully connected layers, resulting in outputs in the range of [0, 9]. (b) Negative log-likelihood loss (left axis) during the training and testing stages, and accuracy during testing (right axis) as a function of the number of samples. (c) Example of network-predicted labels corresponding to six input images. In addition, it should be noted that the optical bistability curve strongly depends on the wavelength detuning between the input optical source and the initial resonant peak, which may not be conducive to the realization of stable NAFs [41, 42].
To solve this problem, we implement stable all-optical NAFs by controlling the wavelength detuning. In the experiment, the input light wavelength is slightly off or aligned with the initial resonant peak of the add-drop microdisk resonator, which greatly reduces the effect of bistability and provides a stable nonlinear response. Furthermore, self-pulsation could also be suppressed in the silicon MDR modulator based on free-carrier depletion [50]. **Table 1. Detailed comparison of different approaches based on photonic integrated platforms** \begin{tabular}{|c|c|c|c|c|} \hline Technologies & Materials & Types & Speed & Mechanism \\ \hline PD-Modulator [23] & Si & 6 & GHz & E-O \\ \hline MZI [24] & SiN & 2 & GHz & E-O \\ \hline EA modulator [25] & ITO-Si & 1 & GHz & E-O (FCA) \\ \hline MZI-MRR [29] & Si & 4 & GHz & O (FCD effect) \\ \hline MZI-MRR [30] & SiN & 4 & 10 GHz & O (Kerr effect) \\ \hline Ge/Si waveguide [31] & Ge/Si & 1 & 70 MHz & O (FCD) \\ \hline Ge/Si microring [32] & Ge/Si & 3 & \(<\)100 kHz & O (TO effect) \\ \hline MZI mesh [35] & Si & 2 & / & O (Interference) \\ \hline Si microring [36] & Si & 4 & \(<\)100 kHz & O (TO effect) \\ \hline This work & Si & 6 & GHz & E-O \\ \cline{3-5} & & 4 & \(<\)100 kHz & O (TO effect) \\ \hline \end{tabular} "O" refers to an all-optical implementation of the nonlinear activation function. ## 6 Conclusion In summary, we propose and demonstrate an ultra-compact integrated microdisk modulator to achieve multiple NAFs on a silicon photonic platform. Based on the plasma dispersion effect, the fabricated add-drop microdisk modulator exhibits a variety of high-speed nonlinear E-O transfer functions, including sigmoid, radial basis, and ReLU, by electrically controlling free-carrier injection modulation. The measured E-O nonlinear response speed is up to 1 GHz. Meanwhile, four different all-optical nonlinear responses are also realized by employing the TO nonlinear effect of the silicon microdisk resonator. The measured all-optical nonlinear response speed is about 100 kHz. Moreover, we simulate a benchmark task, CNN-based handwritten digit classification, with an experimentally measured activation function and obtain an accuracy of 98%. The demonstrated microdisk modulator holds great potential for the implementation of large-scale PNNs. ## Declaration of competing interest The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. ## Acknowledgements This work was supported by the National Key R&D Program of China (No. 2018YFE0201800), and the National Natural Science Foundation of China (NSFC) (62105028, 62071042, 62005018).
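As a closing illustration of the CNN benchmark in Fig. 9, the following is a minimal PyTorch sketch (not the authors' code): the measured clamped-ReLU-like response of Fig. 7(e) is stood in for by a hard clamp, and the layer widths are illustrative assumptions; in practice one would interpolate the measured transfer curve.

```python
import torch
import torch.nn as nn

class MeasuredNAF(nn.Module):
    """Stand-in for the measured clamped-ReLU-like all-optical response of
    Fig. 7(e); a real experiment would interpolate the measured curve."""
    def forward(self, x):
        return torch.clamp(x, min=0.0, max=1.0)

# Minimal MNIST CNN mirroring Fig. 9(a): conv -> NAF -> maxpool -> two FC layers.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # input: (N, 1, 28, 28)
    MeasuredNAF(),
    nn.MaxPool2d(2),                             # 28x28 -> 14x14
    nn.Flatten(),
    nn.Linear(8 * 14 * 14, 64),
    MeasuredNAF(),
    nn.Linear(64, 10),
    nn.LogSoftmax(dim=1),                        # pairs with the NLL loss of Fig. 9(b)
)
loss_fn = nn.NLLLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # Adam, lr = 0.001 as in the text
```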
2302.02595
Clarifying Trust of Materials Property Predictions using Neural Networks with Distribution-Specific Uncertainty Quantification
It is critical that machine learning (ML) model predictions be trustworthy for high-throughput catalyst discovery approaches. Uncertainty quantification (UQ) methods allow estimation of the trustworthiness of an ML model, but these methods have not been well explored in the field of heterogeneous catalysis. Herein, we investigate different UQ methods applied to a crystal graph convolutional neural network (CGCNN) to predict adsorption energies of molecules on alloys from the Open Catalyst 2020 (OC20) dataset, the largest existing heterogeneous catalyst dataset. We apply three UQ methods to the adsorption energy predictions, namely k-fold ensembling, Monte Carlo dropout, and evidential regression. The effectiveness of each UQ method is assessed based on accuracy, sharpness, dispersion, calibration, and tightness. Evidential regression is demonstrated to be a powerful approach for rapidly obtaining tunable, competitively trustworthy UQ estimates for heterogeneous catalysis applications when using neural networks. Recalibration of model uncertainties is shown to be essential in practical screening applications of catalysts using uncertainties.
Cameron Gruich, Varun Madhavan, Yixin Wang, Bryan Goldsmith
2023-02-06T07:03:02Z
http://arxiv.org/abs/2302.02595v1
Clarifying Trust of Materials Property Predictions using Neural Networks with Distribution-Specific Uncertainty Quantification ###### Abstract It is critical that machine learning (ML) model predictions be trustworthy for high-throughput catalyst discovery approaches. Uncertainty quantification (UQ) methods allow estimation of the trustworthiness of an ML model, but these methods have not been well explored in the field of heterogeneous catalysis. Herein, we investigate different UQ methods applied to a crystal graph convolutional neural network (CGCNN) to predict adsorption energies of molecules on alloys from the Open Catalyst 2020 (OC20) dataset, the largest existing heterogeneous catalyst dataset. We apply three UQ methods to the adsorption energy predictions, namely \(k\)-fold ensembling, Monte Carlo dropout, and evidential regression. The effectiveness of each UQ method is assessed based on accuracy, sharpness, dispersion, calibration, and tightness. Evidential regression is demonstrated to be a powerful approach for rapidly obtaining tunable, competitively trustworthy UQ estimates for heterogeneous catalysis applications when using neural networks. Recalibration of model uncertainties is shown to be essential in practical screening applications of catalysts using uncertainties. ## 1 Introduction Machine learning (ML) approaches have rapidly grown in popularity to accelerate catalyst screening and understanding [1, 2, 3, 4]. Making predictions of catalyst properties with ML models is orders of magnitude faster compared to first-principles simulation of catalysts, for example, using density functional theory (DFT) modeling to compute adsorption energies of molecules. DFT modeling combined with ML has emerged as a compelling approach for rapid materials characterization, enabling a several-orders-of-magnitude expansion in the number of materials able to be studied compared to DFT modeling alone [5, 6, 7]. However, the potential of ML models to efficiently explore large catalyst spaces can only be achieved if it is simple to detect when ML model predictions are accurate or highly uncertain. Uncertainty provides a means to infer the accuracy of a model without explicitly knowing the accuracy, but often this uncertainty of the ML model cannot be trusted [8]. Reliable uncertainty quantification (UQ) of ML model predictions is a fundamental challenge to ML-guided materials and molecule discovery [8, 9, 10, 11, 12]. Predictive uncertainty can be estimated with either distribution-specific methods or distribution-free methods (e.g., Monte Carlo dropout [13] with a Gaussian-specific calibration assumption [14] or conformal UQ [15] respectively). By distribution-specific methods, we mean methods that assume an underlying distribution shape to the uncertainty of model predictions or their calibrated scaling, while distribution-free methods make no such assumption. Established UQ methods for ML models are often costly to obtain and have limitations in assessing prediction errors for chemical space exploration [16]. Effective uncertainty estimates of predictions are an important design element to enhance several advanced ML strategies for materials discovery, such as high-throughput screening [17, 18], transfer learning [19], and active learning [20, 21], as illustrated in **Fig. 1**. For catalyst discovery, one can ascertain if an ML model is making accurate predictions of catalyst behavior by confirming the predicted properties experimentally or through first-principles calculations. 
In a high-throughput context where many thousands of systems or more are being studied, frequently confirming the predictive accuracy of the ML model via experiment or first-principles calculations is impractical. UQ is a means by which one can quantify the trustworthiness of ML predictions in a way that is practical and attainable for materials discovery strategies that explore vast materials spaces. ML-guided studies of catalysts that apply uncertainty have already been performed, but these studies historically rely on Gaussian process regression [22, 23, 24, 25]. Gaussian process regression is formulated from Bayesian statistics and directly outputs uncertainty estimates of the predictions [26]. However, the computational cost of training a Gaussian process regression model typically scales \(O(N^{3})\) and thus grows unfavorably with dataset size, which is an open challenge for big-data applications [27]. Therefore, there is a practical limitation of these models for high-throughput discovery strategies. Neural networks (NNs) are popular because of their relatively high accuracy across diverse chemical spaces and excellent performance for large datasets [28, 29, 30, 31, 32]. For the Open Catalyst 2020 (OC20) dataset [33], state-of-the-art deep NN model variants (e.g., CGCNN [34], SchNet [35], and DimeNet++ [36]) have shown up to a 40% improvement in their accuracy of adsorption energy predictions by using 460,000 training samples versus 10,000 samples [33]. The arrival of large training datasets--and innovations in NN model architectures--has led to steady improvements in accuracy for catalyst property predictions, such as molecular adsorption energies [37]. However, less attention in the field has been devoted to understanding and quantifying the predictive uncertainty associated with these NN architectures applied to prevailing catalysis datasets [14, 15]. For uncertainty-guided sample acquisition and efficient model training for catalysis using NNs, it is crucial to ensure that the uncertainty estimates from a given UQ method are reliable and useful. Typical metrics for evaluating uncertainty estimates include accuracy, sharpness, dispersion, calibration, and tightness, which are discussed in Section 2.4. The reliability of a UQ estimate is often primarily discussed in terms of calibration, which refers to how well the uncertainty estimate represents a confidence interval that encloses the ground-truth target [38, 39, 40, 41, 42, 43]. Calibrated uncertainty estimates can be used to forecast the accuracy of materials predictions, thereby quantifying the trustworthiness of an ML model to examine and propose materials for further study. Using calibrated uncertainty estimates to forecast whether a large set of ML predictions is reliably accurate or not is common to many applications, such as autonomous driving [44], image analysis [45], and materials discovery [46]. Herein, we compare three different UQ methods, namely, \(k\)-fold ensembling [47, 48, 49], Monte Carlo (MC) dropout [13], and evidential regression [50, 51], to better ensure effective use of NN model architectures for high-throughput catalyst discovery strategies. All three methods are able to express uncertainty as a standard deviation (\(\sigma\)). Evidential regression has seen use in small molecule prediction [50] but has not yet been explored for catalysis applications to our knowledge.
These three UQ methods are studied using a crystal graph convolutional neural network (CGCNN) to predict adsorption energies of molecules on solid catalyst materials from the OC20 dataset, many of which are binary and ternary alloys [33]. Alloy catalysts are highly relevant in industrial applications, such as the Haber-Bosch process [52], catalytic cracking of hydrocarbons [53], and naphtha reforming [54]. The OC20 is an ideal test case for understanding UQ methods because it is the largest and most diverse heterogeneous catalysis dataset, which represents a case study for many material search challenges. Fig. 1: **Advanced materials discovery strategies are enabled with uncertainty quantification**. Screening, active learning, and transfer learning are enhanced by trustworthy estimates of predictive uncertainty, particularly in high-throughput applications where the size of the uncertainty estimate is used to infer model accuracy. The oracle refers to some trustworthy system that outputs the desired target, such as DFT-accurate materials properties. We use accuracy, sharpness, dispersion, calibration, and tightness as UQ metrics to compare the trustworthiness of each of the three UQ methods, **Fig. 2**. We define a trustworthy UQ method as one that is perfectly calibrated because calibration measures how well the uncertainty intervals probabilistically grow or shrink with the predictive error. In practice, one may not know if a UQ method is perfectly calibrated; moreover, it is possible for two UQ methods to have the same average measure of calibration but have different uncertainty estimates for the same materials, so accuracy, sharpness, dispersion, and tightness are discussed as secondary trustworthiness criteria when comparing UQ methods. We particularly emphasize differences in calibration between UQ methods via adversarial group calibration [55] to determine their appropriateness for high-throughput catalyst discovery strategies. Scalar recalibration [56] is applied to address poor calibration performance between UQ methods. Evidential regression is found to be competitively trustworthy before recalibration and the most trustworthy after recalibration. We demonstrate the use of UQ and evidential regression to enumerate materials for DFT-predicted adsorption in the case of a hydrogen adsorbate. Improved reliability in the uncertainty estimates for this demonstration is observed after recalibration. This work will guide future efforts of uncertainty-guided catalyst search and discovery using ML. ## 2 Experimental Methods ### Dataset: Open Catalyst 2020 We used the OC20 dataset to train and test our CGCNN model to predict adsorption energies of molecules on catalyst surfaces [33]. OC20 encompasses a state-of-the-art collection of DFT calculations of adsorption energies on binary and ternary alloy catalysts spanning the periodic table, as well as pure metals. The dataset comprises 82 nitrogen-, oxygen-, and carbon-containing adsorbates. OC20 has over 130,000,000 data points associated with 5,243 unique alloy catalyst compositions. Within OC20, there are different versions of datasets depending on what task an ML model is performing to predict adsorption energy. We performed the initial-structure-to-relaxed-energy (IS2RE) task, where the ML model accepts an unrelaxed initial structure of an adsorbate/alloy system and predicts the relaxed energy of the system.
Electronic energies of the geometry-optimized (i.e., relaxed) gas phase species and the bare alloy surface are subtracted from the combined adsorbate/alloy system electronic energy to obtain an adsorption energy at 0 K [33]. We trained the CGCNN models on the entire IS2RE training dataset composed of 460,328 systems. To benchmark the model accuracy and the effectiveness of the UQ methods, test adsorption energy predictions were made on the entire IS2RE Validation In-Domain (Val-ID) dataset composed of 24,943 systems. The Val-ID dataset was composed of adsorbate/alloy systems that the model had never seen during training. We herein refer to the Val-ID dataset as the test dataset because the dedicated test dataset provided within OC20 does not include target labels due to its usage in leaderboard competitions. ### Model: CGCNN We used a CGCNN model for adsorption energy predictions [33, 34, 57]. The CGCNN in our study is a deep-learning convolutional NN of atomic surface structures using the distances between atoms in the crystal structure as encoded edge information. Gaussian basis functions are used to encode the distances between atoms [33]. Descriptions of unique bonds between atoms (i.e., atom embeddings) and the distances between atoms (i.e., distance embeddings) were computed. The atom and distance embeddings were combined with element-specific properties of atoms in the system [33, 34]. These combined input features pass through convolutional layers that reduce the incoming high-dimensional vector to a fixed lower-dimensional vector. The model accepts adsorbate/alloy systems with a varying number of atoms because all the data in transit inside the model gets convolved to the same number of dimensions. After the convolutional layers, the data was transformed by a series of fully connected layers. Model parameters were optimized to produce accurate predictions via mini-batch stochastic gradient descent. The best hyperparameters reported in the original OC20 dataset paper were used for our CGCNN model [33]. Before applying any UQ method, our baseline CGCNN model had a 0.651 eV mean absolute error (MAE) on the IS2RE Val-ID dataset, which is comparable to the expected error reported in the literature [33]. ### Uncertainty Quantification Methods We compared three different UQ methods from the literature: \(k\)-fold ensembling [47, 48, 49], MC dropout [13], and evidential regression [50, 51], **Fig. 2a**. Each adsorption energy prediction was expressed as a mean prediction \(\mu\) with uncertainty as a standard deviation \(\sigma\) centered around the prediction. Uncertainty intervals around each prediction were constructed using the associated value of the standard deviation \(\sigma\) (i.e., an interval of \(\mu\pm 3\sigma\)). From an application standpoint, these uncertainty intervals are interpreted similarly to a confidence interval in that they quantify the lack of confidence in ML predictions. Unlike confidence intervals, uncertainty intervals only have coverage guarantees if a UQ method is perfectly calibrated, **Fig. 2c**. _k-fold Ensembling:_ The _k_-fold ensemble method estimates uncertainty by segregating the training dataset into \(k\) nonoverlapping subsets (i.e., folds) that are used to train \(k\) models. Each model makes adsorption energy predictions on the same test set, but the predictions for these same systems will be different from model to model because each model was trained on a different fold of training data.
For every adsorbate/alloy system, \(k\) predictions were averaged together to produce a mean adsorption energy prediction \(\mu\). The uncertainty (i.e., a standard deviation \(\sigma\)) was calculated for each average prediction \(\mu\). Herein, we used \(k=5\). Fig. 2: **Uncertainty quantification workflow to predict adsorption energies of molecules on catalysts. (a) Three UQ methods, (left) the _k_-fold ensemble method, (middle) Monte Carlo dropout, and (right) evidential regression, are applied onto a crystal graph neural network to give both a property prediction and an associated uncertainty. (b) UQ metrics of accuracy, sharpness, dispersion, calibration, and tightness are applied to compare UQ methods. (c) UQ metrics are interpreted to assess the trustworthiness of materials predictions.** _Monte Carlo Dropout:_ MC dropout estimates uncertainty by modifying the fully connected layers of the NN model. Dropout was applied both during and after training when making predictions--this protocol is necessary to approximate a Gaussian process [13]. Like the ensemble method, all the adsorption energy predictions corresponding to any specific system were used to calculate a mean adsorption energy prediction \(\mu\) and an associated uncertainty \(\sigma\) for that system. We refer to the different adsorption energy predictions for the same adsorbate/alloy system as MC dropout samples. When we say an uncertainty estimate is a 1,000-sample MC dropout estimate, we mean that 1,000 different adsorption energy predictions for the same system were used post-training to calculate a mean prediction \(\mu\) and uncertainty estimate \(\sigma\). For each fully connected layer of our CGCNN model, 5% of the nodes were stochastically dropped out for each MC dropout sample. A dropout rate of 5% was chosen because it results in a negligible reduction in model accuracy on the test set, **Fig. S1**. _Evidential Regression:_ Evidential regression estimates uncertainty by modifying both the loss function and model architecture of a regression model [50, 51]. Evidential regression does not require sampling predictions post-training to estimate the uncertainty, unlike \(k\)-fold ensembling and MC dropout. All the information needed to make an uncertainty estimate of a prediction is outputted by the model at prediction time. Evidential regression uses an evidential loss function \(L_{i}(w)\) such that the model learns the predictive uncertainty while it minimizes the predictive error during training. \[L_{i}(w)=L_{i}^{NLL}(w)+\lambda L_{i}^{R}(w) \tag{1}\] Here \(w\) refers to the model weights that are optimized during training, \(L_{i}^{NLL}(w)\) refers to the negative log-likelihood (\(NLL\)) term of the evidential loss function, and \(L_{i}^{R}(w)\) refers to the regularization term of the evidential loss function. These terms are defined as follows: \[L_{i}^{NLL}(w)=\frac{1}{2}\log\left(\frac{\pi}{\nu}\right)-\alpha\log(\Omega)+\left(\alpha+\frac{1}{2}\right)\log\left((y_{i}-\gamma)^{2}\nu+\Omega\right)+\log\left(\frac{\Gamma(\alpha)}{\Gamma\left(\alpha+\frac{1}{2}\right)}\right) \tag{2}\] \[L_{i}^{R}(w)=|y_{i}-\gamma|\cdot(2\nu+\alpha) \tag{3}\] Here \(\Omega=2\beta(1+\nu)\), and the parameters \(\gamma\), \(\nu\), \(\alpha\), and \(\beta\) are the evidential distribution parameters. The symbol \(\Gamma\) refers to the gamma function. The variable \(\lambda\) is a scalar hyperparameter that controls the degree of regularization introduced by the term \(L_{i}^{R}(w)\).
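For concreteness, a minimal PyTorch sketch of Eqs. (1)-(3) and the uncertainty extraction of Eqs. (4)-(5) below follows (an illustration of the published evidential loss of Amini et al. [50], not the authors' implementation; a network head producing \((\gamma,\nu,\alpha,\beta)\) with \(\nu,\beta>0\) and \(\alpha>1\) enforced, e.g., via a softplus, is assumed):

```python
import math
import torch

def evidential_loss(gamma, nu, alpha, beta, y, lam=0.05):
    """Evidential regression loss of Eqs. (1)-(3); all arguments are tensors of
    per-system evidential parameters and DFT target adsorption energies y."""
    omega = 2.0 * beta * (1.0 + nu)
    nll = (0.5 * torch.log(math.pi / nu)                        # Eq. (2), term by term
           - alpha * torch.log(omega)
           + (alpha + 0.5) * torch.log((y - gamma) ** 2 * nu + omega)
           + torch.lgamma(alpha) - torch.lgamma(alpha + 0.5))
    reg = torch.abs(y - gamma) * (2.0 * nu + alpha)             # Eq. (3)
    return (nll + lam * reg).mean()                             # Eq. (1), lam = lambda

def evidential_uncertainties(nu, alpha, beta):
    """Aleatoric and epistemic uncertainties of Eqs. (4)-(5)."""
    return beta / (alpha - 1.0), beta / (nu * (alpha - 1.0))
```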
We modified the CGCNN model architecture to predict these four parameters for each adsorbate/alloy system. The parameter \(\gamma\) represents the predicted target property (here the adsorption energy of a system). The parameters \(\nu\), \(\alpha\), and \(\beta\) are related to the aleatoric (\(\sigma_{a}\)) and epistemic (\(\sigma_{e}\)) uncertainties of each prediction: \[\sigma_{a}=\frac{\beta}{\alpha-1} \tag{4}\] \[\sigma_{e}=\frac{\beta}{\nu(\alpha-1)} \tag{5}\] Whereas \(k\)-fold ensembling and MC dropout provide one uncertainty estimate for each prediction, evidential regression provides two estimates, corresponding to aleatoric and epistemic uncertainty. For this experiment, we focus on the epistemic uncertainty of evidential regression because epistemic uncertainty is related to the uncertainty of the ML model and is a reducible uncertainty, whereas aleatoric uncertainty is a type of stochastic, irreducible uncertainty [58]. Both aleatoric and epistemic uncertainty estimates across values of \(\lambda\) are included in **Fig. S2 and Fig. S3** of the SI for completeness. The regularization term \(L_{i}^{R}(w)\) inflates the uncertainty estimate of the adsorption energy predictions based on the absolute residual error \(|y_{i}-\gamma|\), and \(\lambda\) weights the magnitude of this inflation. Because \(\lambda\) was fixed before training the model and inflates the uncertainty of the predictions, evidential regression gives a tunable uncertainty estimate that grows or shrinks depending on the choice of \(\lambda\). ### Uncertainty Quantification Metrics We compare the UQ methods in terms of five UQ metrics: accuracy, sharpness, dispersion, calibration, and tightness [14, 43]. These UQ metrics collectively provide a means of comparing the predictive accuracy, shape, and reliability of the uncertainty distribution between UQ methods. Ideally, a UQ method would be accurate, sharp, disperse, calibrated, and tight, **Fig. 2b**. A UQ method that expresses a prediction as a mean and the uncertainty as a standard deviation should give accurate predictions regardless of the uncertainty centered around the predictions. A UQ method is ideally sharp, implying that the method gives highly certain adsorption energy predictions on average. A UQ method is preferably disperse such that the uncertainty level of different test predictions is easily distinguished. Importantly, the UQ method should also be calibrated to allow reliable interpretation as a confidence interval. Lastly, a UQ method should be tight in that the predictive uncertainty interval should only be as large as necessary to capture the ground truth. _Accuracy:_ The mean absolute error (_MAE_), root mean squared error (_RMSE_), median absolute error (_MDAE_), mean absolute relative percent distance (_MARPD_), coefficient of determination (\(R^{2}\)), and the correlation coefficient (\(R\)) were used to quantitatively assess the accuracy of each UQ method. Because _MDAE_ uses the median of the absolute residual error distribution across predictions, it is less sensitive to outliers compared to the _RMSE_. _MAE_ and _MARPD_ are formulated as means, so these metrics have some sensitivity to outliers.
_MARPD_ is a normalized measure of accuracy [14]: \[\text{MARPD}=\frac{1}{N}\sum_{i=1}^{N}100\cdot\frac{\left|\hat{y}_{i}-y_{i}\right|}{\left|\hat{y}_{i}\right|+\left|y_{i}\right|} \tag{6}\] where \(\hat{y}_{i}\) is the \(i^{\text{th}}\) adsorption energy prediction from the UQ method applied to our CGCNN model and \(y_{i}\) is the true adsorption energy of the \(i^{\text{th}}\) adsorbate/alloy system as predicted by DFT. _Sharpness:_ Sharpness (_Sha_) was expressed as [14]: \[\text{Sha}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}\sigma_{i}^{2}} \tag{7}\] where \(N\) is the total number of data points and \(\sigma_{i}^{2}\) is the squared uncertainty of the \(i^{\text{th}}\) adsorption energy prediction. _Dispersion:_ The dispersion of uncertainty across predictions for each UQ method was measured by constructing a box plot for each method and measuring the interquartile range (_IQR_). \[\text{IQR}=Q3-Q1 \tag{8}\] \(Q3\) is the third quartile, corresponding to the 75\({}^{\text{th}}\) percentile, and \(Q1\) is the first quartile, corresponding to the 25\({}^{\text{th}}\) percentile. Whiskers are calculated based on 1.5\(\times\)_IQR_ below the first quartile and above the third quartile, respectively. Each box plot was overlaid with a violin plot, which provides a smooth kernel density estimate of the distribution shape such that each distribution can be visually inspected similarly to a histogram. Each kernel density estimate was performed using Scott's rule for calculating the estimator bandwidth [59]. The coefficient of variation (\(Cv\)) was calculated to measure dispersion as well. While _IQR_ quantifies the spread of a distribution bulk on an absolute scale, \(Cv\) quantifies the spread of a distribution relative to its mean. These metrics in tandem allow for a more robust analysis of dispersion than either alone. The coefficient of variation was expressed as: \[\text{Cv}=\frac{\sigma_{\sigma}}{\mu_{\sigma}} \tag{9}\] where \(\sigma_{\sigma}\) and \(\mu_{\sigma}\) are the standard deviation and mean of the uncertainty distribution, respectively. We include Bessel's correction in the calculation of \(\sigma_{\sigma}\). _Calibration:_ Calibration formally refers to how well the uncertainty estimate represents the true correctness likelihood of a prediction [38, 40, 41, 42, 43]. A calibrated ML model is a model whose uncertainty estimates should be comparable with its predictive error residuals [14]. In other words, a calibrated model should often be highly certain for highly accurate predictions and vice versa. Quantitatively, a calibrated model means that the uncertainty magnitude \(\sigma\) of an adsorption energy prediction \(\mu\) should often be comparable in magnitude to the residual predictive error (\(y-\mu\)), where \(y\) is the ground-truth adsorption energy of the system the model tries to correctly predict. Calibration of a model was assessed by constructing a calibration curve, **Fig. 2b**. A calibration curve displays the true frequency of points in each confidence interval relative to the predicted fraction of points in that interval [60]. For calibration curve construction, a quantile-based method was used [55]. Programmatically, this task was accomplished in the same way as explained by Tran _et al._ [14]. The residual predictive error was normalized by the uncertainty of that prediction. All the normalized residual errors of all the predictions were combined to construct a distribution.
If this distribution was comparable to a unit Gaussian distribution, then the uncertainty across all the predictions was said to be calibrated on average. In other words, the uncertainty across all predictions is calibrated on average if the normalized residual predictive error follows the probability density function of a unit Gaussian distribution: \[\Phi\left(z=\frac{y-\mu}{\sigma}\right)=\frac{e^{-\frac{z^{2}}{2}}}{\sqrt{2\pi}} \tag{10}\] where \(z\) is the normalized residual predictive error. This distribution-specific measure of calibration was applied to all UQ methods tested. To better interpret this procedure with intuition, consider the case where an adsorption energy prediction is highly accurate but has high uncertainty. In this case, the normalized residual error (\(z\)) is close to zero because the numerator is small; likewise, the denominator (i.e., uncertainty) is sufficiently large. It is desirable to have a normalized residual less than or equal to unity for predictions because this implicitly shows that the uncertainty interval is large enough to enclose the ground-truth adsorption energy. The deviation of the normalized residual error from unit Gaussian behavior is what produces a nonideal calibration curve, which is constructed by defining quantiles and comparing the proportion of data inside each quantile between distributions. Every calibration curve shown herein visually represents the average calibration, which refers to calibration across the entire adsorption energy test set. However, randomly selected subgroups of predictions and even individual predictions should be calibrated as well [55, 60]. We refer to the global scope of calibration across all possible subgroups as the calibration density; to the best of our knowledge, we have not seen this term used in the literature. We use this term to better intuitively emphasize that any stochastic subselection of adsorption energy predictions across varying scales should be calibrated, that is, any subselection of predictions should ideally produce a normalized residual error distribution that is unit Gaussian, **Fig. S4**. If this expectation is met, then we say that a UQ method has effective calibration density. Calibration density is related to individual uncertainty estimates in that it conceptually represents the map of calibration at all sample sizes and quantifies the deviation of the average uncertainty from individual estimates [61]. Calibration density was assessed via adversarial group calibration [55, 61]. Ten subgroups of predictions were drawn across different subgroup sizes (e.g., ten subgroups that each represent 10% of the test set, 20%, 30%, etc.). Within each subgroup, predictions were drawn without replacement. For each subgroup size, the ten subgroups were compared in an adversarial fashion. By adversarial, we mean that the worst-performing subgroup of the ten compared was chosen as a worst-performing estimate of the calibration density. For each subgroup size, a hundred trials of picking ten subgroups at that size were performed. Using these trials, a mean worst-performing estimate of the calibration density was calculated for each subgroup size, as well as standard error bars **(Fig. 8)**.
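A compact sketch of this adversarial procedure (illustrative, not the authors' code; `y`, `mu`, and `sigma` are arrays of DFT targets, predictions, and uncertainties, and the miscalibration area follows the quantile-based calibration-curve construction described above):

```python
import numpy as np
from scipy.stats import norm

def miscalibration_area(y, mu, sigma, n_quantiles=99):
    """Mean gap between the quantile-based calibration curve and the diagonal,
    under the unit-Gaussian expectation of Eq. (10); smaller is better."""
    expected = np.linspace(0.01, 0.99, n_quantiles)
    z_cdf = norm.cdf((y - mu) / sigma)               # normalized residuals -> CDF
    observed = np.array([(z_cdf <= p).mean() for p in expected])
    return np.abs(observed - expected).mean()

def adversarial_group_calibration(y, mu, sigma, fracs=(0.1, 0.25, 0.5, 1.0),
                                  n_groups=10, n_trials=100, seed=0):
    """Mean worst-of-ten subgroup miscalibration per subgroup size."""
    rng = np.random.default_rng(seed)
    n, results = len(y), {}
    for frac in fracs:
        size, worst = max(2, int(frac * n)), []
        for _ in range(n_trials):
            scores = [miscalibration_area(y[idx], mu[idx], sigma[idx])
                      for idx in (rng.choice(n, size, replace=False)
                                  for _ in range(n_groups))]
            worst.append(max(scores))                # adversarial: keep worst subgroup
        results[frac] = (np.mean(worst), np.std(worst) / np.sqrt(n_trials))
    return results                                   # {fraction: (mean, std. error)}
```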
_Tightness:_ It is desirable to score the quality of the uncertainty intervals irrespective of calibration. For example, assessing calibration might indicate that a small uncertainty interval could be sufficient to capture the in-domain ground truth across most predictions, but a UQ method might often be returning unnecessarily large intervals in practice. The uncertainty intervals should only be as large as necessary to capture the ground truth, so scoring metrics assess what we herein refer to as the tightness of a UQ method. Tightness was measured by computing a negatively oriented mean interval score [39]. An increasingly lower score refers to an increasingly tight and appropriate uncertainty estimate, **Fig. 2c**. This mean interval score rewards smaller uncertainty intervals that capture the ground truth but penalizes smaller intervals that do not. Across all predictions, we drew uncertainty intervals across a ground-truth coverage of 1% to 99% in 1% increments. The interval length corresponds to the coverage of our Gaussian distribution assumption (e.g., \(\mu\pm 1\sigma\) is a coverage of \(\sim\)68%, \(\mu\pm 2\sigma\) is a coverage of \(\sim\)95%, etc.). For each prediction, we computed a mean interval score across all coverage levels, **Fig. 2b**. A mean interval score for the entire test set was computed by calculating a mean of these mean scores.
### Scalar Recalibration
The UQ methods tested were recalibrated [61] such that the uncertainty interval of \(\mu\pm 3\sigma\) more reliably behaves as a confidence interval and encloses the ground truth, which refers to the target property. An interval of \(\pm 3\sigma\) was chosen because such an interval provides 99.7% confidence under a perfect calibration assumption. Scalar recalibration was used, which involves multiplying all the uncertainty estimates \(\sigma\) across all adsorption energy predictions by a constant [56]. The constant was chosen via a black-box optimization algorithm that attempts to minimize the miscalibration area for each UQ method, **Table S4**. Brent's method was used for this recalibration because--unlike Platt scaling [62], for example--it is a nonparametric method that makes no assumptions about the shape of the calibration curve(s) to be recalibrated [63]. The scikit-learn implementation of Brent's method was used [64].
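Under these choices, scalar recalibration reduces to a one-dimensional bounded minimization; a minimal sketch using SciPy's bounded (Brent-type) minimizer follows, with illustrative bounds and the `miscalibration_area` helper from the sketch above.

```python
from scipy.optimize import minimize_scalar

# Scalar recalibration: find one constant s that rescales every uncertainty,
# sigma -> s * sigma, so as to minimize the miscalibration area.  The bounds
# are illustrative; miscalibration_area() is the sketch given earlier.
def scalar_recalibrate(y, mu, sigma):
    result = minimize_scalar(lambda s: miscalibration_area(y, mu, s * sigma),
                             bounds=(1e-3, 1e3), method="bounded")
    return result.x  # multiply all uncertainty estimates by this constant
```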
## 3 Results & Discussion
We assess the accuracy of each UQ method applied onto our CGCNN model by comparing parity plots and different quantitative metrics of accuracy, **Fig. 3**. The three UQ methods have similar distributions of adsorption energy predictions. By all quantitative measures, evidential regression outperforms the other UQ methods in accuracy. Evidential regression has an \(R^{2}\) of 0.902 (**Fig. 3c**), whereas MC dropout has 0.893 (**Fig. 3b**) and 5-fold ensemble has 0.890 (**Fig. 3a**). Evidential regression has the lowest _MDAE_ of 0.391 eV, suggesting that this UQ method is the most accurate on non-outlier data. Generally, 5-fold ensemble and MC dropout perform similarly across all quantitative metrics of accuracy. Although each UQ method (**Fig. 3a-c**) differs in its approach for estimating the uncertainty--ensembling involves segregating the training data, MC dropout involves modifying the model architecture, and evidential regression requires modifying both the model architecture and optimization--the accuracy is quite similar for all three methods. To explore why, we examine if the models are overfit or underfit. Consider MC dropout; this method is used to prevent model overfitting as specified by the dropout rate hyperparameter [65]. A higher dropout rate drops out more nodes, thereby making larger adjustments away from overfitting. However, we do not observe improvements in test accuracy as the dropout rate increases, suggesting that the baseline CGCNN model that we apply UQ methods onto is not overfitting to the training data--but rather underfitting relative to the chemical space represented by the training data, **Fig. S1**. A plausible explanation for the underfitting of the three UQ methods with a CGCNN is that the OC20 dataset is very sparse; roughly 0.07% of possible calculations were performed when considering the dataset constraints on adsorbates, surfaces, and bulk compositions [33]. Consequently, our CGCNN model architecture may underfit the chemical space represented by the full IS2RE training dataset. To achieve higher accuracy, more state-of-the-art deep neural network models and representations are needed, which is an area of on-going research [12].
Figure 3: **Parity plots used to assess accuracy of adsorption energy predictions using CGCNN.** Accuracy results shown for **(a)** 5-fold ensemble, **(b)** 1,000-sample MC dropout, and **(c)** evidential regression with regularization weight \(\lambda=0.05\). Hexagonal binning was used. Dashed line (red) is the parity line. The mean absolute error (MAE), root mean squared error (RMSE), median absolute error (MDAE), mean absolute relative percent distance (MARPD), coefficient of determination (\(R^{2}\)) and correlation coefficient (\(R\)) are reported inset. Histograms of the model predictions are shown on the same scale inset (100 bins) with the mean and standard deviation of the histogram given inset.
We explored the sensitivity of the hyperparameter choice \(\lambda\) for evidential regression. **Table S2** contains the test MAE results for evidential regression across varying values of the hyperparameter \(\lambda\). The test MAE across \(\lambda\) fluctuated at most by 0.48% from any given measurement, demonstrating that \(\lambda\) can be used to tune the size of the uncertainty intervals for this dataset and model without appreciable changes in the average accuracy of adsorption energy predictions. Based on this sensitivity analysis, we chose \(\lambda=0.05\) for all subsequent experiments because this regularization weight gives the most conservative uncertainty intervals of the non-zero weights tested, **Fig. S3b**. We construct violin box plots to assess both the dispersion and sharpness of each UQ method, as displayed in **Fig. 4**. High dispersion is desirable because one can define heuristics or rules based on the uncertainty to select catalysts for further study. For example, one might want to segregate very uncertain adsorbate/alloy systems from the rest for further study, but there may not be very uncertain systems to segregate if the dispersion is small. MC dropout is the most disperse UQ method on an absolute scale based on having the largest \(IQR\). 5-fold ensembling is the least disperse based on having the smallest \(IQR\); for this UQ method, defining uncertainty heuristics to select adsorbate/alloy systems for further study could be difficult--at least if one wants to select systems from the bulk of the uncertainty distribution.
Evidential regression has dispersion in-between MC dropout and 5-fold ensemble in terms of \(IQR\), although we found the dispersion of evidential regression to vary substantially with varying values of \(\lambda\), **Fig. S3b**. The coefficient of variation analysis demonstrates that the evidential regression results are the most disperse relative to the distribution mean. Evidential regression has a \(Cv\) of 2.13, whereas 5-fold ensemble and MC dropout have a \(Cv\) of 0.786 and 0.779, respectively. Although MC dropout is the most disperse in absolute terms of the distribution spread, the dispersion of values around the distribution mean is poor. We conclude that evidential regression is the most disperse UQ method based on \(IQR\) and \(Cv\) collectively. Of the methods considered, evidential regression is the least sharp (i.e., has the highest sharpness value); thus, for this dataset evidential regression gives the least confident adsorption energy predictions on average. Uncertainty estimates of the adsorption energy predictions can range from 0 eV upwards for each UQ method, so we expected MC dropout to be the least sharp due to having the largest spread as measured by \(IQR\), but this is not the case. We observed that evidential regression gives increasingly large--sometimes even enormous--uncertainty estimates for outliers as \(\lambda\) increases, which skews the sharpness. Whereas evidential regression is the least sharp (0.780 eV), 5-fold ensemble is the sharpest (0.168 eV). Therefore, the 5-fold ensemble gives the most confident adsorption energy predictions on average. We summarize \(IQR\), \(Cv\), and sharpness for each UQ method in **Table S1**. Dispersion as measured by \(Cv\) for the UQ methods does not change after scalar recalibration. Before recalibration, MC dropout is the most disperse as measured by \(IQR\), but this UQ method has poor dispersion as measured by \(Cv\). While evidential regression does not have a better \(IQR\) before recalibration, this method is generally more disperse before and after recalibration when \(IQR\) and \(Cv\) are collectively considered. The effects of recalibration are discussed further below. Although it is important for a UQ method to make confident predictions on average, one needs to ensure that the associated uncertainty estimate \(\sigma\) for each adsorption energy prediction is neither overconfident nor underconfident, such that we can interpret the uncertainty estimate similarly to a confidence interval--an interval that reliably suggests where the actual ground-truth adsorption energy could exist. In other words, it is important to ensure that the UQ method is calibrated.
Fig. 4: **Violin box plots to assess dispersion and sharpness for each uncertainty distribution.** Results shown for **(a)** 5-fold ensemble, **(b)** 1,000-sample MC dropout, and **(c)** evidential regression with \(\lambda=0.05\). Shown inside each box plot: \(Q1\) (\(25^{\text{th}}\) percentile), \(Q2\) (\(50^{\text{th}}\) percentile), \(Q3\) (\(75^{\text{th}}\) percentile), and sharpness (dotted red line). Dispersion as measured by \(IQR\) and \(Cv\) is reported in **Table S1**. Outliers are not shown for visual clarity of the bulk distribution but are given in **Fig. S5**.
We assess average calibration across the test set in **Fig. 5** before scalar recalibration. Each calibration curve has an associated miscalibration area, which refers to the area between any given calibration curve and the diagonal.
A higher miscalibration area means worse model calibration (i.e., the reliability of using an uncertainty estimate as a confidence interval is weakened). The average calibration varies significantly between the three UQ methods tested, **Fig. 5a**. The data shows that MC dropout provides the most calibrated adsorption energy predictions on average by having the smallest miscalibration area (0.20). Oppositely, 5-fold ensemble gave the least calibrated adsorption energy predictions on average by having the largest miscalibration area (0.38). With a miscalibration area of 0.34, evidential regression performed similarly to the 5-fold ensemble. We observe a noticeable change in the degree of calibration for MC dropout with an increasing number of samples taken to construct the uncertainty estimate, **Fig. 5b**. 5-sample uncertainty estimates are poorly calibrated, giving a miscalibration area of 0.36. However, 1,000-sample uncertainty estimates reveal that MC dropout is the most calibrated UQ method tested before recalibration. The takeaway is straightforward: one needs to gather enough samples for the uncertainty estimates of MC dropout to converge. For our experimental setup, we found that a sample size of 50 nearly converges MC dropout, **Fig. 5b**. Herein, we report the uncertainty of each adsorption energy prediction using a sample size of 1,000 to ensure convergence.
Fig. 5: **Calibration curves to assess average calibration for each UQ method.** **(a)** Average calibration comparison between 5-fold ensemble, 1,000-sample MC dropout, and evidential regression (\(\lambda=0.05\)). **(b)** The effect of sample size on average calibration of MC dropout. **(c)** The effect of hyperparameter \(\lambda\) on average calibration of evidential regression (epistemic uncertainty).
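The sampling loop behind these MC dropout estimates is short; the following PyTorch-style sketch assumes only that the trained model contains `torch.nn.Dropout` modules, with all CGCNN specifics omitted.

```python
import torch

def mc_dropout_predict(model, x, n_samples=1000):
    # Keep dropout stochastic at inference time and sample repeated forward passes
    model.eval()
    for m in model.modules():
        if isinstance(m, torch.nn.Dropout):
            m.train()                    # re-enable only the dropout layers
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    # Sample mean is the prediction mu; sample spread is the uncertainty sigma
    return samples.mean(dim=0), samples.std(dim=0)
```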
The data in **Fig. 5c** demonstrates the effect of varying \(\lambda\) on the average calibration of evidential regression. Values of 0.0, 0.05, 0.1, 0.15, and 0.2 for \(\lambda\) were tested. Evidential regression becomes monotonically more calibrated with increasing \(\lambda\) for the values tested. For \(\lambda\) values above 0.2, the model struggled to converge onto a finite uncertainty estimate, which we discuss in the SI. Interestingly, the calibration curves between all UQ methods were largely of a similar character--most remain below the diagonal. Because of the method of construction, calibration curves below the diagonal indicate overconfident adsorption energy predictions. It is known that increasing \(\lambda\) generally inflates the uncertainty estimate of a prediction [50], so it is expected that the evidential regression calibration will improve as overconfident, small uncertainty estimates become appropriately inflated. If a UQ method does not give calibrated and therefore reliable materials predictions as implemented, the method can be recalibrated. The results in **Fig. 6** show the associated calibration curves on the test set before and after recalibrating each UQ method with scalar recalibration. In general, all three UQ methods saw a sizable decrease in average miscalibration area after recalibration. The data in **Fig. 6a** shows that 5-fold ensembling gives the most calibrated adsorption energy predictions on average based on having the smallest miscalibration area. Despite having the highest miscalibration area after recalibration, MC dropout as shown in **Fig. 6b** still gives competitively calibrated adsorption energy predictions on average. The calibration performance of evidential regression displayed in **Fig. 6c** is close to that of MC dropout. All three UQ methods give both overconfident and underconfident uncertainty estimates on average after recalibration, but the degree of miscalibration is less severe. All UQ methods we trained have average calibration curves close to the diagonal for the test set after recalibration, which demonstrates that each UQ method is well-calibrated on average for many adsorption energy predictions. Having high calibration on average is useful because selecting adsorbate/alloy systems for further study based on the adsorption energy uncertainty (e.g., in active learning workflows) should be more reliable compared to if the model is highly miscalibrated. **Table 1** reports the mean interval score to assess the tightness of each UQ method before and after recalibration. Before recalibration, evidential regression gives the most appropriately tight uncertainty intervals on average despite not being the most calibrated UQ method. For the predictions that happen to be calibrated, the associated uncertainty intervals of evidential regression are the leanest on average. After recalibration, 5-fold ensemble is the tightest UQ method on average.
\begin{table} \begin{tabular}{|c|c|c|c|} \hline **UQ Method** & **5-fold Ensemble** & **1,000-sample MC Dropout** & **Evidential Regression (\(\lambda=0.05\))** \\ \hline **Interval Score** & 5.307 & 4.131 & 3.901 \\ \hline **Interval Score (Recalibrated)** & 2.907 & 3.472 & 3.550 \\ \hline \end{tabular} \end{table} Table 1: Mean interval score for each UQ method. A lower score indicates better tightness.
UQ metrics holistically provide a means of comparing the shapes of uncertainty distributions across methods. For example, dispersion quantifies the spread of uncertainty, and sharpness quantifies a notion of average uncertainty. When considering these metrics in their entirety, no UQ method is clearly advantageous before recalibration. MC dropout is the most calibrated on average, which might lead one to believe that this method is superior, but evidential regression outperforms it on the secondary criteria of accuracy, dispersion, and tightness. After recalibration, all UQ methods are comparably calibrated on average, but evidential regression outperforms in measures of accuracy and dispersion. We emphasize that the UQ metrics discussed provide a portfolio from which to holistically compare methods and robustly assess trustworthiness; any single metric on its own can give a misleading conclusion of trustworthiness. Thus far, we have compared three UQ methods in terms of the five UQ metrics for our challenging benchmark, the OC20 dataset. To demonstrate how these modeling concepts can be used in practice, we report our uncertainty-guided enumeration of materials for a hydrogen adsorbate in **Fig. 7**. We subselect 609 hydrogen adsorbate/catalyst systems from the OC20 Val-ID test dataset used to assess each UQ method and filter these systems based on defined search criteria. Our search criteria for enumerating materials involve defining an adsorption energy range and an uncertainty limit to select systems. For our case study, we select systems with hydrogen adsorption energies in the range of -0.1 to 0.1 eV. Such a range may be useful, for example, for studying materials for the hydrogen evolution reaction because hydrogen adsorption energy is a descriptor of catalytic activity [66, 67].
Figure 6: Calibration curves to assess recalibration using scalar recalibration. Calibration curves on the test set are shown for **(a)** 5-fold ensemble, **(b)** 1,000-sample MC dropout, and **(c)** evidential regression (\(\lambda=0.05\)) before and after recalibration. The dashed red line denotes perfect average calibration. The miscalibration area before and after recalibration is shown inset.
From these systems, we only choose those with an associated uncertainty \(\sigma\) of 0.05 eV or smaller. This adsorption energy range was chosen because each UQ method has inaccuracy due to the performance of the underlying CGCNN model. We decided to select a conservative, narrow search range to select only those predictions that should be confident. With our choice of \(\sigma\), a \(\mu\pm 3\sigma\) interval implies that the ground-truth adsorption energy should often be at most 0.15 eV away from the ML prediction--the reliability of which becomes less consistent with larger miscalibration. Any material satisfying these criteria is enumerated. For this case study, our goal is not to rely on our ML adsorption energy predictions for the best chemical accuracy; rather, we demonstrate uncertainty estimates that are at least reliable and trustworthy enough to effectively enumerate materials. Additionally, from a modeling standpoint, we intend to demonstrate the increase in trustworthiness for predictions made with our model on the challenging OC20 dataset before and after recalibration. We observe improved reliability in the uncertainty estimates after recalibration. Before recalibration, none of the UQ methods enumerated materials that matched our search criteria. After recalibration, evidential regression gave 15 catalyst materials matching the search criteria (**Fig. 7**). Materials that have the same bulk composition differ by adsorption site and Miller index. Crystallographic differences between materials such as K\({}_{2}\) and K\({}_{8}\) are given in **Table S5**. A UQ method should propose materials that match the search criteria for an application; however, these proposals should be honest. Six of these materials' predictions are honest in that their associated uncertainty interval of \(\mu\pm 3\sigma\) successfully captures the ground-truth values. The rest of the proposed materials were dishonest. This lackluster overall screening performance is to be expected on the challenging, sparse OC20 dataset. First and foremost, the trained CGCNN model is not highly accurate as measured by MAE, so many of the adsorption energy predictions are far away from the ground-truth values before even applying the UQ methods. A more accurate model would place the predictions closer to the ground truth, demonstrating the importance of recent efforts to develop even more accurate deep NN models [28, 32, 35, 36]. Given that the OC20 dataset is challenging, we note that the CGCNN architecture may have better accuracy on less sparse, narrowly selected datasets, as we observe in **Table S3**. For evidential regression (\(\lambda=0.05\)), we report the element-wise sharpness and accuracy as measured by MAE for catalyst materials in **Fig. S7**, finding that periodic groups 3-5 are particularly uncertain and inaccurate. Additionally, we report uncertainty distributions of evidential regression on a per-adsorbate basis in **Fig. S8**, finding that a handful of adsorbates (e.g., *NO\({}_{2}\)NO\({}_{2}\)) are particularly uncertain for this dataset, model, and UQ method.
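The enumeration step itself is a simple filter over predicted means and recalibrated uncertainties; a minimal sketch with hypothetical names follows.

```python
import numpy as np

# Hypothetical enumeration filter: keep systems whose predicted adsorption
# energy lies in [-0.1, 0.1] eV with a recalibrated uncertainty <= 0.05 eV.
def enumerate_candidates(systems, mu, sigma, e_lo=-0.1, e_hi=0.1, s_max=0.05):
    keep = (mu >= e_lo) & (mu <= e_hi) & (sigma <= s_max)
    return [s for s, k in zip(systems, keep) if k]
```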
An improvement in model accuracy should plausibly make any given search criteria more forgiving--assuming that the UQ method applied onto the model is calibrated. We deduce this point from the effective calibration assumption, which necessitates that the uncertainty \(\sigma\) of any prediction is probabilistically likely to grow or shrink with the accuracy of that prediction. Nevertheless, it is noteworthy that we could identify trustworthy materials predictions as shown in **Fig. 7**, which highlights the importance of recalibration. From a purely data-driven viewpoint, we demonstrate that scalar recalibration did improve a UQ method's ability to enumerate DFT-accurate adsorption energies of these materials, although we acknowledge that many of these materials found in the OC20 dataset are not commonly used catalysts for reactions involving hydrogen. In a real catalyst screening study, one might consider additional criteria outside of adsorption energies, such as stability or selectivity.
Fig. 7: **Materials compositions from the OC20 dataset that satisfy the enumeration criteria formulated for a hydrogen adsorbate.** Materials were selected using the evidential regression (\(\lambda=0.05\)) uncertainty estimate after scalar recalibration. Sr\({}_{2}\)Cd\({}_{4}\)Pd\({}_{2}\) (Sr: green, Cd: purple, Pd: grey) and Cs\({}_{4}\) (Cs: teal) are visualized for demonstration. The hydrogen adsorbate is not shown. Materials of the same bulk composition differ by adsorption site and Miller index. Details about material differences are given in **Table S5**.
Recalibration can be considered a postprocessing step for a UQ method that can improve an initially poor result, which we demonstrate in this discussion. However, recalibration may not be necessary and may even introduce some drawbacks to the experimental design, depending on the method. For scalar recalibration, all uncertainty estimates \(\sigma\) are multiplied by a constant, which has the effect of multiplying the sharpness by said constant. Because sharpness is akin to the average uncertainty of a prediction, scalar recalibration scales the average magnitude of uncertainty, thereby making it more difficult to effectively define narrow \(\sigma\) screening criteria if the scaling enlarges the sharpness. For our demonstration of enumerating hydrogen adsorbate/catalyst systems, we rationalize the increase in the number of enumerated systems after recalibration based on two reasons. First, each UQ method makes overconfident adsorption energy predictions, so inflating all the uncertainty estimates \(\sigma\) for any UQ method by a scalar made the model more appropriately calibrated--thereby improving the confidence-interval reliability of each uncertainty estimate, as supported by **Fig. 7**. Secondly, we define our uncertainty search criterion as \(\sigma\leq 0.05\) eV. Evidential regression (\(\lambda=0.05\)) shown in **Fig. 4** is more forgiving for this screening criterion in that its uncertainty estimates \(\sigma\) are much less than 0.05 eV before recalibration, such that inflating the uncertainty intervals did not exceed the \(\sigma\leq 0.05\) eV criterion after recalibration, unlike many systems for 5-fold ensemble and MC dropout.
Although the average recalibration results in **Fig. 6** appear excellent and the recalibration did improve the results of our material enumeration case study for the hydrogen adsorbate, a more nuanced analysis of recalibration on the scale of individual predictions was performed by estimating the calibration density. The adversarial group calibration results in **Fig. 8** allow us to estimate the upper-bound miscalibration associated with the calibration density across group sizes. We only report here adversarial calibration of group sizes up to 2% of the test set size because we otherwise observe asymptotic miscalibration. Generally, the miscalibration across UQ methods monotonically improves across group sizes after recalibrating. This consistency in behavior holds near the limiting case of a group size of zero, that is, individual calibration of predictions. However, directly assessing individual calibration is often unverifiable for finite dataset sizes, despite being a strict calibration constraint in the literature [61]. Although we previously calculate average calibration curves across the entire test set, a UQ method is truly calibrated only if all individual predictions are themselves calibrated. The recalibration results demonstrate significant improvement in the average calibration (i.e., calibration corresponding to a 100% group size), yet the limiting behavior near the origin in **Fig. 8** suggests that there is only a mild improvement in the individual calibration for the most miscalibrated predictions. Ultimately, the analysis in **Fig. 8** helps rationalize why many of the uncertainty intervals in the **Fig. 7** demonstration do not capture the ground truth. Moreover, we highlight a subtlety: the dramatic average recalibration improvements shown in **Fig. 6** can be misleading and are, in actuality, more modest, so the interpretation of average calibration needs to be handled carefully for applications.
Fig. 8: **Adversarial group calibration to assess calibration density.** The miscalibration area of the most miscalibrated group is shown for the corresponding group size. Shown for 5-fold ensemble (blue circle), 1,000-sample MC dropout (orange square), and evidential regression (\(\lambda=0.05\), green pentagon) before (dotted line) and after (solid line) scalar recalibration. Shaded regions represent the standard error. Trends continue to asymptote past a 2.00% group size, as shown in **Fig. S6**.
## 4 Conclusions
We compare three UQ methods to frame future analyses in high-throughput materials discovery by training a state-of-the-art CGCNN on the challenging OC20 dataset. We base our comparison on the UQ metrics, which quantify not only the accuracy of predictions but also the size, spread, and reliability of the associated uncertainty intervals. Before recalibration, evidential regression is advantageously found to be the most accurate, disperse, and tight UQ method, but MC dropout is the most calibrated method on average. After recalibration, evidential regression is found to be the most accurate and disperse, as well as competitively calibrated. Additionally, evidential regression provides high utility by enabling tunable uncertainty estimates that are output at prediction time, unlike \(k\)-fold ensembles and MC dropout. This tunability allows for more flexibility in addressing application-specific shortcomings in sharpness, dispersion, and tightness.
For future high-throughput studies using distribution-specific UQ, we recommend evidential regression because of its tunability, demonstrated trustworthiness, and computational tractability on this challenging dataset benchmark. Through the enumeration of OC20 materials in the case of a hydrogen adsorbate, scalar recalibration demonstrably enhances the trustworthiness of evidential regression. We conjecture, based on the experimental results, that effective recalibration is a promising route to making distribution-specific UQ more accessible across materials studies. Neural networks can perform surprisingly poorly under domain shift [68], so it is unclear how each UQ method would perform across different model architectures and complex material spaces. With effective recalibration that has little negative effect on the UQ metrics, the choice of UQ method would be more arbitrary. Researchers would be able to prioritize choosing the method that is the most computationally tractable and recalibrate accordingly. The UQ metrics and analysis discussed can serve as a frame of reference for this endeavor.
## Associated Content
The Supporting Information is available free of charge on the journal website. Data and code supporting the results of this study are available through the following URL: [https://github.com/CGruich](https://github.com/CGruich). The Supporting Information contains details on UQ model dispersion and sharpness; the effect of dropout rate on the CGCNN; aleatoric and epistemic uncertainty analysis for hyperparameter \(\lambda\); supplementary adversarial group calibration analysis; material/adsorbate outliers.
## Author Information
### Corresponding Author
*Email: [email protected]
### Author Contributions
Conceptualization, C.G., B.R.G.; Methodology, C.G.; Software, C.G.; Formal analysis, C.G., V.M.; Investigation, C.G., V.M.; Resources, C.G., B.R.G.; Data Curation, C.G.; Writing - Original Draft, C.G.; Writing - Review & Editing, C.G., V.M., Y.W., B.R.G.; Visualization, C.G.; Supervision, B.R.G.; Project Administration, B.R.G.; Funding Acquisition, B.R.G.
### Notes
The authors declare no competing financial interest.
## Acknowledgment
C.G. acknowledges support from the NSF Graduate Research Fellowship Program. B.R.G. acknowledges support from the MICDE Catalyst Grant from the Michigan Institute for Computational Discovery and Engineering. This research used resources of the National Energy Research Scientific Computing Center, a U.S. Department of Energy Office of Science User Facility, operated under Contract No. DE-AC02-05CH11231. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship under Grant No. DGE 1841052.
## Abbreviations
ML, machine learning; CGCNN, crystal graph convolutional neural network; DER, deep evidential regression; OCP, Open Catalyst Project; OC20, Open Catalyst 2020 Dataset; NN, neural network; DFT, density functional theory; MAE, mean absolute error.
2309.01488
On the use of Mahalanobis distance for out-of-distribution detection with neural networks for medical imaging
Implementing neural networks for clinical use in medical applications necessitates the ability for the network to detect when input data differs significantly from the training data, with the aim of preventing unreliable predictions. The community has developed several methods for out-of-distribution (OOD) detection, within which distance-based approaches - such as Mahalanobis distance - have shown potential. This paper challenges the prevailing community understanding that there is an optimal layer, or combination of layers, of a neural network for applying Mahalanobis distance for detection of any OOD pattern. Using synthetic artefacts to emulate OOD patterns, this paper shows the optimum layer to apply Mahalanobis distance changes with the type of OOD pattern, showing there is no one-fits-all solution. This paper also shows that separating this OOD detector into multiple detectors at different depths of the network can enhance the robustness for detecting different OOD patterns. These insights were validated on real-world OOD tasks, training models on CheXpert chest X-rays with no support devices, then using scans with unseen pacemakers (we manually labelled 50% of CheXpert for this research) and unseen sex as OOD cases. The results inform best-practices for the use of Mahalanobis distance for OOD detection. The manually annotated pacemaker labels and the project's code are available at: https://github.com/HarryAnthony/Mahalanobis-OOD-detection.
Harry Anthony, Konstantinos Kamnitsas
2023-09-04T09:51:33Z
http://arxiv.org/abs/2309.01488v1
On the use of Mahalanobis distance for out-of-distribution detection with neural networks for medical imaging
###### Abstract
Implementing neural networks for clinical use in medical applications necessitates the ability for the network to detect when input data differs significantly from the training data, with the aim of preventing unreliable predictions. The community has developed several methods for out-of-distribution (OOD) detection, within which distance-based approaches - such as Mahalanobis distance - have shown potential. This paper challenges the prevailing community understanding that there is an optimal layer, or combination of layers, of a neural network for applying Mahalanobis distance for detection of any OOD pattern. Using synthetic artefacts to emulate OOD patterns, this paper shows the optimum layer to apply Mahalanobis distance changes with the type of OOD pattern, showing there is no one-fits-all solution. This paper also shows that separating this OOD detector into multiple detectors at different depths of the network can enhance the robustness for detecting different OOD patterns. These insights were validated on real-world OOD tasks, training models on CheXpert chest X-rays with no support devices, then using scans with unseen pacemakers (we manually labelled 50% of CheXpert for this research) and unseen sex as OOD cases. The results inform best-practices for the use of Mahalanobis distance for OOD detection. The manually annotated pacemaker labels and the project's code are available at: [https://github.com/HarryAnthony/Mahalanobis-OOD-detection](https://github.com/HarryAnthony/Mahalanobis-OOD-detection)
Keywords: Out-of-distribution, Uncertainty, Distribution shift
## 1 Introduction
Neural networks have achieved state-of-the-art performance in various medical image analysis tasks. Yet their generalisation on data not represented by the training data - out-of-distribution (OOD) data - is unreliable [12, 21, 33]. In the medical imaging field, this can have severe consequences. Research in the field of OOD detection [26] seeks to develop methods that identify if an input is OOD, acting as a safeguard that informs the human user before a potentially failed model prediction affects down-stream tasks, such as clinical decision-making - facilitating safer application of neural networks for high-risk applications. One category of OOD detection methods uses an **external model for OOD detection**. These include _reconstruction models_ [1, 9, 20, 22, 27], which are trained on in-distribution (ID) data and are assumed to yield high reconstruction loss when reconstructing OOD data. Some approaches employ a _classifier_ to learn a decision boundary between ID and OOD data [26]. The boundary can be learned in an unsupervised manner, or supervised with exposure to pre-collected OOD data [11, 25, 29, 31]. Other methods use _probabilistic models_ [15] to model the distribution of the training data, and aim to assign low probability to OOD inputs. Another category comprises **confidence-based methods** that enable discriminative models trained for a specific task, such as classification, to estimate uncertainty in their prediction. Some methods use the network's softmax distribution, such as MCP [10], MCDropout [6] and ODIN [18], whereas others use the distance of the input to the training data in the model's latent space [17]. A commonly studied method of the latter category is Mahalanobis distance [17], possibly due to its intuitive nature.
The method has shown mixed performance in the literature, performing well in certain studies [7, 14, 24, 32] but less well in others [2, 28, 30]. Previous work has explored which layer of a network gives an embedding optimal for OOD detection [3, 17]. But further research is needed to understand the factors influencing its performance to achieve reliable application of this method. This paper provides several contributions towards this end: * Identifies that measuring Mahalanobis distance at the last hidden layer of a neural network, as commonly done in the literature, can be sub-optimal. * Demonstrates that different OOD patterns are best detectable at different depths of a network, implying that there is no single layer to measure Mahalanobis distance for optimal detection of _all_ OOD patterns. * The above suggests that optimal design of OOD detection systems may require multiple detectors, at different layers, to detect different OOD patterns. We provide evidence that such an approach can lead to improvements. * Creates a benchmark for OOD detection by manually annotating pacemakers and support devices in CheXpert [13].
## 2 Methods
**Primer on Mahalanobis score, \(\mathcal{D}_{\mathcal{M}}\):** Feature extractor \(\mathcal{F}\) transforms input \(\mathbf{x}\) into an embedding. \(\mathcal{F}\) is typically a section of a neural network pre-trained for a task of interest, such as disease classification, from which feature maps \(h(\mathbf{x})\) are obtained. The mean of the feature maps \(h(\mathbf{x})\) is used as embedding vector \(\mathbf{z}\): \[\mathbf{z}\in\Re^{M}=\frac{1}{D^{2}}\sum_{D}\sum_{D}h(\mathbf{x}),\quad\text{where }h(\mathbf{x})\in\Re^{D\times D\times M} \tag{1}\] for \(M\) feature maps with dimensions \(D\times D\). Distance-based OOD methods assume the embedded in-distribution (ID) and OOD data will deviate in latent space, ergo being separable via a distance metric. In the latent space \(\Re^{M}\), the \(N_{c}\)_training_ data points of class c have mean and covariance matrix \[\mathbf{\mu_{\mathrm{c}}}=\frac{1}{N_{c}}\sum_{i=1}^{N_{c}}\mathbf{z_{i_{c}}},\ \ \mathbf{\Sigma_{\mathrm{c}}}=\frac{1}{N_{c}}\sum_{i=1}^{N_{c}}(\mathbf{z_{i_{c}}}-\mathbf{\mu_{\mathrm{c}}})\ (\mathbf{z_{i_{c}}}-\mathbf{\mu_{\mathrm{c}}})^{T} \tag{2}\] where \(\mathbf{\mu_{\mathrm{c}}}\) is a vector of length \(M\) and \(\mathbf{\Sigma_{\mathrm{c}}}\) is an \(M\times M\) matrix. The Mahalanobis distance \(\mathcal{D_{M_{c}}}\) between the embedding \(\mathbf{z}\) of a _test_ data point and the _training_ data of class c can be calculated as a sum over \(M\) dimensions [19]. The **Mahalanobis score** \(\mathcal{D_{M}}\) for OOD detection is defined as the minimum Mahalanobis distance between the test data point and the class centroids of the training data, \[\mathcal{D_{M_{c}}}(\mathbf{x})=(\mathbf{z}-\mathbf{\mu_{\mathrm{c}}})\ \mathbf{\Sigma_{\mathrm{c}}}^{-1}\ (\mathbf{z}-\mathbf{\mu_{\mathrm{c}}})^{T},\ \ \ \ \ \mathcal{D_{M}}(\mathbf{x})=\min_{c}\left\{\mathcal{D_{M_{c}}}(\mathbf{x})\right\}. \tag{3}\] Threshold \(t\), chosen empirically, is then used to separate ID (\(\mathcal{D_{M}}\)\(<\)\(t\)) from OOD data (\(\mathcal{D_{M}}\)\(>\)\(t\)). The score \(\mathcal{D_{M}}\) is commonly measured at a network's last hidden layer (LHL) [2, 4, 5, 23, 28, 30]. To analyse the score's effectiveness with respect to where it is measured, we extracted a separate vector \(\mathbf{z}\) after each network module (Fig. 1).
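As an illustration of Eqs. 1-3, a minimal NumPy sketch of fitting the class statistics and scoring a test embedding could read as follows (the pseudo-inverse is an assumption added here for numerical stability, not a detail from the paper):

```python
import numpy as np

def fit_class_stats(Z_train, labels):
    # Per-class mean and covariance of the training embeddings (Eq. 2); each
    # row of Z_train is a vector z, i.e. the spatial mean of the feature maps
    # h(x) from Eq. 1 (for instance h.mean(axis=(0, 1)) for a D x D x M map).
    stats = {}
    for c in np.unique(labels):
        Zc = Z_train[labels == c]
        mu = Zc.mean(axis=0)
        cov = (Zc - mu).T @ (Zc - mu) / len(Zc)
        stats[c] = (mu, np.linalg.pinv(cov))  # pseudo-inverse for stability
    return stats

def mahalanobis_score(z, stats):
    # Minimum Mahalanobis distance to any class centroid (Eq. 3)
    return min((z - mu) @ inv_cov @ (z - mu) for mu, inv_cov in stats.values())
```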
Herein, a _module_ refers to a network operation: convolution, batch normalisation (BN), ReLU, addition of residual connections, pooling, flatten. Stats \(\mathbf{\mu_{\mathrm{c}}}^{\ell}\) and \(\mathbf{\Sigma_{\mathrm{c}}}^{\ell}\) (Eq. 2) of the training data were measured after each module \(\ell\), and for each input an OOD score \(\mathcal{D_{M}^{\ell}}\) was calculated per module (Eq. 3). **Weighted combination:** Weighted combination of Mahalanobis scores \(\mathcal{D_{M}^{\ell}}\), measured at different layers \(\ell\), was developed [17] to improve OOD detection: \[\mathcal{D_{M,\mathit{comb}}}(\mathbf{x})=\sum_{\ell}\alpha_{\ell}\ \mathcal{D_{M}^{\ell}}(\mathbf{x}), \tag{4}\] using \(\alpha_{\ell}\in\Re\) to down-weight ineffective layers. Coefficients \(\alpha_{\ell}\) are optimised using a logistic regression estimator on pre-collected OOD data [17]. **Fast gradient sign method (FGSM) [8, 17]:** Empirical evidence showed that the rate of change of Mahalanobis distance with respect to a small input perturbation is typically greater for ID than OOD inputs [17, 18]. Therefore, perturbations \(\mathbf{x}^{\prime}=\mathbf{x}-\varepsilon\cdot\text{sign}(\nabla_{x}\mathcal{D}_{\mathcal{M}}(\mathbf{x}))\) of magnitude \(\varepsilon\) are added to image \(\mathbf{x}\), to minimise the distance \(\mathcal{D}_{\mathcal{M}_{c}}\) to the nearest class centroid. The Mahalanobis score \(\mathcal{D}_{\mathcal{M}}(\mathbf{x}^{\prime})\) of the perturbed image \(\mathbf{x}^{\prime}\) is then used for OOD detection.
Figure 1: (Left) Method to extract embeddings after a network module. (Right) Mahalanobis score \(\mathcal{D_{M}}\) of an input to the closest training class centroid.
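A minimal PyTorch-style sketch of this FGSM step is given below; it assumes a differentiable implementation of \(\mathcal{D}_{\mathcal{M}}\) (`score_fn`) and is illustrative rather than the authors' code.

```python
import torch

def fgsm_mahalanobis_score(feature_extractor, score_fn, x, eps):
    # FGSM step of Sec. 2: nudge x to *decrease* the Mahalanobis score, then
    # re-score; score_fn must be a differentiable (torch) version of D_M.
    x = x.detach().clone().requires_grad_(True)
    score = score_fn(feature_extractor(x))
    score.backward()
    x_prime = x - eps * x.grad.sign()      # x' = x - eps * sign(grad_x D_M(x))
    with torch.no_grad():
        return score_fn(feature_extractor(x_prime))  # D_M(x') used for detection
```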
In the first setting, studied here, we used scans containing either Cardiomegaly or Pneumothorax. We trained a ResNet18 on 90% of these images to classify between the two classes (ID task), and held-out 10% of the data as ID test cases. We generated an OOD test set by adding a synthetic artefact at a random position to these held-out images. **Square artefact:** Firstly, grey squares, of sizes 10, 7.5 and 5 % of the image area, were introduced to create the OOD cases. We processed ID and OOD data, Figure 2: a) Visual and b) quantitative summary of the synthetic (setting 1) and real (setting 2 & 3) ID and OOD data used to evaluate OOD detection performance. measured their \(\mathcal{D_{M}}\) after every module in the network and plotted the AUROC score in Fig. 3. We emphasize the following observations. The figure shows that larger square artefacts are easier to detect, with this OOD pattern being easier to detect in earlier layers. Moreover, we observed that AUROC is poor at the last hidden layer (LHL), which is a common layer to apply \(\mathcal{D_{M}}\) in the literature [2, 4, 5, 23, 28, 30]. The performance of this sub-optimal configuration may be diverting the community's attention, missing the method's true potential. The results also show AUROC performance in general improves after a ReLU module, compared to the previous convolution and BN of the corresponding layer. Similar results were found with VGG16 but not shown due to space constraints. **Ring artefact**: The experiments were repeated with a white ring as the synthetic artefact, and results were compared with the square artefact (Fig. 4). The figure shows the AUROC for different OOD patterns peak at different depths of the network. The figure shows the layers and optimised linear coefficients \(\alpha_{l}\) for each artefact for \(\mathcal{D_{M,comb}}\) (Eq. 4), highlighting that the ideal weighting of distances for one OOD pattern can cause a degradation in the performance for another, there is no single weighting that optimally detects both patterns. As the types of OOD patterns that can be encountered are unpredictable, the idea of searching for an optimal weighting of layers may be ill-advised - implying a different application of this method is required. ## 4 Investigation of the use of \(\mathcal{D_{M}}\) with Real Artefacts To create an OOD benchmark, we manually labelled 50% of the frontal scans in CheXpert based on whether they had a) no support device, b) any support Figure 3: AUROC (mean of 3 seeds) for Mahalanobis score over the modules of ResNet18 for synthetic square artefacts of size 10% (purple), 7.5% (green) and 5% (blue) of the image. The module types of ResNet18 are visualised, showing AUROC is typically improved after a ReLU module. The downsample operations are shown by dashed grey lines. The AUROC at the last hidden layer (LHL) is highlighted in orange, exhibiting a comparatively poor performance. devices (e.g. central lines, catheters, pacemakers), c) definitely containing a pacemaker, d) unclear. This was performed because CheXpert's "support devices" class is suboptimal, and to separate pacemakers (distinct OOD pattern). Findings from the synthetic data were validated on two real OOD tasks (described in Fig.2). For the first benchmark, models were trained with scans with no support devices to classify if a scan had Pleural Effusion or not (ID task). Images containing pacemakers were then used as OOD test cases. 
For the second benchmark, models were trained on males' scans with no support devices to classify for Pleural Effusion, then females' scans with no support devices were used as OOD test cases. For both cases, the datasets were split using 5-fold cross validation, using 80% of ID images for training and the remaining 20% as ID test cases. **Where to measure \(\mathcal{D}_{\mathcal{M}}\):** Figure 5 shows the AUROC for unseen pacemaker and sex OOD tasks when \(\mathcal{D}_{\mathcal{M}}\) is measured at different modules of a ResNet18. The figure validates the findings on synthetic artefacts: applying \(\mathcal{D}_{\mathcal{M}}\) on the LHL can result in poor performance, and the AUROC performance after a ReLU module is generally improved compared to the preceding BN and convolution. Moreover, it shows that the unseen pacemaker and sex OOD tasks are more detectable at different depths of ResNet18 (modules 51 and 44 respectively). As real-world OOD patterns are very heterogeneous, this motivates an optimal OOD detection system having multiple detectors, each processing features of a network at different layers responsible for identifying different OOD patterns. **Compared methods:** The OOD detection performance of multi-branch Mahalanobis (MBM) was studied. MBM was also investigated using only distances after ReLUs, as experiments on synthetic OOD patterns suggested this may be beneficial. The impact of FGSM (Sec. 2) on MBM was also studied. This was compared to OOD detection baselines. The softmax-based methods used were MCP [10], MCDropout [6], Deep Ensembles [16] (using 3 networks per k-fold), Figure 4: AUROC (mean of 3 seeds) for Mahalanobis score over the modules of ResNet18 for synthetic grey square (purple) and white ring (orange) artefacts. The layers used for \(\mathcal{D}_{\mathcal{M},comb}\)[17] (Sec. 2) are highlighted in blue, and the weightings \(\alpha_{l}\) for each layer (Eq. 4) are shown on the right for each artefact. The results show the ideal weighting for one artefact causes a degradation in performance for another - implying there’s no one-fits-all weighting. ODIN [18] (optimising temperature \(\mathrm{T}\in[1,100]\) and perturbation \(\varepsilon\in[0,0.1]\)). The performance was also compared to distance-based OOD detection methods such as \(\mathcal{D}_{\mathcal{M}}\), \(\mathcal{D}_{\mathcal{M},comb}\) (\(\alpha_{l}=1\ \forall l\)), \(\mathcal{D}_{\mathcal{M}}\) with FGSM (using an optimised perturbation \(\varepsilon\in[0,0.1]\)) and \(\mathcal{D}_{\mathcal{M}}\) at the best performing network module. Performance of OOD methods for both ResNet18 and VGG16 are shown in Table 1. Results show that \(\mathcal{D}_{\mathcal{M},comb}\) without LHL outperforms the original weighted combination, showing that the LHL can have a degrading impact on OOD detection. MBM results for ResNet18 in Table 1 show that the OOD patterns are optimally detected at different branches of the network (branch 4 and 3 respectively), further motivating an ideal OOD detector using multiple depths for detecting different patterns. For VGG16 these specific patterns both peak in the deepest branch, but other patterns, such as synthetic squares, peak at different branches (these results are not shown due to space limits). MBM results show that if one could identify the optimal branch for detection of a specific OOD pattern, the MBM approach not only outperforms a sum of all layers, but also outperforms the best performing single layer for a given pattern in some cases. 
Deducing the best branch for detecting a specific OOD pattern has less degrees-of-freedom than the best layer, meaning an ideal system based on MBM would be easier to configure. The results also show MBM performance can be improved by only using ReLU modules, and optimised with FGSM. **Finding thresholds:** Using multiple OOD detectors poses the challenge of determining OOD detection thresholds for each detector. To demonstrate the potential in the MBM framework, a grid search optimised the thresholds for four OOD detectors of MBM using ReLU modules for ResNet18 trained on setting 3 (described in Fig.2). Thresholds were set to classify an image as OOD if any detector labeled it as such. Unseen pacemakers and unseen sex were used as OOD tasks to highlight that thresholds could be found to accommodate multiple OOD tasks. The performance of these combined OOD detectors was compared to \(\mathcal{D}_{\mathcal{M},comb}\) w/o LHL (\(\alpha_{l}=1\ \forall l\)) and \(\mathcal{D}_{\mathcal{M},comb}\) with optimised \(\alpha_{l}\) (Eq.4) where both require a single threshold, using balanced accuracy as the metric (table 2). Although optimising thresholds for all OOD patterns in complex settings would be challenging, these results show the theoretically attainable upper bound outperforms both single-layer or weighted combination techniques. Methods for configuring such multi-detector systems can be an avenue for future research. \begin{table} \begin{tabular}{c|c c c} \hline \hline \multirow{2}{*}{OOD detection method} & \multicolumn{3}{c}{OOD task (balanced accuracy \(\uparrow\))} \\ & \multicolumn{3}{c}{Both tasks} & Unseen sex & Pacemakers \\ \hline Mahal. score (equally weighted comb w/o LHL) & 67.64 & 64.63 & 70.37 \\ Mahal. score (weighted comb with optimised \(\alpha_{l}\)) & 68.14 & 64.89 & 70.90 \\ Multi-branch Mahal. (ReLU only) & **71.40** & **67.26** & **75.16** \\ \hline \hline \end{tabular} \end{table} Table 2: Balanced Accuracy for simultaneous detection of 2 OOD patterns, showing a multi-detector system can improve OOD detection over single-detector systems based on the optimal layer or optimal weighted combination of layers. ## 5 Conclusion This paper has demonstrated with both synthetic and real OOD patterns that different OOD patterns are optimally detectable using Mahalanobis score at different depths of a network. The paper shows that the common implementations using the last hidden layer or a weighted combination of layers are sub-optimal, and instead a more robust and high-performing OOD detector can be achieved by using multiple OOD detectors at different depths of the network - informing best-practices for the application of Mahalanobis score. Moreover, it was demonstrated that configuring thresholds for multi-detector systems such as MBM is feasible, motivating future work into developing an ideal OOD detector that encompasses these insights. #### 5.0.1 Acknowledgments. HA is supported by a scholarship via the EPSRC Doctoral Training Partnerships programme [EP/W524311/1]. The authors also acknowledge the use of the University of Oxford Advanced Research Computing (ARC) facility in carrying out this work ([http://dx.doi.org/10.5281/zenodo.22558](http://dx.doi.org/10.5281/zenodo.22558)).
2304.03288
VISHIEN-MAAT: Scrollytelling visualization design for explaining Siamese Neural Network concept to non-technical users
The past decade has witnessed rapid progress in AI research since the breakthrough in deep learning. AI technology has been applied in almost every field; therefore, technical and non-technical end-users must understand these technologies to exploit them. However, existing materials are designed for experts, while non-technical users need appealing materials that deliver complex ideas in easy-to-follow steps. One notable tool that fits such a profile is scrollytelling, an approach to storytelling that provides readers with a natural and rich experience at the reader's pace, along with in-depth interactive explanations of complex concepts. Hence, this work proposes a novel visualization design for creating a scrollytelling that can effectively explain an AI concept to non-technical users. As a demonstration of our design, we created a scrollytelling to explain the Siamese Neural Network for the visual similarity matching problem. Our approach helps create a visualization valuable for a short-timeline situation like a sales pitch. The results show that the visualization based on our novel design helps improve non-technical users' perception and machine learning concept knowledge acquisition compared to traditional materials like online articles.
Noptanit Chotisarn, Sarun Gulyanon, Tianye Zhang, Wei Chen
2023-04-04T08:26:54Z
http://arxiv.org/abs/2304.03288v1
# VISHIEN-MAAT: Scrollytelling Visualization Design for Explaining Siamese Neural Network Concept to Non-Technical Users
###### Abstract
The past decade has witnessed rapid progress in AI research since the breakthrough in deep learning. AI technology has been applied in almost every field; therefore, technical and non-technical end-users must understand these technologies to exploit them. However, existing materials are designed for experts, while non-technical users need appealing materials that deliver complex ideas in easy-to-follow steps. One notable tool that fits such a profile is scrollytelling, an approach to storytelling that provides readers with a natural and rich experience at the reader's pace, along with in-depth interactive explanations of complex concepts. Hence, this work proposes a novel visualization design for creating a scrollytelling that can effectively explain an AI concept to non-technical users. As a demonstration of our design, we created a scrollytelling to explain the Siamese Neural Network for the visual similarity matching problem. Our approach helps create a visualization valuable for a short-timeline situation like a sales pitch. The results show that the visualization based on our novel design helps improve non-technical users' perception and machine learning concept knowledge acquisition compared to traditional materials like online articles.
keywords: Story synthesis, Scrollytelling, Visual storytelling, Visualizing deep learning, Learning science
## 1 Introduction
In the modern world, artificial intelligence (AI) plays an important role in business, society, and everyday life. The main obstacle to AI adoption is that AI usually comes as a complex, black-box solution, which makes users skeptical since it is difficult to understand its concept, process, and task. Hence, building trust in AI is key to successful AI adoption, and that is what explainable AI (XAI) aims to achieve through improving the transparency and understanding of machine learning (ML) models to create safe and trustworthy ML models [13]. However, before XAI can be applied, one question arises: how does the ML model work and obtain its solution? This particular question is typically asked by customers, who are non-technical stakeholders of AI-related products. During a sales meeting, the business marketing team, whose members normally are non-technical users, struggles with this question because they have to convey a complex idea within the time limits, and the publicly available materials, like online articles and videos, are either too abstract to follow or too academic to comprehend. In such a situation, material with the ability to control the pace of the narrative and the level of detail of the content is of utmost importance; meanwhile, the material also has to be intriguing and appealing to the users and break down complex ideas into easy-to-follow steps. One possible solution is storytelling visualization, a well-known and influential means of communicating messages and engaging audiences, suitable for non-technical users. One popular technique, called 'scrollytelling', has recently gained traction as it engages readers more naturally and richly by unfolding an expressive visual story depending on their interactive inputs. It shows the changes in a website's content as users scroll through the page. The content usually consists of visual graphs with associated narrative writing, video/audio clips, and interactions [25].
Because scrollytelling gives users complete control of both the pace and the details of the visualization, it is a versatile tool for explaining complex content to various audiences, especially non-technical stakeholders. These properties make scrollytelling a practical tool for the business marketing team's situation, helping customers better grasp and absorb knowledge about AI concepts. One caveat in creating effective scrollytelling is that the visualization design must align with the content; otherwise, it may lead to faulty interpretation, overwhelming visualization, and/or user frustration. To avoid these problems, our work adopted two main concepts in formulating the scrollytelling and its interaction components: a) the scrollytelling visualization design concept and b) the configuration in-between the scrolling concept. The first concept ensures that the content is laid out so that the scrollytelling can be applied effectively, while the latter concerns how to make the interchange between the fine- and coarse-grained levels of description noticeable and intuitive.

To demonstrate our scrollytelling visualization design, the visual similarity matching problem is selected, as it has many applications, e.g., visual recommendation and visual search, and it is the task assigned to the business marketing team with whom we conducted our experiments. One of the popular techniques for the similarity matching task is the Siamese Neural Network (SNN), which has been implemented in AI solutions for real-world business problems [7]. In a sales pitch, the business marketing team normally gives a presentation and demo of the AI product, which only introduces the front end of the product. The team usually has trouble when there are questions about the back end, so the team needs an appealing tool, like the scrollytelling following our visualization design, to help introduce the SNN technique to the funders and management executives who are not familiar with the AI concept. Due to business confidentiality and to make this work user-friendly, the scrollytelling visualization illustrates the SNN concept through cat breed images instead of the product designs for which it is intended.

To verify the utility of our approach from the non-technical users' point of view, an observational study and feedback from the business marketing team are presented. Another important question is how the scrollytelling created using our visualization design compares against other mediums in terms of ML knowledge acquisition for non-technical users. Due to business restrictions, we cannot evaluate the executives or investors with the full questionnaire. Instead, we conducted experiments on another group of non-technical users: business IT students with no background in AI. We chose this user group because both investor and student users are on the receiving end, and both are unfamiliar with SNN concepts. Student participants were asked to use different mediums, such as scrollytelling and online articles, and took tests to assess their knowledge acquisition.

The contributions of this work include:

* A novel scrollytelling visualization design for explaining the SNN concept in the visual similarity matching problem context from real business scenarios.
* The visualization design effectively improves non-technical users' understanding of how the deep learning model works.
* A user study comparing the ML knowledge acquisition of non-technical end-users across different mediums such as scrollytelling and online articles.

The structure of this paper is as follows: a review of relevant literature in Section 2 and an explanation of the SNN model used to demonstrate our visualization design in Section 3. Then, the novel visualization design of the interactive scrollytelling is presented in Section 4. The evaluation and the case studies of the visualization are explained in Section 5. Section 6 discusses the aspects of the visualization that support learning and comprehension of the ML model. Finally, Section 7 summarizes the conclusions.

## 2 Related Work

### Visual Storytelling

A visual narrative, also known as visual storytelling, is a story told primarily using visual media, which makes it appealing to the audience and easy to follow. The story can be told through still photography, illustration, or video and supplemented with graphics, music, voice, and other audio; a visual narrative is any story told visually [3]. Two issues must be addressed to create an effective visual narrative: the conversion of data into a story for communication purposes, and narrative visualization. One of the methodologies for bridging the gap between gathering data and communicating is _story synthesis_ [4]. Story synthesis provides an easy-to-use framework for making sense of complex data and attempts to assist stakeholders in turning analytical results into actionable information. To tell a story out of the findings, the analyst must first describe them as _story slices_, which are structured information derived from the study's original data. The story slices are then arranged in the appropriate sequence to reveal the connections between the various pieces of data. As support for implementing the visual analytics system, the story slices should include system functions. As a result, through data-driven story slices and narratives built from synthesized content, story synthesis enables two-way linkages between the phases of research and storytelling, and the technique has been applied to data journalism [6].

### Scrollytelling

Creating a logical series of related data-driven visualizations, or visual pieces, required to present a message engagingly and successfully is known as _narrative visualization_ [18]. On the web, scrollytelling is a visual storytelling approach for narrative visualization. Scrollytelling, also known as explorable explanations, dynamically updates the contents of a website when web page viewers scroll the page, providing readers with a natural and rich experience. In short, the term 'scrollytelling' was coined to characterize long-form internet storytelling that incorporates audio, video, and animation [24; 14]. Scrollytelling comes in a variety of forms, depending on the effect that scrolling a web page has, as follows: _Scroll As Steps_, _Continuous Scrolling_, _Scroll As A Trigger_, and _Mixed Scrollytelling_ [1]. Readers do not have to guess what to tap, click, or swipe to engage with the tale. A user's scroll position can also trigger multimedia events like video playback, animation, and transitions, creating a dynamic interplay of text, visuals, and music. In addition, the study in [21] shows that scrollytelling and video lead to much better recall than audio and, to a lesser extent, text media, which makes them suitable for non-technical users.
Currently, little research has been conducted on scrollytelling to demonstrate how a machine learning model works [24]; examples may be found in Distill1, a collection of work on describing machine learning in web scrollytelling.

### Mediums for Storytelling

Apart from scrollytelling, there are other mediums for narrative visualization. In Table 1, we compare different mediums, including online articles, videos, data-GIFs, data comics, and scrollytelling. The comparison is based on the supported content types, such as text, image, and animation, and the available features, such as play/pause, skipping, scrolling, and interaction. Articles are published in newspapers, magazines, and, most notably, online. Authors can convey many messages through text and illustrations, whereas online articles offer more interaction options than paper-based ones. Videos and video clips are used broadly to refer to any video program uploaded to a website or other medium. Content creators can tell stories with full animation and graphics. Viewers can skip, rewind, forward, and pause the video but cannot interact with it. Data-GIFs are a new type of data-driven storytelling that employs simple visual messages embedded in 15-second animations. Creators can include as many graphics and animations as they want, but they have less narrative time. The interaction options with GIFs are limited as they can only be played or paused. Data comics are a novel approach to data-driven storytelling that employs sequential art inspired by the visual language of comics. The reader will feel more relaxed, similar to reading on-demand long-form online articles, but with emphasis on images rather than text. Nonetheless, readers are unable to interact with the comics. Compared to these mediums, scrollytelling supports all content types and has the most features. In addition, it also gives users the flexibility to control the pace and provides a frictionless way to digest content. Thus, we select scrollytelling as the medium for our visual storytelling.

Table 1: Comparison of ability and characteristics between different types of mediums

| Mediums        | Text | Image | Animation | Play/Pause | Skipping | Scrolling | Interaction |
|----------------|------|-------|-----------|------------|----------|-----------|-------------|
| Articles       | ✓    | ✓     | ✓         | *          | *        | ✓         | *           |
| Video          | ✓    | ✓     | ✓         | ✓          | ✓        | -         | -           |
| Data-GIFs      | ✓    | -     | ✓         | ✓          | -        | -         | -           |
| Data Comics    | ✓    | ✓     | ✓         | *          | *        | ✓         | *           |
| Scrollytelling | ✓    | ✓     | ✓         | *          | ✓        | ✓         | ✓           |

An asterisk indicates partial presence of the feature.

## 3 Siamese Neural Network

This section explains the SNN used to showcase the proposed scrollytelling visualization design. First, the dataset used for demonstration is described, followed by the key concept behind SNN, namely distance metric learning (DML), and the important components of SNN.

Figure 1: The embedding model (A) describes how image batches are fed into the feature-matching model and plots images as vectors in the embedding space. The solid arrow's path (from A-B-C-D) depicts the core path of the model training step to get the trained model (B). Then inferencing finds the nearest neighbor of a test image among the trained images, which are bubbles inside the circle (C), and presents the inferencing results from the previous bubbles as bars of similarity distance with the test image (D). The dashed arrow's path provides related concepts that the users have to know regarding calculating the Euclidean distance to detect similarity (E) and the triplet loss function used during training (F).

### Cat Breeds Dataset

To make this work user-friendly, the dataset of cat breeds2 was chosen (Figure 2), since lovely cat pictures have been scientifically demonstrated to increase caring and concentration [17]. The dataset is used to represent the interaction components via the proposed scrollytelling visualization of the internal model training process, so that non-technical consumers can grasp the concept of a deep learning classifier for product design matching. We adopted the cat breeds dataset in our scrollytelling visualization design as a neutral dataset that can be used anywhere and for any purpose in the presentation, instead of directly using real products from the real business. Moreover, the cat dataset is related to the system's name, inspired by a Google image search for 'Siamese,' which returned the 'Siamese cat', known in Thai as 'Wichienmaat'; we picked this up as a homophone name for the system, 'VISHIEN-MAAT,' where 'VIS' stands for visualization.

### Distance Metric Learning

DML aims to learn a transformation that converts images into a representation space where distance corresponds with a notion of similarity [19]. Example applications of metric learning include zero-shot learning [16; 5], visualization of high-dimensional data [15], dimensionality reduction [11], and face recognition and clustering [22]. There are several interactive online articles explaining how DML works and how it can be applied, like 'How to Use t-SNE Effectively' [29], which allows users to experiment with the parameters of the t-SNE algorithm, and 'Visualizing MNIST'3, which visualizes different types of embeddings produced by different algorithms.

Footnote 3: [https://colah.github.io/posts/2014-10-Visualizing-MNIST/](https://colah.github.io/posts/2014-10-Visualizing-MNIST/)

A technique for DML is the SNN [2], which is a type of neural network that consists of multiple instances of the same model with the same architecture and weights. The embedding models for the query and reference images are learned simultaneously using the same parameters. The vector representations of the two images are then compared to determine how similar they are. For both feature extraction and metric learning, SNN is a very powerful architecture [9]. An example of previous work in visual similarity matching using SNN and convolutional neural networks (CNN) is the method in [27] to learn dyadic item co-occurrences. Another example is the method called 'MILDNet' [28], an SNN that uses a single CNN with skip connections as the subnetwork to encode both query and reference images as vectors, which, in turn, are used in the calculation of similarity distance.
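To make the shared-weights idea concrete, consider the following minimal sketch. It is our own illustration rather than the authors' implementation; the encoder architecture, image size, and embedding dimension are placeholder assumptions. A single encoder instance is applied to both the query and the reference image, so the two embedding vectors live in the same space and their distance is meaningful:

```python
import tensorflow as tf

# One encoder instance; reusing it on both inputs is what makes the network "Siamese".
encoder = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(16),  # 16-dimensional embedding vector (assumed size)
])

query = tf.random.uniform((1, 64, 64, 3))      # stand-in for a query cat image
reference = tf.random.uniform((1, 64, 64, 3))  # stand-in for a reference image

# Both images pass through the *same* weights, so their distances are comparable.
d = tf.norm(encoder(query) - encoder(reference), axis=1)
print("similarity distance:", float(d))
```

Because the weights are shared, training only ever updates one set of parameters, which is the property the SNN Concept story slice tries to convey visually.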
### Embedding Model

Pixel images are no more than raster images, containing no direct information about the shape or structure of the objects in the image; the embedding model instead stores images as embeddings. In this low-dimensional space, crucial features for similarity matching are recorded. The embedding model is commonly computed using a deep convolutional neural network (CNN or ConvNet), which turns input images into embedding vectors. The model's back end is a standard multilayer perceptron layer to interpret the CNN features. A convolutional layer, a pooling layer, and a fully connected layer are the three layers that make up a CNN. A conservative CNN configuration uses filters and a kernel with an activation function, followed by a pooling layer, which reduces the convolutional layer's output. The CNN output is then flattened into one long vector to represent the 'features' extracted by the CNN, which we call the embedding vector. To see how these vectors are grouped together, we use t-SNE to show the vector representations from embedding models. The semantic comparison can only be made where the distance between vectors represents the similarity between objects.

Figure 2: The cat breeds dataset visualized using t-SNE.
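As a rough sketch of this inspection step (our own illustration; the embeddings, their dimension, and the number of breeds are placeholders, not the paper's data), embedding vectors can be projected to 2D with t-SNE and scattered by breed label:

```python
import numpy as np
from sklearn.manifold import TSNE
import matplotlib.pyplot as plt

# Placeholder embeddings: 200 images, 16-dimensional vectors from the encoder.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(200, 16))
labels = rng.integers(0, 5, size=200)  # e.g., five cat breeds

# Project to 2D; perplexity is a tunable neighborhood-size parameter of t-SNE.
points = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(embeddings)

plt.scatter(points[:, 0], points[:, 1], c=labels, cmap="tab10", s=10)
plt.title("t-SNE of embedding vectors")
plt.show()
```

A well-trained embedding model would show images of the same breed clustering together in such a plot, which is exactly what Figure 2 conveys to the user.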
### Architecture

Figure 3: The architectures of the SNN model with two different loss functions: contrastive loss function (top) and triplet loss function (bottom).

Visual analytics systems need to handle information components, choose the content to be displayed, and assist in creating stories [23; 8]. Story synthesis is used to help handle information components by converting raw analysis findings into story slices that are ordered based on the critical linkages between them. It creates a story that successfully conveys the operation of the machine learning model to recipients by utilizing appropriately formatted story slices to aid in story creation [26; 4]. We define the system functions required to explain the AI concept and to generate story slices, ranging from showing visual displays and handling the collected objects to supporting the story synthesis activities.

### Applying the Story Synthesis Framework

The generic conceptual framework defines story synthesis as the process of generating story content and structure. This framework can be used in designing visual analytics systems that provide support for story synthesis, as detailed below:

#### 4.2.1 Define the types and structures of story slices

The story creator envisions the facts or patterns that can be identified and used as story slices based on the data and the analytic aims. Machine learning studies are often divided into sub-topics according to each model: input, feature extraction, classification, and output. Those sub-topics can be described based on the purpose of the specific use. For the SNN, we define the types and structures of story slices into six topics that we will use for different designs. The six story slices start with an overview showing the concept of SNN and the related concepts, _e.g._, the embedding model and Euclidean distance, followed by the stages of the model, _e.g._, the loss function and model training, and the inferencing of the model results. The aspects that the user needs to understand are as follows:

1. **SNN Concept** slice explains the SNN, its origins, and its nomenclature. Users should grasp the model concept created from the twin convolutional neural network (CNN).
2. **Embedding Model** slice explains how to convert an image to a vector using an embedding model with a CNN backbone. Users should comprehend the concept of using a CNN to turn a batch of images into embedding vectors placed in space.
3. **Euclidean Distance** slice describes the calculation used to determine similarity. Users should grasp the relationship between transformed vectors put in space, which can be utilized in the Euclidean distance equation. In subsequent phases, the distance results can be used to determine the loss function.
4. **Loss Function** slice demonstrates how the model's performance changes when different functions are used to calculate it. Users should comprehend how the distance changes concurrently with the movement of the embedding vectors.
5. **Training** slice depicts the training process's metamorphosis.
Users need to watch the automated animation of the training process and understand what is going on in the background when the model is being trained in the development environment.

6. **Inferencing** slice represents the process of using model inference. Users should understand how the model's output can be used. One method is to find the similarity of a new image to the images in the trained embedding space.

The last five slices will be treated as interactive components, except the first one, which is a static image.

#### 4.2.2 Design a representation for a story slice

After all six story slices have been drawn, the next step is to analyze what kind of presentation would be appropriate for the narrative of each story slice. The story creator chooses a suitable data structure, such as a graph or a vector, to represent extracted narrative slices internally in the system based on the story slice structure. This depiction may or may not be included in the final story; as a result, the story creator may decide to replace it with something else that is more appealing to the recipient during the storytelling stage. A previous work [14] outlines rules in three areas (fact types, visual graphs, and annotations) that can build a set of visualization sequences as scrollytelling stories. The image and image description will be used to convey the story, and their interactions will be used to further clarify the story slices of the SNN concept (_e.g._, Embedding Model, Euclidean Distance, Loss Function, Training, Inferencing). Scatter-like bubbles were chosen to represent the input image data processed by the SNN. An interactive bubble chart was used to depict the model's story slices because the image data will be translated into space by placing scatter-like bubbles containing image data. The bubbles' distribution is dispersed throughout the rectangular space. The bubbles' color encoding shows the difference and association between attributes. Our work uses a color palette of brown, gray, black, and white for color encoding, which comes from the colors of the Siamese cat.

#### 4.2.3 Define story synthesis support functions

The next step is to analyze what functions the bubbles will have. System functions are required for creating story slices or extracting story slices from visual displays, managing the retrieved objects, and supporting story synthesis activities. The SNN model concept was used to narrate the story slices. The bubbles within each story slice, also known as interactive components, have their own corresponding functions that users can interact with. For each interactive component, the corresponding function is narrated as a story; content that may be difficult for non-technical users to understand, such as a math-equation-like component, is made more explorable through self-interaction and scrollytelling.

#### 4.2.4 Design the visual analytics system

The functions developed for the bubbles are integrated into a visual analytics system. The system must be designed around the user and the techniques used to represent the support functions. This includes creating interactive and manipulable tools for the reader to find facts or patterns that may become story slices, such as clickable buttons or animated images that unfold for the reader as they scroll through the story.
The scrollytelling visualization [10] will be used to depict all of the defined and designed story slices from the story synthesis framework, allowing non-technical users, _e.g._, students who have never taken machine learning as a prerequisite course, to analyze the application of the SNN and gain knowledge from the visualization [12].

### Interaction Components

Stories are presented using six interaction components. These are the components where the user will pause during the scroll to better understand the corresponding functions narrated as a story through each component. The front end uses ReactJS in conjunction with the VEV4 platform. The backend manages data, uses Python to train and test models, and then generates JSON for the front end to render.

Footnote 4: [https://a-vishienmaat.vev.site/siamese](https://a-vishienmaat.vev.site/siamese)

#### 4.3.1 Showing encapsulation of the embedding model

One proper approach to show fast state transitions of non-complicated components during scrolling is to place the mouse cursor across a component and change its state on 'image hover'. That frees the user from thinking about what the component is meant to represent: the user can understand as soon as the mouse moves and hovers, intentionally or unintentionally. The first interaction component is Image Hover. Hover effects enable the addition of interactivity to elements on a website without slowing it down; they are elegant, do not clutter designs, and help websites run smoothly. The Image Hover (Figure 4) was used to demonstrate the second story slice, Embedding Model, which is the core for image embedding that encapsulates the CNN or ConvNet, the process for positioning an image into an embedded space.

#### 4.3.2 Describing transition state of the space embedding

To show the flow of data while changing its state through different processes, it is possible to use a comparative representation of how the previous data flowed through the process and the final result via 'image compare'. When scrolling down to this point, the user will have to pause longer because there are many parts to communicate with this component, including snippets, processes, and results. Providing users with a pre- and post-process view of the state transition of the space embedding makes the comparison more understandable. Image Compare is the second interaction component, a simple but fully customizable before/after image comparison component. The component adds a vertical or horizontal slider to two overlapping images, allowing the user to adjust the slider to examine their differences. For a basic description of batch feeding to the embedding model, i.e., the process of locating the images of each batch in the embedded space, the Image Compare (Figure 5) supports the first and second story slices.

#### 4.3.3 Variables of distance equation

A commonly used method to show data relationships in a node graph is to hover over a node and its node link, called 'line hover'; here, the nodes are represented by the bubbles, and the link between the nodes is the origin of the distance equation. When scrolling down to this component and hovering over the node link, the user stops to consider the nodes and their relationship. Hovering over a line expressing data functions similarly to hovering over an image: it indicates how the value of that line has changed.
Hovering on lines is used in this work to demonstrate how the distance of each line is calculated and to describe the distance between bubbles. A bubble is a point representing an embedded image; lines are generated by joining the bubbles into two pairs (Figure 6), supporting the third story slice. The first pair includes the anchor image's position and the image closest to it, called the positive image. The second pair consists of the anchor image and the image that differs from it, called the negative image. The similarity or difference can be calculated with the Euclidean distance equation.

#### 4.3.4 Interpreting distance computation

In addition to showing the correlation of data in a node graph by hovering over a node and a node link, each node constantly changes position. This is normal during model training, so the relationships between nodes also change. Scrolling down to this component lets the user manually simulate the change in a node's position, i.e., a bubble, via the 'draggable component'. It allows the user to freely change the position of the bubbles by dragging and dropping them to different places and to see the change in the distance of each bubble by interpreting the distance and loss computation. When dragging an object to a new position, the distance between all objects is calculated and displayed as a loss value. This allows viewers to understand that if objects of similar color are closer, the loss decreases. Users can iterate several times manually to see how drag-and-drop to new locations changes the loss.

Figure 4: Hovering over the image reveals the embedding model's internal operations.

The draggable functionality on any DOM element allows moving the draggable object by clicking on it with the mouse and dragging it anywhere within the viewport. The draggable component (Figure 7) was used to help users understand how to calculate the loss value manually by dragging the bubbles representing the locations of the embedded images, which corresponds to the Loss Function story slice. With each training cycle, the positions of the bubbles change according to these situations: the positive bubble moves closer to the anchor bubble, and the negative bubble moves away from the anchor bubble. The user can simulate this by dragging the three bubbles closer to or farther from each other. The closer the positive bubble moves to the anchor bubble, the more the loss decreases. The further the negative bubble moves away from the anchor bubble, the more the loss decreases.
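To connect the bubble distances to the loss value shown in this component, the following sketch (our own illustration; the 2D positions and the margin value are made up for demonstration) computes the triplet loss from the three bubble positions:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Loss decreases as the positive moves toward the anchor
    and the negative moves away from it (floored at zero)."""
    d_pos = np.linalg.norm(anchor - positive)  # Euclidean distance anchor-positive
    d_neg = np.linalg.norm(anchor - negative)  # Euclidean distance anchor-negative
    return max(d_pos - d_neg + margin, 0.0)

# Hypothetical 2D bubble positions, as in the draggable component.
anchor = np.array([0.0, 0.0])
positive = np.array([0.5, 0.2])
negative = np.array([1.0, 0.5])
print(triplet_loss(anchor, positive, negative))  # ~0.42 for these positions
```

Dragging the positive bubble toward the anchor shrinks `d_pos`, dragging the negative bubble away grows `d_neg`, and both actions reduce the displayed loss, exactly as the component lets users experience.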
#### 4.3.5 Showing the model training step

To show the model training changes in each epoch, a continuous motion picture of bubbles gradually grouping is shown; this section is highlighted, showing the grouping of the dispersed bubbles. However, the change in each epoch happens quickly, so to see the movement in each step, it is necessary to pause. The user can inspect each bubble by hovering the mouse over the bubble of interest. The property we can use for this need is 'animation-play-state', with which the user can pause/play the state-change animation in CSS. The animation-play-state property sets whether an animation is running or paused; resuming a paused animation picks up where it left off rather than starting from the beginning of the animation sequence. The images in the actual training model are moving images (replaced by bubbles) during the development process. Under the initial condition, similar images move closer, and different images move away, from which the loss can be calculated. The loss value is calculated for each training cycle. It is plotted as a graph of the x-axis (number of training cycles, or steps/epochs) against the y-axis (loss), where the loss is expected to decrease gradually at each step. This loss chart is used to decide when to stop training: when the loss chart decreases and then plateaus for a while, the developer stops the training. That means nothing further is being learned, and the loss is no longer decreasing significantly. The process can also be displayed with animation-play-state (Figure 8), so the user can press play/pause to view the model's training on demand and hover over each bubble to view its original image; this corresponds to the Training story slice. The bubbles are encoded with the primary color of the cats, _e.g._, black for black cats and orange for Ginger cats. Similar colors (dark brown and light brown, standing for the Siamese cat and Balinese cat, respectively) move towards each other, which means the images are similar; both breeds are similar in color but differ in hair length. Moreover, bubbles with colors different from those, such as orange and black, move away from the group of dark brown and light brown bubbles. This means they are separated as distinctly different colors from the brown group (Siamese and Balinese), for example, the Ginger cat, black cats, etc.

Figure 5: Image comparison by sliding the bar left/right reveals the images' transformation into bubbles on the embedded space.

Figure 6: The variables of the Euclidean distance equation are described by hovering the lines between the bubbles.

Figure 7: Drag and drop bubbles in various spots and set the margin to see the loss calculation.

#### 4.3.6 Inference a new embedded image

This component allows users to interact with the process of implementing the model in real-world scenarios: importing a new image, i.e., inferring where the new image is embedded in the system by compiling it with the trained model. Importing corresponds to different ways of communication, such as uploading, putting, and throwing. Directly presenting the colloquial action is better for communication, so we chose 'drag and drop', representing putting or throwing the new image into the space of the trained model. In a complex drag-and-drop interface, while keeping the components decoupled, the components change their appearance and the application state in response to drag-and-drop events. It is a perfect fit for data transfers between different application parts. In the inferencing section, which represents the state transition of an image tested against the model, the test image is thrown into the embedding model and then automatically embedded and shown in the space. Drag and drop was chosen for this throwing action because it is an easy-to-understand interaction: the user's interaction initiates the flow of inferencing (Figure 9), which is the sixth story slice. In addition, a simple interaction, clicking on a newly changed point, was applied to the test image that had just been embedded in the space. The test image is replaced with a bubble encoded with a new color distinct from the other bubbles in the space. Users can view similar members of the test bubble by clicking on the newly embedded bubble; the system displays a circle centered on the test bubble with a radius covering the nearest bubbles side by side. Moreover, the system shows the nearby images by comparing distances. Finding the closest objects can be done with an algorithm like k-nearest neighbors.
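As a sketch of this nearest-neighbor lookup (our own illustration; the embeddings here are random placeholders rather than the trained model's vectors), the inference step can be expressed with scikit-learn:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
trained_embeddings = rng.normal(size=(200, 16))  # embedded training images
test_embedding = rng.normal(size=(1, 16))        # newly embedded test image

# Fit on the trained space, then query the k closest bubbles.
nn = NearestNeighbors(n_neighbors=5, metric="euclidean").fit(trained_embeddings)
distances, indices = nn.kneighbors(test_embedding)
print("nearest images:", indices[0], "at distances:", np.round(distances[0], 2))
```

The returned indices correspond to the bubbles inside the circle around the test bubble, and the distances are what the visualization renders as bars of similarity.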
## 5 Evaluation

We begin by describing the two scenarios in which our scrollytelling visualization design has been used: for business presentations and for learning the model. We use an observational study and a user study to get a qualitative and quantitative look at the two use cases. Each study's structure covers participants, procedure, and feedback or results.

### Usage Scenarios

Two scenarios are explained: the business presentation scenario of the business marketing team in a company, and the student learning scenario of business IT students who had never learned about machine learning.

#### 5.1.1 Business presentation scenario

This scrollytelling was created as one of the SNN model's presentations as part of a project for an image-searchable e-commerce website. The SNN model groups images of building materials and upholstered furniture into clusters. The model builders' issue is that they must modify the presentation each time they meet a new audience to make it relevant. Because the audience has to remember terms for building materials and architecture as well as the complex ideas of the deep learning model, it can be hard for them to understand how the model works when it is shown with real-life product pictures. Cat images were used to explain how the model works so that domain-specific terms are easier to set aside: non-technical audiences do not need to pay attention to the domain terminology because cat images are easy to get familiar with. In addition, to make it easier for an audience to digest, the complex content of the deep learning model is broken down into parts so the narrator can describe the model in sections by arranging story slices according to the model's training process. Some things, like math equations, may be hard to understand while listening to the story, but they can be made clear by playing with the bubbles that illustrate the equations. The audience can play through the scrollytelling website at their own pace, or it can be used in the presentation at the presenter's pace, making our scrollytelling visualization design suitable for explaining deep learning concepts to the audience. The results for the three types of users are shown in the observational study in Section 5.2.

#### 5.1.2 Student learning scenario

Knowledge about the algorithms of various deep learning models is necessary for management information systems (MIS) nowadays. People working in the MIS field need to understand various business requirements from technical and business stakeholders, since MIS people sit in the middle between the very technical developers and the non-technical business users. Nevertheless, teaching deep learning concepts to business IT students who study MIS can be tricky, since they need to understand the complexity of the concepts from both points of view, technical and business. This scrollytelling work can be an effective teaching tool for students, who can learn on demand by scrolling and reviewing the interactions in each story slice representing each model process, as an alternative to online articles. The results for this user group are shown in the user study in Section 5.3.
### Observational Study

We conducted an observational study to investigate how VISHIEN-MAAT's target users (_i.e._, the business marketing and related teams in a machine learning (ML) project) would use this scrollytelling to learn about the SNN, and to gather their feedback.

#### 5.2.1 Participants

To evaluate the scrollytelling visualization that illustrates the AI concept in the ML project related to the SNN, we asked the presenters from the business marketing team of a real business about their experiences with the scrollytelling that explains the SNN model. They have been working on parts of this ML project for about three years. The opinions and impressions of the customers' audience were also gathered during the meeting.

#### 5.2.2 Procedure

Because collecting quantitative evaluation data is difficult due to business constraints and workplace culture, we summarize user feedback through observation and semi-structured and informal interviews of presenters and audiences, such as focus groups. We used the scrollytelling as part of the product presentation during the demo. The proposed scrollytelling explains the backstory of this existing product, which uses the SNN as the system's core.

#### 5.2.3 Feedback

The proposed scrollytelling visualization was presented to the users, and here is their feedback. Overall, the best aspect of this project is that it lets people use interactive media to learn about hard ideas at their own pace. The audiences mentioned that this scrollytelling medium combines the merits of different mediums, _i.e._, articles and videos: articles allow users to read at their own pace, while videos break down complex ideas into more appealing visuals. Controlling the pace and narrative style of scrollytelling is critical for enabling media consumers to learn on demand. In other words, if an issue cannot be grasped while watching, the page can be scrolled up and down, giving the user control over the narrative flow. Moreover, both the audiences and the business team as presenters mentioned that the interactions added in each part along the way as the users scroll down allow users to explore the effects of these concepts in different scenarios. This is useful for reinforcing knowledge of some complex machine learning concepts. With these advantages, the scrollytelling visualization was used in business conference presentations to provide a quick summary of how the specific ML model works without wasting time explaining it to the audiences. Also, when giving a short presentation, scrollytelling makes it easier for presenters to give their talks by letting them skim through the visualization. Moreover, during the Q&A session, the presenter can use the interactions to explain in detail specific concepts that were asked about, and the audience can replay them themselves simultaneously.

### User Study

We conducted a statistical user study to determine how VISHIEN-MAAT's target users (_i.e._, business IT students) would use scrollytelling as a tutorial medium to learn about the SNN in comparison to other mediums.

#### 5.3.1 Participants

This user study assesses the role of visualization in knowledge acquisition for non-technical users. Participants include 50 business IT students in the academic year 2021/2022 at Thammasat University in Thailand. They are enrolled in a class about business intelligence and tools for data analytics, which also covers data science and machine learning.
The students are expected to be able to communicate about AI/machine learning to business stakeholders and non-technical users. However, unlike engineering students, business IT students do not possess in-depth knowledge of AI/machine learning. They were divided into two groups for testing with two different mediums (scrollytelling and online articles).

#### 5.3.2 Procedure

This study investigates whether our scrollytelling visualization can help students learn the concept and application of the SNN better than traditional materials. A pre-test and a post-test were created to evaluate participants' understanding after learning the topics using two different mediums: scrollytelling and online articles. We selected online articles about image similarity estimation using a Siamese network with a contrastive loss5 and with a triplet loss6 from Keras, a well-known open-source software library that provides a Python interface for artificial neural networks; the topics are related to our work and are used in class nowadays.

Footnote 5: [https://keras.io/examples/vision/siamese_contrastive/](https://keras.io/examples/vision/siamese_contrastive/)

Footnote 6: [https://keras.io/examples/vision/siamese_network/](https://keras.io/examples/vision/siamese_network/)

The online articles were chosen for comparison because they are both common choices for extra reading and have a scrolling feature like scrollytelling. With this feature, the user can control how fast the story moves by pausing or scrolling at any point while playing. This allows users to study the content of interest according to their needs, which differs from other comparable genres, _e.g._, data comics and data-GIFs.

Figure 8: The user may observe a training model that compares bubble movements to loss graphs by clicking the play/pause buttons.

Figure 9: A new test image is fed into the embedding model, which forms a bubble in a trained embedded space to find nearby images within the new test image's radius.

Seven multiple-choice questions are on the pre-test, and the same seven questions are on the post-test. The test topics include the SNN model concept, the embedding model, the loss function, the Euclidean distance, model training, and inferencing. Each question has four choices with one correct expected answer worth one point when answered correctly. The test questions and their correct answers (in italics) are listed in Appendix A, and they are given to participants in this order, with no shuffling. The expected outcomes of the seven questions, which aim to reflect the user's understanding of the Siamese Neural Network, can be described as follows:

1. The first question represents the conceptual architecture of the model: 'Siamese' here refers to Siamese twins, describing the model's origin; it is sometimes called a twin neural network that uses the same weights while working in tandem on two different input vectors.
2. The second question checks that the user understands that the loss function used in the Siamese Neural Network depends on the architecture of the model, i.e., twin or triplet.
3. The third question requires the user to know the CNN, which is the core of each subnetwork in the twin and triplet architectures and performs the embedding of images into the space.
4. The fourth question checks whether the user knows the name of the input image in the triplet architecture that anchors the expected location of the embedded object in the space.
5. The fifth question builds on the previous one: the user should observe the movement of each triplet image as it moves closer or farther away.
6. The sixth question also builds on the previous one: the user should explain the movement's meaning, i.e., that similar images should move closer to each other than different images.
7. The last question asks the user to name a business application that the trained model can solve, for example, using the trained model to find an image similar to another by comparing the closeness of all images.

A quasi-experiment was applied, and we define the process of the user study in four steps [20], as follows:

1. All participants were asked to complete the pre-test.
2. The participants were divided into two groups. They were asked to learn about the specific AI model, but with different mediums. The first group used the VISHIEN-MAAT Siamese Neural Network scrollytelling, while the second group studied the two online articles from Keras (used as basic course material for the students) about the SNN model with contrastive loss and triplet loss. The students needed to take time in class to read the articles because they were required to read the two articles one by one, with no skipping.
3. All participants were asked to complete the post-test.
4. To compare the results of the two groups, a quantitative analysis was performed with an independent-samples t-test by condition: \(\alpha=0.05\), 2-tailed.

The same test instrument was used for the pre-test and post-test, with identical question characteristics on each test related to the Siamese Neural Network. The pre-test was given before the experimental group was exposed to the scrollytelling and the control group to the online articles. After the treatment was applied to the experimental and control groups, the post-test was administered; the raw results are in Appendix B. The next step was to compare the pre-test results to those of the post-test for both groups.

#### 5.3.3 Result

In this study, the data was analyzed using Levene's test to determine the equality of variances. Equal variances are assumed if the significance of Levene's F-test is greater than 0.05. As the inferential statistical test, the independent-samples t-test with a significance level of 0.05 was used: the null hypothesis is rejected if the significance is below 0.05, and we fail to reject the null hypothesis otherwise.

The comparison of the pre-test results for the seven questions between the scrollytelling and online-article mediums is displayed in Table 2. The pre-test mean scores for scrollytelling and online articles are 2.92 and 3.04, respectively, while the post-test comparison between the two mediums is shown in Table 3. Table 3 displays the test of equality of variances for the post-test data, with an F-test value of 0.09 and a significance of 0.771, greater than 0.05 (Sig. > 0.05); thus, equal variances are assumed between the scrollytelling and online-article mediums.

The t-test of the post-test data (first line, equal variances assumed, df = 48.00) for the scrollytelling and online-article mediums gives a 2-tailed significance of 0.000, smaller than 0.05 (Sig. < 0.05). In addition, the calculated t-value of 4.44 for the post-test comparison exceeds the critical t-value. These t-test results indicate that the null hypothesis is rejected, showing a significant difference between the post-tests of the experimental and control groups: the mean for scrollytelling (post-test mean 5.85) is higher than the mean for the online articles (post-test mean 3.96).

Table 2: Summary table for comparing tests from different mediums

| Mediums         | N  | Pre-test mean | Post-test mean | Std. Deviation | S.E. Mean |
|-----------------|----|---------------|----------------|----------------|-----------|
| Scrollytelling  | 26 | 2.92          | 5.85           | 1.54           | 0.30      |
| Online articles | 24 | 3.04          | 3.96           | 1.46           | 0.30      |
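The statistical procedure above can be reproduced with a few lines of SciPy; the sketch below uses made-up score vectors as stand-ins, since the published raw scores are in Appendix B:

```python
import numpy as np
from scipy import stats

# Placeholder post-test scores (0-7) for the two groups; real data is in Appendix B.
scrollytelling = np.array([6, 5, 7, 6, 5, 6, 7, 5, 6, 6, 5, 7, 6,
                           5, 6, 6, 7, 5, 6, 6, 5, 6, 7, 5, 6, 6])  # N = 26
online_articles = np.array([4, 3, 5, 4, 3, 4, 5, 3, 4, 4, 3, 5,
                            4, 3, 4, 4, 5, 3, 4, 4, 3, 4, 5, 3])    # N = 24

# Levene's test: a large p-value supports the equal-variances assumption.
print(stats.levene(scrollytelling, online_articles))

# Independent-samples t-test with equal variances assumed, 2-tailed.
print(stats.ttest_ind(scrollytelling, online_articles, equal_var=True))
```

With the study's actual scores, this pipeline would yield the reported values (t = 4.44, p < 0.05 at df = 48).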
## 6 Discussion

Based on the visual similarity matching problem context in a real business, we designed the scrollytelling visualization of the SNN model for business presentations. This goal was expanded to include a learning test with business students to compare against another medium. In the business presentation scenario, the audience and presenters used the scrolling features to control the story's pace and the interactive parts to freely explore the steps of the model during the sales meeting. The user study results show that the scrollytelling visualization design supports the learning of SNN model concepts better than online articles. The students may understand the ML concept, especially the SNN, more efficiently with the scrollytelling visualization design because they have complete control of the pace. They can go over complex or new ideas again if needed by scrolling up or down to pause, play, rewind, or even fast-forward the material on demand. Moreover, users can also participate in the learning via interactive components, where they can explore what happens in the different steps of the SNN.

We discuss how scrollytelling can help close gaps in online articles, video tutorials, data comics, and data-GIFs. Online articles can be unappealing, causing the user to lose focus on the content; our scrollytelling makes use of interaction and animation to keep users' attention. The disadvantage of video tutorials is that users cannot control the narrative's pace, whereas with scrollytelling, users can scroll up and down to meet their needs. Because no interactions are available, data comics have the same limitations as online articles; their main advantage over online articles is that they appeal to the reader, while scrollytelling additionally uses interaction and animation to catch users' attention. Data-GIFs can animate the content, which in some ways makes it easier to understand; however, there are time constraints on how long data-GIFs can run, while the length of a scrollytelling is open to interpretation as long as the content creators determine what is appropriate. The complexity of the deep learning concept that needs to be communicated is the main limitation of the scrollytelling design: when the stories are complicated, the scrollytelling becomes longer. Rich interactive experiences can be overwhelming, so the length of the stories should be limited to keep the visual storytelling effective.
## 7 Conclusion and Future Work

This study presents a novel visualization design to help comprehend the Siamese Neural Network (SNN) model, which is easier for non-technical users to grasp than traditional mediums. The interactive scrollytelling visualization demonstrates that the proposed approach effectively explains the similarity matching model used in the industry and communicates the model's mechanism for business and education purposes. With interactive scrollytelling, users can look into specific points at their own pace, and the interactive parts break down difficult information into pieces that are easy to understand. Both quantitative and qualitative evaluations suggest that our method helps non-technical end-users, _i.e._, the business marketing team and the business IT students, quickly understand the deep learning model. The design for explaining the SNN model allows for advanced techniques such as 3D interaction components, which is a good opportunity for future work, in which we can apply this scrollytelling design and add richer details and richer interactions to each story slice.

## Acknowledgement

This paper is partially supported by the National Natural Science Foundation of China (No. 62132017). The first author wishes to thank Mr. Wissarut Pimanmassuriya, Ms. Panita Rerkpitivit, Ms. Chalida Liamwiset and Mr. Kittikun Kamrai for their valuable technical and data support on this project.
2310.17404
Invariance Measures for Neural Networks
Invariances in neural networks are useful and necessary for many tasks. However, the representation of the invariance of most neural network models has not been characterized. We propose measures to quantify the invariance of neural networks in terms of their internal representation. The measures are efficient and interpretable, and can be applied to any neural network model. They are also more sensitive to invariance than previously defined measures. We validate the measures and their properties in the domain of affine transformations and the CIFAR10 and MNIST datasets, including their stability and interpretability. Using the measures, we perform a first analysis of CNN models and show that their internal invariance is remarkably stable to random weight initializations, but not to changes in dataset or transformation. We believe the measures will enable new avenues of research in invariance representation.
Facundo Manuel Quiroga, Jordina Torrents-Barrena, Laura Cristina Lanzarini, Domenec Puig-Valls
2023-10-26T13:59:39Z
http://arxiv.org/abs/2310.17404v1
# Invariance Measures for Neural Networks

###### Abstract

Invariances in neural networks are useful and necessary for many tasks. However, the representation of the invariance of most neural network models has not been characterized. We propose measures to quantify the invariance of neural networks in terms of their internal representation. The measures are efficient and interpretable, and can be applied to any neural network model. They are also more sensitive to invariance than previously defined measures. We validate the measures and their properties in the domain of affine transformations and the CIFAR10 and MNIST datasets, including their stability and interpretability. Using the measures, we perform a first analysis of CNN models and show that their internal invariance is remarkably stable to random weight initializations, but not to changes in dataset or transformation. We believe the measures will enable new avenues of research in invariance representation.

keywords: Invariance, Neural Networks, Transformations, Convolutional Neural Networks, Measures

+ Footnote †: journal: Applied Soft Computing

## 1 Introduction

Neural networks (NNs) are currently the state of the art for many problems in machine learning. In particular, convolutional neural networks (CNNs) achieve very good results in many computer vision applications [1]. However, NNs can have difficulties learning good representations when inputs are transformed. For example, the classification of texture or star images generally requires invariance to rotation, scale and/or translation transformations [2; 3], and face or body estimation models require pose-invariance [4; 5; 6]. The properties of invariance and equivariance describe how a model reacts to transformations of its inputs. Understanding the invariance and equivariance of a network [7; 8; 9; 10] or any system [11] can help to improve their performance and robustness. There are various ways to achieve invariance or equivariance in a model. However, how these properties are encoded internally in a trained NN is generally unknown. In this work, we present simple, efficient and interpretable measures of invariance for NNs' _internal representations_. These measures allow understanding the distribution of invariance of NNs' activations, and their structure after a suitable analysis. The following subsections present and expand common definitions of invariance and equivariance and summarize previous approaches for measuring invariance and equivariance.

### Invariance, Same-Equivariance and Equivariance Properties

To deal gracefully with transformations, such as rotations of an image, requires the properties of _invariance_ or _equivariance_ in a network, with respect to a properly defined set of transformations \(\mathrm{T}=[\mathrm{t}_{1},\mathrm{t}_{2},\ldots,\mathrm{t}_{m}]\)1. A network \(\mathrm{f}\) is invariant to a single transformation \(\mathrm{t}\) if transforming the input \(\mathrm{x}\) with \(\mathrm{t}\) does not change the network output. Formally, \(\forall\mathrm{x},\ \mathrm{f}(\mathrm{t}(\mathrm{x}))=\mathrm{f}(\mathrm{x})\). The notion of invariance can be generalized such that if \(\mathrm{T}=[\mathrm{t}_{1},\ldots,\mathrm{t}_{m}]\) is a set of m transformations, then \(\mathrm{f}\) is invariant to T whenever \(\forall\mathrm{x},\ \mathrm{f}(\mathrm{t}_{1}(\mathrm{x}))=\mathrm{f}(\mathrm{t}_{2}(\mathrm{x}))=\cdots=\mathrm{f}(\mathrm{t}_{m}(\mathrm{x}))\). As shown in figure 1 (a), an invariant function produces the same output for all transformations T of its inputs.
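As an illustrative check of this definition (our own sketch; the function f and the transformation set are toy stand-ins, not the paper's models), invariance to 90-degree rotations can be tested numerically:

```python
import numpy as np

def f(x):
    # A toy rotation-invariant "network": the global mean of the image.
    return x.mean()

x = np.random.rand(32, 32)
T = [lambda x, k=k: np.rot90(x, k) for k in range(4)]  # rotations by 0, 90, 180, 270 degrees

# f is invariant to T on this input if all outputs coincide.
outputs = [f(t(x)) for t in T]
print(np.allclose(outputs, outputs[0]))  # True: the mean ignores rotations
```

A less trivial f, such as a trained CNN, would generally produce differing outputs here, and quantifying that difference is precisely what the measures proposed below do.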
_Same-equivariance_ is a property related to invariance. A function is same-equivariant if the _same_ transformation \(\mathrm{t}\) can be applied either to the input or the output of the function (figure 1 (b)). Therefore, \(\mathrm{f}\) is same-equivariant to \(\mathrm{t}\) if \(\mathrm{f}(\mathrm{t}(\mathrm{x}))=\mathrm{t}(\mathrm{f}(\mathrm{x}))\ \forall\mathrm{x}\) [2]. The generalization of both notions is _equivariance_. A function \(\mathrm{f}\) is equivariant to \(\mathrm{t}\) whenever \(\mathrm{f}\)'s output changes _predictably_ if \(\mathrm{x}\) is transformed by \(\mathrm{t}\). Formally, it is equivariant if there exists a corresponding function \(\mathrm{t}'\) such that \(\forall\mathrm{x}\), we have \(\mathrm{f}(\mathrm{t}(\mathrm{x}))=\mathrm{t}'(\mathrm{f}(\mathrm{x}))\) [2]; the structure of \(\mathrm{t}'\) gives rise to the predictability. Note that in this definition, \(\mathrm{t}'\) acts on the activation \(\mathrm{f}(\mathrm{x})\), while \(\mathrm{t}\) acts on the input \(\mathrm{x}\). Therefore the function \(\mathrm{t}'\) is not restricted to act in any way similar to \(\mathrm{t}\), as is the case in same-equivariance. As an example, figure 1 (c) shows how rotations in the input can correspond to translations in the output. The notions of equivariance and same-equivariance can be generalized to a set of transformations as in the invariance case.

Figure 1: Illustration of invariance, same-equivariance and equivariance of f(x) to the set of rotation transformations \(\mathrm{T}=[\mathrm{t}_{1},\mathrm{t}_{2},\mathrm{t}_{3},\mathrm{t}_{4}]\) corresponding to rotations of [0°, 90°, 180°, 270°], respectively, and image inputs. In (a), the feature map calculated by f is the same regardless of the transformation of the input, and therefore f is invariant to T. In (b), the feature map is rotated in the same fashion as the input image, and f is same-equivariant to T. In (c), rotations of the input correspond to translations of the feature map, where the magnitude of the translation is a function of the magnitude of the rotation angle, so that f is equivariant to T, but the corresponding set T' ≠ T since it consists of translations.

### Invariance Measures

Given a network and a set of transformations, invariance (and other related properties) can be measured in different ways. To determine if f is equivariant to a transformation t that operates on inputs x, we need to find a corresponding transformation t' that operates on the output space of f(x) [8]. A sufficient condition for the existence of t' is that f is invertible. Since CNNs are approximately invertible [12; 8], the approximate existence of t' is very likely. However, determining t' can be very difficult; known approaches require assuming its functional form and estimating its parameters [8]. Therefore, empirically analyzing the equivariance of a CNN can be difficult [8]. Measuring invariance does not require an estimation of t'. Therefore, invariance can be computationally and conceptually easier to measure. Since invariance is a special case of equivariance, where t' is the identity transformation, we can exploit this special structure to measure invariance in simple and efficient ways.

Most previous work has focused on empirically measuring the invariance of the _final output_ of the network. The simplest invariance measures quantify the invariance of the final output of the network (i.e., the softmax layer for classification models) by measuring its changes with respect to a set of transformations of the input. Figure 2 (a) illustrates this pattern of analysis, where only the first and last layers of the network are taken into account. Alternatively, we can consider a single node or layer, and measure its invariance with respect to its particular input and output, without taking into account the interactions with the rest of the network (figure 2 (b)). However, in general, these cases are simpler and can be handled analytically [13]. Given that most networks can be represented by an acyclic graph, we can generalize these notions by, for example, considering all intermediate values computed by the network as possible inputs or outputs. For simplicity, in the following, we will call these values _activations_ of the network. For example, we can consider all activations as outputs and measure their invariance with respect to the initial input to the network (figure 2 (c)). Alternatively, via a topological sort of the network graph, we can remove the first \(\mathrm{k}-1\) nodes or layers from consideration, and calculate the invariance of the output of the network with respect to transformations of the input to node or layer \(\mathrm{k}\) (figure 2 (d)). In summary, we can arbitrarily define a set of inputs to transform and a set of outputs on which to evaluate invariance or another such property. In this work, we focus on case (c), transforming only the input to the whole network and analyzing the impact of the transformation on all activations.
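As a rough sketch of this case-(c) setup (our own illustration; the toy layers, and the use of the variance across transformed copies as a summary, are placeholder assumptions rather than the paper's final definition), one can collect each activation over transformed versions of an input and summarize its spread:

```python
import numpy as np

def layer1(x):
    return np.tanh(x.reshape(-1)[:50])                 # toy first-layer activations

def layer2(a):
    return np.maximum(a @ np.ones((50, 10)) / 50, 0)   # toy second-layer activations

x = np.random.rand(32, 32)
transformed = [np.rot90(x, k) for k in range(4)]       # T = rotations of the input

for name, net in [("layer1", lambda x: layer1(x)),
                  ("layer2", lambda x: layer2(layer1(x)))]:
    acts = np.stack([net(t) for t in transformed])     # activations for each t in T
    spread = acts.var(axis=0).mean()                   # low spread ~ high invariance
    print(name, spread)
```

Because the input to the whole network is the only thing transformed, the same pass yields an invariance estimate for every activation at once, which is what makes this scheme efficient.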
However, in general, these cases are simpler and can be handled analytically [13]. Given that most networks can be represented by an acyclic graph, we can generalize these notions by, for example, considering all intermediate values computed by the network as possible inputs or outputs. For simplicity, in the following, we will call these values _activations_ of the network. For example, we can consider all activations as outputs and measure their invariance with respect to the initial input to the network (figure 2 (c)). Alternatively, via a topological sort of the network graph, we can remove the first \(\mathrm{k}-1\) nodes or layers from consideration, and calculate the invariance of the output of the network with respect to transformations of the input to node or layer \(\mathrm{k}\) (figure 2 (d)). In summary, we can arbitrarily define a set of inputs to transform and a set of outputs on which to evaluate invariance or another such property. In this work, we focus on case (c), transforming only the input to the whole network and analyzing the impact of the transformation on all activations.

Figure 2: Different approaches for calculating invariance in a NN with activations \(\mathrm{a}_{1},\mathrm{a}_{2},...\mathrm{a}_{6}\). Green nodes indicate transformed inputs, and red nodes indicate where the invariance is calculated. In (a), the invariance of the final output \(\mathrm{y}=\mathrm{a}_{6}(\mathrm{x})\) of the network is measured with respect to transformations of the initial input \(\mathrm{x}\). In (b), the invariance of a single node or layer \(\mathrm{a}_{3}\) is calculated with respect to its direct inputs \(\mathrm{a}_{1}\) and \(\mathrm{a}_{2}\). In (c), the invariances of all nodes are calculated with respect to the input \(\mathrm{x}\). Finally, in (d) the invariance of the output \(\mathrm{y}\) is calculated with respect to the output of activations \(\mathrm{a}_{4}\), \(\mathrm{a}_{5}\) and \(\mathrm{a}_{6}\).

### Contributions

In this work, we focus on measuring the invariance of the _internal representation_ of a network. As explained before, the invariance of the activations of a network can be estimated by measuring how they change with respect to transformations of its inputs. In this work, we define measures that can empirically quantify the invariance of a NN with respect to a set of transformations following this principle. Since these invariance measures can quantify the invariance of activations or intermediate layers, they allow visualizing how invariant a network is as a whole, by layers, or by individual activations. Therefore, the measures can provide insights into how invariance is encoded and represented in a NN. The measures can be applied to any NN, irrespective of its design or architecture, as well as to any set of discrete and finite transformations. To keep the presentation short, in this work we focus on evaluating and applying these measures to CNNs and affine transformations, since computer vision is an area where invariance is widely required, and rotations, scalings, and translations are arguably the most common types of transformations. In summary, our main contributions are: 1. A variance-based measure of the invariance of a NN, or indeed any model that computes an internal representation. 2. A distance-based alternative measure, which is a generalization of the variance case, as well as specializations for class-conditional invariances and convolutional layers. 3. An efficient method for computing the measures. 4.
An evaluation of these measures and their variants, in terms of their validity and stability. 5. A comparison with the only other invariance measure described in the literature [7]. This work is organized as follows. Section 2 gives a summary of the state of the art for invariant and equivariant models, as well as previous measures related to the ones we propose. Section 3 describes the proposed measures. Section 4 presents experiments that form an empirical validation of the measures on a small but representative set of models, datasets, and transformations, along with their comparison and analysis. Finally, section 5 contains the conclusions and future work.

## 2 State of the art

In this section, we review previous measures proposed to quantify invariance. Invariance (and equivariance) can be measured indirectly, by observing the variations of the accuracy of a model as a function of the transformation of its inputs. In this way, several authors have defined _accuracy-based_ measures and diagrams. On the other hand, more detailed measures can be defined if the internal representation or _activations_ of the network can be accessed; we will call these _activation-based_ measures of invariance.

### Accuracy-based measures

Various methods have been proposed to measure invariance in NNs indirectly, by measuring the changes in the accuracy of the model with respect to changes in the input [14; 15; 16; 17; 18; 19; 20; 21; 22; 23]. We refer to these as _accuracy-based_ because these works mostly measure accuracy as a function of the transformation of the input, although the same technique can be applied to other error functions such as the mean squared error. In [22; 23; 24] the authors measure the effect of using different data augmentation schemes and CNN architectures on the final accuracy of the method. To visualize the results, [23] proposed using a Translation Sensitivity Map that relates the classifier accuracy with the center position of the object in the image. In a similar fashion, [22; 23] used equivalent 1D plots to evaluate invariances to rotation and other transformations. [19; 16] focused on algorithms for determining the appropriate transformations to which the network is invariant, but also relied on accuracy as an indirect measure of invariance. Finally, invariance has also been studied from an adversarial perspective, since measuring invariance to transformations can be considered equivalent to measuring the effectiveness of an attack in which the transformations are the attacks [21; 20]. Therefore, measuring the required complexity of a transformation needed to decrease the accuracy of a classifier is equivalent to measuring the strength of an adversarial perturbation. All of these invariance quantification methods focus on final accuracy instead of understanding the internal representation of invariance in the network.

### Activation-based measures

To the best of our knowledge, there are only two previously proposed measures of the internal invariance or equivariance of a network [7; 8]. These have been used only once each [9; 25] after their publication to analyze specific properties of models. We briefly review these measures.

#### 2.2.1 Invariance Measure of Goodfellow

Goodfellow et al. [7] were the first to propose a measure of invariance over the activations of the network instead of just the outputs. We recreate the definition of their measure here since it is conceptually similar to our own.
They define their measure in terms of the _firing rate_ of activations. They assume that for each activation a there is an associated threshold U, so that if \(\mathrm{a(x)>U}\) then that unit is _firing_. Given a parametrized transformation \(\mathrm{t(x,\gamma)}\), where \(\mathrm{x}\) is the input and \(\gamma\) the parameter, they define a set of transformations of \(\mathrm{x}\) as \(\mathrm{T(x)=[t(x,\gamma)\mid\gamma\in\Gamma]}\), parametrized by a finite set of parameters \(\Gamma\). Their measure \(\mathrm{GF(a,X)}\) (equation 1) for activation a and dataset \(\mathrm{X}\) is defined as the ratio between a _local_ firing rate \(\mathrm{Local(a,X)}\) and a _global_ firing rate \(\mathrm{Global(a,X)}\) (equations 2 and 3):

\[\mathrm{GF}_{U}(a,X)=\frac{\mathrm{Local}_{U}(a,X)}{\mathrm{Global}_{U}(a,X)} \tag{1}\]
\[\mathrm{Local}_{U}(a,X)=\frac{1}{|X|}\sum_{x\in X}\frac{1}{|T(x)|}\sum_{x^{\prime}\in T(x)}f(x^{\prime},U) \tag{2}\]
\[\mathrm{Global}_{U}(a,X)=\underset{x\sim P(x)}{\mathrm{E}}[f(x,U)] \tag{3}\]

Where: * \(f(x,U)=\begin{cases}1&\text{if a(x)}>U\\ 0&\text{otherwise}\end{cases}\). * \(P(x)\) is a distribution over the samples. While the measure is defined over an arbitrary set X, in practice the expectation in equation 3 is calculated over a fixed-size dataset of n samples. The threshold U, which can be different for different activations, is selected so that \(\mathrm{Global}_{U}(a,X)=\alpha\), where \(\alpha=0.01\) in their experiments. GF(a) is the invariance score for a single activation. To evaluate the invariance of an entire network N, they define \(\mathrm{Inv_{p}(N)}\) as the mean of the top-scoring proportion p of activations in the deepest layer of N. They discard the proportion \(1-\mathrm{p}\) of the least invariant activations on the hypothesis that _different sub-populations of units may be invariant to different transformations_. While this may be true for some datasets and models [10], they offer no method to determine the value of p and no support for that hypothesis. Indeed, different values of p may be required for different transformations. We also note that this scheme is orthogonal to the definition of the measure, since it can be applied to any invariance measure. The measure has other difficulties as well for its practical application. First, the criterion for selecting the threshold poses a performance problem, since it requires calculating a percentile. The percentile cannot be computed in an online fashion unless using an approximation, and therefore requires storing in memory the n values of the activation a for the n samples and \(\mathcal{O}(\mathrm{n}\log\mathrm{n})\) operations. Furthermore, there is no justification for using \(\alpha=0.01\) to determine the value of U. Moreover, the use of a threshold and the notion of the _firing rate_ of an activation, while popular for models of biological NNs [26], has limited value for modern NN architectures. Finally, since their work was presented in 2009, the measure was used to evaluate architectures that used layer-wise unsupervised pretraining and convolutional deep belief networks, with synthetic images and a custom video dataset. Since then, to the best of our knowledge only [9] have used their method to evaluate models for invariance, in a limited fashion, on the CIFAR10 and ImageNet datasets to then propose a new activation function.

#### 2.2.2 Equivariance measure of Lenc

Lenc et al. [8] measured the _equivariance_ of the activations of a CNN given by \(y=f(x)\) with respect to a transformation \(T\). The transformation was applied to the input \(x\).
To make the search for the equivariant function tractable, they assume that the set of possible equivariances of the model f is a subset of the affine functions. That assumption allows optimizing the parameters of \(T^{\prime}\) after a network is trained via traditional gradient descent based algorithms, using the error function \(|L(T(x))-T^{\prime}(L(x))|\), where L is the function that computes a given convolutional layer with respect to the initial input to the network. A different transformation \(T^{\prime}\) was then estimated for each layer, using the total network error as a loss function. A particular distance [8] from the matrix \(A_{T^{\prime}}\) of the estimated affine map to the identity matrix was utilized as an invariance measure of the layer's representation. Although this approach measures the equivariance, it _i)_ only deals with affine transformations, which limits its applicability to convolutional layers as a spatial correspondence for the affine map is needed, _ii)_ requires an optimization process, and _iii)_ is not simple to interpret. Since then, to the best of our knowledge only [25] used Lenc's measure to evaluate models and, in turn, modified the loss function to improve the equivariance and invariance capabilities of a model. However, the authors used the technique to estimate the impact of this new loss function only for the last network layer, and not for the internal representation.

### Other related techniques

_Measurement Invariance._ Also known as _Factorial Invariance_ [27; 28], Measurement Invariance is a well-established field that seeks to provide statistical models with the property of measuring the same construct across different groups. For example, its techniques can be used to determine whether a certain measure is invariant to different race or gender groups, by analyzing the behavior of its latent variables, analytically or empirically. However, Measurement Invariance methods are focused on statistical modeling techniques such as Confirmatory Factor Analysis and cannot be applied directly to NNs [28].

_Ad-hoc invariance measures._ In many areas, models or quantities have been hypothesized to be invariant to certain variables [11]. In turn, ad-hoc techniques have been created to test those hypotheses of invariance [11].

_Invariance measure of Quiroga et al._ Quiroga et al. [29] defined a variance-based invariance measure. This work extends that formulation by refining and expanding the definition of the measure, adding an alternative distance-based measure, and performing a thorough validation of the measures. The details of the measures are described in Section 3.

_Qualitative Evaluation._ Another approach to determine the invariance or equivariance of features consists of evaluating them qualitatively. For example, visualizations have been used to understand invariance and other properties such as diversity and discrimination in CNNs [30; 31; 32; 10]. However, this approach is limited in that a useful visualization must exist for each type of feature, and its interpretation can be subjective.

_Other Transformational Measures._ While invariance and equivariance are very important properties, other authors have defined measures for other properties such as feature complexity, invertibility, selectivity, capacity, and attention [33; 34; 35].

## 3 Measures

In this section, we define the invariance measures along with a general framework for defining new transformational measures.
### General framework

Our objective is to compute a measure of the activations of a model f with respect to a set of transformations T of its input x. In this case, our measures will be of invariance, but they could also target another type of property. Given an input x, a neural network model computes many intermediate or hidden values, which we will call \(\mathrm{a}_{1}(\mathrm{x}),...,\mathrm{a}_{k}(\mathrm{x})\). In the interest of brevity, we will call such values \(\mathrm{a}_{i}(\mathrm{x})\) _activations_ of the network f. This term is not to be confused with _activation functions_ such as ReLU or TanH. An activation _can_ be the result of applying an activation function to a tensor, or simply the output of a convolutional or fully connected layer. Note that x always refers to the input of the whole network, and never to the input of an intermediate layer, as shown in Figure 3. For example, let f be a two-layered network with a convolutional layer followed by a ReLU activation function and a fully connected layer. The output of a convolutional layer contains \(\mathrm{H}\times\mathrm{W}\times\mathrm{C}\) scalar _activations_. After applying the ReLU, we obtain another set of \(\mathrm{H}\times\mathrm{W}\times\mathrm{C}\) _activations_. By applying a flatten operation followed by a fully connected layer with D neurons to the output of the convolutional layer, we have another D _activations_. Therefore, there are \(\mathrm{k}=\mathrm{H}\times\mathrm{W}\times\mathrm{C}+\mathrm{H}\times\mathrm{W}\times\mathrm{C}+\mathrm{D}\) activations in this network. In this case, we have ignored the output of the flatten operation. For appropriate cases such as this, a subset of activations can be ignored for the computation of the measure, since their output is simply a reshape of other activations. In other cases, outputs such as the activations of a convolutional layer before the ReLU is applied can also be ignored to reduce the amount of information to analyze. To measure the invariance of the model f, we measure the invariance of the individual activations \(\mathrm{a_{1}(x),...\,a_{k}(x)}\). Since the measure can be defined for an activation independently of the rest, we will focus on a single activation, which we will denote simply as a(x).

### Sample-Transformation activation matrix (ST)

In order to facilitate the definition of the measures, we define the concept of Sample-Transformation activation matrices (ST) (Figure 4), which provide the main context and notation for the transformational measures. Given an activation a, a set of n samples \(\mathrm{X}=[\,\mathrm{x_{1}}\ \dots\ \mathrm{x_{n}}\,]\) and a set of m transformations \(\mathrm{T}=[\,\mathrm{t_{1}}\ \dots\ \mathrm{t_{m}}\,]\) defined over X, we can compute the value of a for all the possible transformations of the samples. Let \(\mathrm{x_{i,j}}=\mathrm{t_{j}(x_{i})}\) be the sample obtained by applying transformation \(\mathrm{t_{j}}\in\mathrm{T}\) to input \(\mathrm{x_{i}}\in\mathrm{X}\). Given a, we define the sample-transformations activation matrix \(\mathrm{ST(a,X,T)}\) of size \(\mathrm{n\times m}\) as follows:

Figure 4: (a) Matrix of samples and their corresponding transformations given by \(\mathrm{t_{j}(x_{i})}\), for n = 5 samples and m = 4 transformations. (b) Corresponding ST matrix containing the activation values corresponding to each input for a single activation a.

Figure 3: Diagram of a network f with its activations.
Given an input x, the network computes its output y = f(x) by calculating its activations \(\mathrm{a_{1}(x),...\,a_{6}(x)}\). The final output value y is simply the value \(\mathrm{a_{6}(x)}\). Instead of considering each activation \(\mathrm{a_{i}}\) as a function of the output of other activations that feed into it, the activation is viewed as a function of the original input x.

\[\mathrm{ST(a,X,T)_{i,j}=a(x_{i,j})=a(t_{j}(x_{i}))} \tag{4}\]

For simplicity, we will refer to \(\mathrm{ST(a,X,T)}\) as \(\mathrm{ST(a)}\) or simply \(\mathrm{ST}\) whenever the context clearly determines a or \(\mathrm{X}\) and \(\mathrm{T}\). Note that \(\mathrm{ST}\) resembles the matrix of observations employed in a one-way ANOVA, where each transformation can be considered as a different _treatment_. For a network \(\mathrm{f}\) with \(\mathrm{k}\) different activations, there are \(\mathrm{k}\) associated \(\mathrm{ST}\) matrices, which together form an \(\mathrm{m}\times\mathrm{n}\times\mathrm{k}\) cube (Figure 5). Note that the result of a traditional forward pass only provides a small subset of the values of the \(\mathrm{ST}\) matrix needed to compute a measure. Therefore the computation of the \(\mathrm{ST}\) matrix must be performed in an online fashion for practical use, to avoid storing all the activations for all the combinations of samples and transformations. In the following subsections, we define three invariance measures based on the \(\mathrm{ST}\) matrix, each based on different concepts. The ANOVA measure (section 3.3) uses the traditional analysis of variance procedure to determine whether an activation is invariant or not, treating the various transformations as treatments in an ANOVA setting. Variance-based measures [29] (section 3.4) use the common sample variance or standard deviation of each activation to quantify its invariance. Distance-based measures (section 3.6) compare the distance between activations to quantify how much they change under transformations of the inputs.

### ANOVA Measure

The Analysis of Variance (ANOVA) is a statistical hypothesis testing method [36]. It is used to analyze samples of different groups. ANOVA can establish if the means for different groups of samples, called treatments, are statistically similar. While ANOVA is a parametric method, it has mild assumptions and is robust to violations of normality, especially with large sample sizes such as those available for machine learning datasets [36]. One-way ANOVA employs a matrix of observations that contains \(\mathrm{n}\) rows and \(\mathrm{m}\) columns, where each row corresponds to a sample and each column to a treatment that has been applied independently. The null hypothesis in ANOVA states that the means are the same for the different treatments/columns. Hence, we can adapt the interpretation of the method for invariance testing. The \(\mathrm{ST}\) matrix can be interpreted as the matrix of observations in one-way ANOVA, where each treatment is a different transformation.

Figure 5: a) Cube of activation values for a network. b) The result of a forward pass for a single sample and transformation. c) Slice of the activations cube that corresponds to the \(\mathrm{ST}\) matrix of a single activation.

Therefore, the null hypothesis is equivalent to invariance, since transformations do not affect the activation. On the other hand, if the null hypothesis is rejected for an activation, then it is not invariant.
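As a concrete illustration of this procedure, the following minimal sketch builds the ST matrix of a single toy activation (equation 4) and applies one-way ANOVA over its columns. The toy activation, the synthetic images with a vertical intensity gradient, and the use of NumPy/SciPy are assumptions made only for this example:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

def activation(x):
    # toy scalar "activation": mean of the top half of the image; it is not
    # rotation-invariant when images have a vertical intensity gradient
    return x[:4, :].mean()

# n = 100 noisy images with a vertical intensity gradient
X = [rng.standard_normal((8, 8)) + np.linspace(0, 1, 8)[:, None] for _ in range(100)]
T = [lambda im, k=k: np.rot90(im, k) for k in range(4)]  # m = 4 rotations

# ST[i, j] = a(t_j(x_i)), equation (4)
ST = np.array([[activation(t(x)) for t in T] for x in X])  # shape (n, m)

# one-way ANOVA with the m transformations as treatments (columns of ST);
# a low p-value rejects the null hypothesis of invariance
stat, p = f_oneway(*[ST[:, j] for j in range(ST.shape[1])])
print(f"F = {stat:.2f}, p = {p:.3g}")  # low p here: the activation is variant
```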
We define the **ANOVA** measure (AM) simply as the application of the one-way ANOVA procedure to the ST matrix of each activation, independently. Therefore, the only parameter of the measure is the significance level of the test, \(\alpha\). While the ANOVA tests are independent for each activation, the number of activations in a NN is quite large. Therefore, in the cases where the invariance of various activations is evaluated in tandem, we must apply a Bonferroni correction [36] to \(\alpha\) to account for the large number of corresponding hypothesis tests. Finally, in order to be consistent with the rest of the measures, when the ANOVA measure detects a variant activation (rejection of the null hypothesis) it outputs 1 as a result, and 0 otherwise. The choice of invariance as the null hypothesis can be considered strange, since in general it is a property models are not assumed to have a priori. However, in many cases, the models that are being measured have been trained or designed for invariance. Therefore, while both approaches have their merits, using invariance as the null hypothesis can be more appropriate in many cases. The computation of the ANOVA requires two iterations over the ST matrix. The first iteration computes the means for each treatment/transformation; the second, the within-group sum of squares. Both iterations are performed iterating first over transformations and then over samples, that is, iterating over the rows of the ST matrix.

### Variance-based Invariance Measures

Variance-based invariance measures use the variance of an activation as an approximate notion of invariance. The variance \(\mathrm{Var}\) is a function with range \([0,\infty)\). Therefore, we can consider an activation as exactly invariant to a set of transformations \(\mathrm{T}\) if its variance is 0 with respect to the input \(\mathrm{x}\) after being transformed by elements of \(\mathrm{T}\). Values greater than 0 indicate different degrees of lack of invariance. Since the variance with respect to the transformations depends on the scale of the activations, which can vary between activations, layers, and datasets, we measure two different sources of variance, the Transformation Variance and the Sample Variance, computed by varying the transformations and the samples, respectively. Afterward, these two variances are combined to obtain the Normalized Variance measure, which is dimensionless and can be used to compare the invariance of different activations. The following sections describe the three measures and their relationship.

#### 3.4.1 Transformation Variance Measure

The **Transformation Variance** (TV) of an activation a is defined as its mean variance, where the mean is computed over the set of samples and the variance over the set of transformations. This is equivalent to computing the average variance of each row in the ST matrix (equation 5).

\[\mathrm{TV}=\mathrm{Mean}\left(\begin{bmatrix}\mathrm{Var}(\mathrm{ST}[1,:])\\ \vdots\\ \mathrm{Var}(\mathrm{ST}[\mathrm{n},:])\end{bmatrix}\right) \tag{5}\]

Where: * \(\mathrm{ST}[\mathrm{i},:]=\left[\begin{smallmatrix}\mathrm{ST}[\mathrm{i},1]&\cdots&\mathrm{ST}[\mathrm{i},\mathrm{m}]\end{smallmatrix}\right]\) is a vector containing row \(\mathrm{i}\) of \(\mathrm{ST}(\mathrm{a})\).
* \(\mathrm{Var}(\left[\begin{smallmatrix}\mathrm{x}_{1}&\cdots&\mathrm{x}_{n}\end{smallmatrix}\right])=\frac{\sum_{i=1}^{n}(\mathrm{x}_{i}-\bar{\mathrm{x}})^{2}}{\mathrm{n}-1}\) is the standard sample variance defined over a vector of observations \(\left[\begin{smallmatrix}\mathrm{x}_{1}&\cdots&\mathrm{x}_{n}\end{smallmatrix}\right]\). * \(\bar{\mathrm{x}}=\mathrm{Mean}(\left[\begin{smallmatrix}\mathrm{x}_{1}&\cdots&\mathrm{x}_{n}\end{smallmatrix}\right])=\frac{\sum_{i=1}^{n}\mathrm{x}_{i}}{\mathrm{n}}\) is the standard sample mean. Each row \(\mathrm{i}\) of the ST matrix contains the activations for sample \(\mathrm{x}_{i}\) and all transformations (equation 4 and figure 4). Therefore, the variance is computed over the activations for different transformations, and the mean over samples. Should the activation be completely invariant to \(\mathrm{T}\), all the values in each row would be equal and therefore the variance of each row would be 0. In this way, if an activation is invariant to transformations then its _transformational_ variance is 0. Figure 7 (a) shows a visualization of the values of the Transformation Variance for all activations of the network in terms of a specialized heatmap. The heatmap is organized first by columns, where each column corresponds to a layer. Within each layer/column, there is a cell for each activation of the layer. Cell colors correspond to different levels of invariance. Values in green indicate outliers. Each layer/column can have a different number of activations and therefore rows. Since the order of activations within each layer is arbitrary, there is no row-structure in the image. Only layers of activation functions have the same structure as the previous layer, because they act element-wise. For layers that output a set of feature maps (convolutional, max-pooling, etc.), we calculate the mean invariance over the spatial dimensions of each feature map. Therefore, if a layer's activation has size \(\mathrm{H}\times\mathrm{W}\times\mathrm{C}\), this tensor is collapsed into a C-sized vector (see section 3.5 for more details). Using Welford's running mean and variance equations [37], computing the Transformation Variance requires a single iteration through the rows of ST, and therefore the running time is \(\mathcal{O}(\mathrm{k}\times\mathrm{n}\times\mathrm{m})\). The Transformation Variance is measured in units of the activation a. When \(\mathrm{TV(a)}=0\) the activation is invariant to transformations. However, if \(\mathrm{TV(a)}>0\) there is no clear interpretation for that value, since the unit of the activation depends on both the samples employed and the parameters of the model, and may therefore vary considerably, as can be observed in Figure 7 (a). To neutralize this unwanted source of variability, we can normalize the Transformation Variance values. We propose using the Sample Variance, as defined below, to divide the Transformation Variance and obtain a Normalized Variance measure.

#### 3.4.2 Sample Variance Measure

The **Sample Variance** (SV) is the conceptual transpose of the Transformation Variance. It is equivalent to computing the Transformation Variance on the transpose of the ST matrix. Therefore, instead of computing the variance of each row (variance over transformations) and then the mean of the result, the Sample Variance is obtained by first computing the variance of each column (variance over samples with the same transformation), and then the mean over the resulting row vector (figure 8).
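Continuing the sketch above (and reusing its `ST` array), both measures reduce to axis-wise variances of the ST matrix; the Welford-style single pass shown below is only needed when ST is too large to keep in memory:

```python
import numpy as np

# in-memory computation: TV (equation 5) and SV (equation 6, defined next)
TV = ST.var(axis=1, ddof=1).mean()  # variance over transformations, mean over samples
SV = ST.var(axis=0, ddof=1).mean()  # variance over samples, mean over transformations

def welford_variance(stream):
    # single-pass (online) sample variance, as in Welford's method [37]
    count, mean, m2 = 0, 0.0, 0.0
    for v in stream:
        count += 1
        delta = v - mean
        mean += delta / count
        m2 += delta * (v - mean)
    return m2 / (count - 1)

TV_online = np.mean([welford_variance(row) for row in ST])
assert np.isclose(TV, TV_online)
```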
Figure 6: Calculation of the Transformation Variance measure for \(\mathrm{n}=5\) samples and \(\mathrm{m}=4\) transformations. First, (1) the variance of each row (over transformations) is calculated; then, (2) the mean of the resulting values (over samples).

Figure 7: (a) Transformation Variance, (b) Sample Variance and (c) Normalized Variance of each activation of the ResNet model. The transformation set for this example is a set of 16 rotations distributed uniformly between 0° and 360°, and the dataset is the test set of CIFAR10.

Equation 6 shows the formal definition of the Sample Variance for an activation a in terms of its ST matrix.

\[\mathrm{SV}=\mathrm{Mean}\left(\left[\,\mathrm{Var}(\mathrm{ST}[:,1])\quad\cdots\quad\mathrm{Var}(\mathrm{ST}[:,\mathrm{m}])\,\right]\right) \tag{6}\]

While the Transformation Variance measures the variance due to the transformations of the samples, the Sample Variance measures the variance due to the natural variability of the domain. Figure 7 (b) shows the results of calculating the Sample Variance as a heatmap. Note that the order of magnitude of the values of the Sample Variance is similar to that of the Transformation Variance, and also depends on the layer and activation. Using running mean and variance equations, computing the Sample Variance requires a single iteration through the columns of ST, and therefore the running time is also \(\mathcal{O}(\mathrm{k}\times\mathrm{n}\times\mathrm{m})\).

#### 3.4.3 Normalized Variance Measure

The **Normalized Variance** (NV) is simply the ratio between the Transformation Variance (equation 5) and the Sample Variance (equation 6) 2:

Footnote 2: Note that the Normalized Variance measure corresponds to the V measure previously described by the authors in [29]

\[\mathrm{NV(a)}=\frac{\mathrm{TV(a)}}{\mathrm{SV(a)}}=\frac{\frac{1}{n}\sum_{i=1}^{\mathrm{n}}\mathrm{Var}(\mathrm{ST(a)[i,:]})}{\frac{1}{\mathrm{m}}\sum_{i=1}^{\mathrm{m}}\mathrm{Var}(\mathrm{ST(a)[:,i]})} \tag{7}\]

The Normalized Variance is therefore a ratio that weights the variability due to the transformations against the sample variability. Since both have the same unit, the result is a dimensionless value. Figure 7 (c) shows the result of the Normalized Variance measure. The computation of NV requires only two loops over the ST matrix: a transformation-first loop to compute the Transformation Variance, and a samples-first loop to compute the Sample Variance. Therefore, its running time is also \(\mathcal{O}(\mathrm{k}\times\mathrm{n}\times\mathrm{m})\).

_Interpretation of values of the Normalized Variance._ We can analyze the possible values of NV as follows: * If \(\mathrm{NV(a)}=0\), then \(\mathrm{TV(a)}=0\) and the activation is clearly invariant. * If \(\mathrm{NV(a)}<1\), the variance due to the transformations is less than that due to the samples, and so we can consider the activation to be approximately invariant. * If \(\mathrm{NV(a)}>1\) the same reasoning applies, but with the opposite conclusion. * If \(\mathrm{NV(a)}=1\), then both variances are in equilibrium, and there is no distinction between sample and transformation variability. In this case, it is possible that the dataset/domain naturally contains transformed samples, or simply that the model was trained in such a way that these values are similar. Note that we can only interpret NV(a) in terms of relative invariance. For example, if \(\mathrm{NV(a)}=0.5\), then the sample variance is twice the transformation variance.
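The normalization itself is then a single division; a minimal sketch (reusing `TV` and `SV` from above) that also encodes the conventions for the degenerate cases discussed next could read:

```python
import numpy as np

def normalized_variance(TV, SV):
    # NV = TV / SV (equation 7), with the conventions for special cases:
    # dead activations (TV = SV = 0) -> NV = 1; SV = 0 but TV > 0 -> NV = +inf
    if SV == 0.0:
        return 1.0 if TV == 0.0 else np.inf
    return TV / SV

NV = normalized_variance(TV, SV)  # NV < 1 suggests approximate invariance
```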
Figure 8: Calculation of the Sample Variance measure for \(\mathrm{n}=5\) samples and \(\mathrm{m}=4\) transformations. First, (1) the variance of each column (over samples) is calculated; then, (2) the mean of the resulting values (over transformations).

_Special cases in the computation of the Normalized Variance measure._ In cases where both \(\mathrm{TV(a)}=0\) and \(\mathrm{SV(a)}=0\), we have the very definition of dead activations, which do not respond to any pattern. Therefore these have no use for the network, and so we set \(\mathrm{NV}=1\) as a compromise. Alternatively, when \(\mathrm{SV(a)}=0\) but \(\mathrm{TV(a)}>0\) we set \(\mathrm{NV(a)}=+\infty\). This case is similar to the previous one, but here the transformations do make the activation vary, possibly because they generate samples outside of the original distribution, and therefore indicate anomalous behavior in the activation. Both cases can occur in common datasets, especially if they are synthetic or have been heavily preprocessed. For example, if all images in a dataset contain a black background and centered objects, it is likely that activations corresponding to the borders of feature maps, especially in the first layers, are trained to expect black pixels. However, transformations such as scaling or translation can cause a shift in distribution, so that the borders of some transformed images now contain non-black pixels. Hence the feature maps will possibly have non-zero activations at their edges, so that \(\mathrm{SV(a)}\approx 0\) but \(\mathrm{TV(a)}>0\).

_Alternative definitions of \(\mathrm{NV}\)._ The definition of \(\mathrm{NV}\) may seem arbitrary at first. Indeed, it would be possible to further transform the result of the measure, for example using a logistic function to squash it to the interval \([0,1)\), so that \(\mathrm{NV(a)}=0.5\) indicates equilibrium between \(\mathrm{TV}\) and \(\mathrm{SV}\). We deliberately choose not to do this in order to preserve the notion that we lack a proper theory of approximate invariance to interpret these values. Also, we preserve the definition in which \(\mathrm{NV(a)}=0\) indicates invariance, since it reinforces the notion that we are calculating the _variance_ of the activations. We considered other alternatives such as \(\mathrm{NV(a)}=\mathrm{logistic(TV(a)-SV(a))}\), but prefer the definition in which \(\mathrm{NV}\) can be interpreted as a ratio. Since the variance or standard deviation depends on the scale of the activations, using the difference \(\mathrm{TV(a)-SV(a)}\) would still tie the value of the measure to that scale. The coefficient of variation \(\frac{\sigma}{\mu}\) could also be a viable alternative, but it would pose difficulties for the many networks that are designed so that \(\mu=0\) for their activations. Finally, for numerical stability reasons, we can replace the variance with the standard deviation in the actual computation of the measure, with only a slight overhead and no difference in its interpretation.

### Measure specialization for Feature maps

Some types of layers can require a specialization of the measures to obtain more useful results. Convolutional layers currently provide state-of-the-art performance for several types of data, including images. We describe a specialization of the variance measures for 2D convolutional layers since these are typically used with images; generalizing to 1D or ND convolutions is simple from this particular case.
Typical 2D convolutional layers output \(\mathrm{K_{f}}\) feature maps, each of size \(\mathrm{H}\times\mathrm{W}\). Therefore, the number of individual activations is \(\mathrm{K_{f}}\times\mathrm{H}\times\mathrm{W}\), which can be considerably large for inputs with high spatial resolution. More importantly, the activations of a feature map have a spatial structure, which if ignored can yield uninteresting or incorrect results. For example, for object classification, the borders of the feature map usually yield little information, and analyzing them individually for invariance can be misleading or uninteresting. Alternatively, we can measure the variance of feature maps by first aggregating the variance over the spatial dimensions. Given a feature map \(\mathrm{F}\) of size \(\mathrm{H}\times\mathrm{W}\) such that \(\mathrm{F(i,j)}\) is the activation at the \(\mathrm{i,j}\) spatial coordinates, we can define \(\mathrm{TV(F)}\) and \(\mathrm{SV(F)}\) as (equation 8):

\[\begin{split}\mathrm{TV(F)}&=\frac{1}{\mathrm{H}\times\mathrm{W}}\sum_{\mathrm{i}=1}^{\mathrm{H}}\sum_{\mathrm{j}=1}^{\mathrm{W}}\mathrm{TV(F(i,j))}\\ \mathrm{SV(F)}&=\frac{1}{\mathrm{H}\times\mathrm{W}}\sum_{\mathrm{i}=1}^{\mathrm{H}}\sum_{\mathrm{j}=1}^{\mathrm{W}}\mathrm{SV(F(i,j))}\\ \mathrm{NV(F)}&=\frac{\mathrm{TV(F)}}{\mathrm{SV(F)}}\end{split} \tag{8}\]

Note that in the case of the NV measure, the aggregation is done at the level of the TV and SV measures, before the normalization. The alternative definition \(\mathrm{NV}_{\mathrm{after}}\) (equation 9) is also possible, but introduces potential problems, since individual ratios \(\frac{\mathrm{TV(F(i,j))}}{\mathrm{SV(F(i,j))}}\) can vary wildly and produce less consistent values.

\[\mathrm{NV}_{\mathrm{after}}(\mathrm{F})=\frac{1}{\mathrm{H}\times\mathrm{W}}\sum_{\mathrm{i}=1}^{\mathrm{H}}\sum_{\mathrm{j}=1}^{\mathrm{W}}\mathrm{NV(F(i,j))} \tag{9}\]

We chose to aggregate the variances of the feature map via the mean of the measures of each activation in the feature map, so that \(\mathrm{TV(F)}\) and \(\mathrm{SV(F)}\) represent its mean variance. Alternatives for the aggregation could include the max, min, sum or other such functions. Since feature maps are generally sparse [38], and given that filters may be active only in certain spatial regions, aggregating activations via the min of the activations instead of the mean would significantly underestimate the variance of the feature map; choosing the max may cause the opposite problem. Note that for the NV measure, the sum would yield the same result as the mean (equation 8), since the \(\frac{1}{\mathrm{H}\times\mathrm{W}}\) factors cancel in the ratio, but not for TV or SV; in those cases, using the mean might be useful to disentangle the value of the measure from the size of the feature map.

### Distance-based Invariance Measures

The previously shown variance-based measures use the variance as an indicator of invariance, quantifying deviations from the mean. This implies the assumption that the distribution of activations is roughly unimodal. However, in some cases, that assumption may be incorrect. For example, when the samples are drawn from different classes of objects, some activations may be triggered only by some subset of the classes. Distance-based measures are similar to the variance-based methods, but instead of calculating the variance, they employ a distance function between activations for different transformations or samples.
By computing the distance between all pairs of activations, there are no assumptions of unimodality, and an appropriate distance function can be employed for different types of activations. In the same way as with the variance measures, we define the Transformation Distance (TD), Sample Distance (SD) and Normalized Distance (ND) measures using distance functions as follows (equation 10):

\[\begin{split}\mathrm{TD(a)}&=\mathrm{Mean}\left(\left[\,\mathrm{D(ST[1,:])}\,\cdots\,\mathrm{D(ST[n,:])}\,\right]\right)\\ \mathrm{SD(a)}&=\mathrm{Mean}\left(\left[\,\mathrm{D(ST[:,1])}\,\cdots\,\mathrm{D(ST[:,m])}\,\right]\right)\\ \mathrm{ND(a)}&=\frac{\mathrm{TD(a)}}{\mathrm{SD(a)}}\end{split} \tag{10}\]

Where D computes the mean distance between all pairs of values of a vector. Given an arbitrary distance function \(\mathrm{d}:\mathrm{R}^{2}\rightarrow\mathrm{R}\), D is defined as:

\[\mathrm{D(}[\,\mathrm{x_{1}}\,\cdots\,\mathrm{x_{n}}\,]\,)=\frac{\sum_{\mathrm{i=1}}^{\mathrm{n}}\sum_{\mathrm{j=1}}^{\mathrm{n}}\mathrm{d(x_{i},x_{j})}}{\mathrm{n}^{2}} \tag{11}\]

The distance measures calculate the average pairwise distances between activations either row-wise (Transformation Distance) or column-wise (Sample Distance), analogously to the Transformation Variance and Sample Variance measures. If an activation is completely invariant to a transformation, the mean distance between the activations for all transformations of a sample (\(\mathrm{D(ST[i,:])}\)) will be 0. Therefore the Transformation Distance will be 0, and so will the Normalized Distance. If the activation is only approximately invariant, the mean distance between the transformed versions of a sample quantifies the deviation, and its ratio to the mean distance between samples quantifies the degree of invariance.

_Approximation of the average distance._ The full computation of all distances between transformations and samples can be prohibitive. Such a distance matrix would have size \(\mathrm{n}\times\mathrm{n}\) for the Sample Distance and \(\mathrm{m}\times\mathrm{m}\) for the Transformation Distance. While the mean distance does not require storing all distance values, it does require storing all k activations. As discussed before, the computation of the ST matrix must be done online. Therefore, we must employ an approximation to compute the mean distances. Since looping over the ST matrix is done by batches, it is straightforward to only compute distances between samples in the same batch. In this way, we approximate the mean of the full distance matrix by only computing the mean over the diagonal blocks of the distance matrix (one block per batch). A more principled approach would involve computing a low-rank approximation of the full euclidean distance matrix and then computing the mean distances [39]. However, the current best randomized algorithms for a single matrix are \(\mathcal{O}(\mathrm{n}+\mathrm{m})\) [40], which would render the computation of the measure impractical given a large number k of activations.

#### 3.6.1 Running time

Analogously to the Normalized Variance case, the computation of the Normalized Distance requires two iterations over the ST matrix, one for the Transformation Distance and another for the Sample Distance. Given that in this case we are computing distances between all elements of a batch, the algorithm is \(\mathcal{O}(\mathrm{b}^{2}\times\frac{\mathrm{n}\times\mathrm{m}}{\mathrm{b}}\times\mathrm{k})\), where \(\mathrm{b}\) is the batch size and \(\mathrm{k}\) the number of activations.
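A minimal sketch of the batched approximation of the Sample Distance for a single activation, reusing the `ST` array from the earlier sketches and assuming the squared euclidean distance (in practice the rows of ST would be streamed batch by batch rather than stored):

```python
import numpy as np

def mean_pairwise_sq_dist(v):
    # D([x1 .. xb]) = sum_ij (xi - xj)^2 / b^2: equation 11 with d squared euclidean
    diffs = v[:, None] - v[None, :]
    return (diffs ** 2).mean()

def sample_distance_batched(ST, batch_size=32):
    # approximate SD by averaging pairwise distances only within batches,
    # i.e., over the diagonal blocks of the full n x n distance matrix
    n, m = ST.shape
    block_means = []
    for start in range(0, n, batch_size):
        block = ST[start:start + batch_size]  # (b, m) rows of ST in this batch
        block_means.extend(mean_pairwise_sq_dist(block[:, j]) for j in range(m))
    return float(np.mean(block_means))

SD_approx = sample_distance_batched(ST)
# with the squared euclidean distance, SD is roughly twice the
# Sample Variance (see equation 12 below)
```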
Distance-based measures have an additional runtime factor \(\mathrm{b}\) compared with variance-based measures. The greater the value of \(\mathrm{b}\), the better the approximation, but also the greater the running time and storage, since the computation of the mean pairwise distance is \(\mathcal{O}(\mathrm{b}^{2})\). Figure 9 shows examples of RAM usage for typical networks. The batch size \(\mathrm{b}\) is strongly limited by the amount of RAM required to store the activations, whether in CPU or GPU. Therefore, the approximation is, in principle, limited by the storage capacity. Nevertheless, several passes can be made to improve the approximation if required. By changing the order of iteration of the samples or transformations and aggregating the results of each pass, we can effectively sample more values of the distance matrix. Consequently, distance-based measures can choose the batch size independently from the desired number of samples for the approximation.

_Relationship between variance and squared euclidean distance._ In the case of the squared euclidean distance, it is well known that the variance and distance computations are equivalent, and so are the measures, as per equation 12. In this particular case, we can avoid the approximation introduced by sparsely sampling the distance matrix by simply computing the variance-based measures.

Figure 9: Maximum RAM usage for the calculation of the Normalized Variance and Normalized Distance, assuming _float32_ precision for activations.

\[\begin{split}\text{D}([\begin{smallmatrix}\text{x}_{1}&\text{...}&\text{x}_{n}\end{smallmatrix}])&=\frac{\sum_{i=1}^{\text{n}}\sum_{j=1}^{\text{n}}(\text{x}_{i}-\text{x}_{j})^{2}}{\text{n}^{2}}\\ &=2\text{n}\,\frac{\sum_{i=1}^{\text{n}}(\text{x}_{i}-\text{Mean}([\begin{smallmatrix}\text{x}_{1}&\text{...}&\text{x}_{n}\end{smallmatrix}]))^{2}}{\text{n}^{2}}\\ &=2\,\frac{\sum_{i=1}^{\text{n}}(\text{x}_{i}-\text{Mean}([\begin{smallmatrix}\text{x}_{1}&\text{...}&\text{x}_{n}\end{smallmatrix}]))^{2}}{\text{n}}\\ &=2\,\text{Var}([\begin{smallmatrix}\text{x}_{1}&\text{...}&\text{x}_{n}\end{smallmatrix}])\end{split} \tag{12}\]

where, in the last step, the variance is computed with denominator \(\mathrm{n}\) rather than \(\mathrm{n}-1\); for large \(\mathrm{n}\) the difference is negligible.

_Feature maps._ As mentioned before, measuring the invariance of each activation of a feature map may be undesirable, since the feature map has a spatial structure. The method described for the Normalized Variance measure to calculate the variance of feature maps (Section 3.5) is also valid for the distance measures (equation 13). That is, we can calculate the distance measure of each individual activation \(\text{ND}(\text{F}(\text{i},\text{j}))\) in a feature map \(\text{F}\) and then average the individual measures.

\[\begin{split}\text{TD}(\text{F})&=\frac{1}{\text{H}\times\text{W}}\sum_{i=1}^{\text{H}}\sum_{j=1}^{\text{W}}\text{TD}(\text{F}(\text{i},\text{j}))\\ \text{SD}(\text{F})&=\frac{1}{\text{H}\times\text{W}}\sum_{i=1}^{\text{H}}\sum_{j=1}^{\text{W}}\text{SD}(\text{F}(\text{i},\text{j}))\\ \text{ND}(\text{F})&=\frac{\text{TD}(\text{F})}{\text{SD}(\text{F})}\end{split} \tag{13}\]

The advantage of a distance-based measure, however, is that we can use specialized distances for each type of activation or layer. Note that these distances can now be computed not between individual activations of a feature map, but between entire feature maps. Therefore, we may employ any image-based distance measure.
For example, we can compare entire feature maps with a semantic distance measure such as the Frechet-Inception distance [41] and obtain pairwise distances between them. Let \(\text{F}\) be a feature map, and \(\text{ST}(\text{F})\) be an \(\text{n}\times\text{m}\) matrix as before, but where each element \(\text{ST}(\text{F})[\text{i},\text{j}]\in\text{R}^{\text{h}\times\text{w}}\) now consists of the feature map calculated from sample i after applying transformation j. Then we can define TD, SD and ND in a similar fashion as equation 10, but for feature maps \(\text{F}\) (equation 14).

\[\begin{split}\text{TD}(\text{F})&=\text{Mean}\left([\,\text{D}(\text{ST}(\text{F})[1,:])\ \cdots\ \text{D}(\text{ST}(\text{F})[\text{n},:])\,]\right)\\ \text{SD}(\text{F})&=\text{Mean}\left([\,\text{D}(\text{ST}(\text{F})[:,1])\ \cdots\ \text{D}(\text{ST}(\text{F})[:,\text{m}])\,]\right)\\ \text{ND}(\text{F})&=\frac{\text{TD}(\text{F})}{\text{SD}(\text{F})}\end{split} \tag{14}\]

Where now d is a distance function between feature maps (\(\text{R}^{\text{h}\times\text{w}}\)) instead of real numbers:

\[\text{D}([\begin{smallmatrix}\text{F}_{1}&\text{...}&\text{F}_{n}\end{smallmatrix}])=\frac{\sum_{i=1}^{\text{n}}\sum_{j=1}^{\text{n}}\text{d}(\text{F}_{i},\text{F}_{j})}{\text{n}^{2}} \tag{15}\]

## 4 Validation of the measures

In this section, we validate the Normalized Variance measure in terms of its ability to detect invariances in models, and compare it to the Goodfellow and ANOVA measures. We also present results that show the need for the normalization scheme in the Normalized Variance measure. Finally, we measure its sensitivity to random initializations of the models' weights, to ensure that the results of the measure do not depend unduly on the training procedure itself and the specific final weights, but on the choice of model, dataset, transformations, and other hyperparameters. We focus on the Normalized Variance measure because it is the most efficient. Also, given that the Normalized Distance can be seen as an approximation of the Normalized Variance for the squared euclidean distance, many conclusions about the Normalized Variance will be true for the Normalized Distance as well. First, we perform qualitative and quantitative experiments to determine if indeed the Normalized Variance measure is capable of measuring the desired property (invariance). Afterward, we study the general behavior of the measure in terms of its dependence on weight initialization and on the dataset and transformation set sizes. While the measures can be employed to analyze any type of model or input, in this work we focus on image classification problems with CNNs, since transformations in this domain, particularly affine transformations, are more easily understood and defined. The general methodology of our experiments (figure 10) consists of training a model with a given dataset A and a set of transformations B used for data augmentation, to force the network to acquire invariance to B during training [42; 22]. Afterward, we evaluate the measures using the trained model with another dataset A' and a set of transformations B'. We note that in many cases A=A' and B=B', so that the same datasets and transformations are used both for training and measuring. We now describe the datasets, transformations, and models used in the experiments and analyses.
All experiments can be replicated with code available at [https://github.com/facundoq/transformational_measures_experiments](https://github.com/facundoq/transformational_measures_experiments).

### Experimental setup

#### 4.1.1 Datasets

All of our experiments use MNIST and CIFAR10. Both datasets are well known, and we expect any analysis performed on them to be easy to understand and relate to existing methods. Also, both are small datasets, which eases the computational burden. While MNIST is somewhat toy-like, it provides more interpretable results. Since all models obtain an accuracy near 100% on the test set of MNIST, this dataset allows evaluating the results of the measure in a near-perfect accuracy scenario. CIFAR10, on the other hand, consists of more complex natural images that complement the analysis, since the models achieve accuracies of around 75% in most cases. Invariance can be measured using the training or test subsets, in the same way as other metrics such as accuracy or mean squared error. For invariance, however, it is not clear that the test subset is always the more appropriate one for understanding a model. The training set invariance shows what invariance the model learned during training, and the test set invariance shows how the model's invariance representation generalizes to new data from a similar distribution. Nonetheless, we have experimented with measuring invariance on the train and test subsets (not shown for brevity) and we have found that for the MNIST and CIFAR10 datasets the invariance measures yield the same results for both subsets. Therefore, we choose to measure the invariance on the standard test sets of MNIST and CIFAR10, which can provide stronger assurances on the generality of the invariance of the model.

Figure 10: General diagram for our experimental methodology. We train a model using a dataset A, applying data augmentation with transformation set B. The resulting model is therefore invariant to B. Afterwards, we measure the invariance of the trained model using dataset A’ and transformation set B’. In most experiments, the training and measuring datasets are the same (A = A’), as well as the transformation sets (B = B’).

#### 4.1.2 Transformations

We chose three common transformation sets used throughout the experiments: rotations, scalings, and translations. These sets represent common affine transformations and therefore provide a wide range of diversity, in order to establish properties of the measures independently of the specific transformations used. In all cases, the transformation sets include the identity transformation. 1. Rotation (25 transformations): rotations discretized into 25 distinct angles (including 0°). Rotations are always with respect to the center of the image. 2. Scaling (25 transformations): We scaled images by a set of 8 scale factors chosen uniformly from \((0.5,1.25)\). We generate all possible combinations of these factors, so that for each factor s, we scale the image by \((1,\mathrm{s})\), \((\mathrm{s},1)\) and \((\mathrm{s},\mathrm{s})\), where \((\mathrm{s}_{\mathrm{h}},\mathrm{s}_{\mathrm{w}})\) indicates the scaling factors for the height and width of the image, respectively. Thus, we modify the aspect ratio in \(\frac{2}{3}\) of the transformations. Note that we scale the contents of the image but the size is kept constant. When downscaling, we fill the borders with reflections instead of filling them with a constant color, to maintain the original distribution of the pixels as much as possible.
The scale factors are chosen asymmetrically (0.5 vs 1.25) since upscaling the image by more than 1.25 tends to remove important parts of the object. 3. Translation (25 transformations): We used 3 translation factors: 15%, 10% and 5%. For each translation factor t, we translate the images by \([\,(-\mathrm{t},-\mathrm{t})\ (-\mathrm{t},\mathrm{t})\ (\mathrm{t},-\mathrm{t})\ (\mathrm{t},\mathrm{t})\ (0,\mathrm{t})\ (0,-\mathrm{t})\ (-\mathrm{t},0)\ (\mathrm{t},0)\,]\), for a total of \(8\times 3=24\) non-identity translation transformations. We will refer to these simply as the rotation, scale and translation sets. Figure 11 shows examples of all sets for MNIST and CIFAR10.

Figure 11: Samples of MNIST (top row) and CIFAR10 (bottom row) for each set of transformations.

#### 4.1.3 Models

For most of the experiments we use the **SimpleConv** model, shown in figure 12. It is a simple model consisting of traditional Convolution (Conv), MaxPooling (MaxPool), and Fully Connected (FC) layers. The activation functions are all ELU, except for the final Softmax. All convolutions have \(\text{stride}=1\) and \(\text{kernel size}=3\times 3\). MaxPooling layers are 2D and use \(\text{stride}=2\) and \(\text{kernel size}=2\times 2\). SimpleConv was the simplest model we found with only these three types of layers that obtains \(\approx 80\%\) accuracy on CIFAR10, which is not state of the art, but of similar accuracy to other more complex models. Limiting the design to only these layers, applied in a feedforward fashion, also facilitates the analysis. Since our goal is not to obtain state-of-the-art accuracies, to prioritize consistency and simplicity we employed the AdamW optimizer [43] with a learning rate of \(10^{-4}\) to train the models. The number of epochs used to train each model was determined separately for every dataset and set of transformations to ensure the model converges. To this effect, a base number of epochs was chosen for every model/dataset combination. To account for the added difficulty of learning from a training set augmented with more transformations, that number of epochs was multiplied by \(\log(\text{m})\), where m is the size of the transformation set. Given that CIFAR10 is a more challenging dataset than MNIST, the models for MNIST have been modified to use half the number of filters/features of the CIFAR10 models in all layers. Since effective invariance cannot be separated from accuracy, we verified that in all cases the accuracy of the models was above \(95\%\) on MNIST, and above \(75\%\) on CIFAR10.

#### 4.1.4 Visualization

While comparing the full result of the measure (as a heatmap or other representation) between different models would yield the most detailed information, comparing individual activations of different trained models is very hard, given that their function changes for different sets of trained weights. Since our objective is to compare measures/models, and many experiments require different trained instances of the same model, we present the results of the measures aggregated by layers, which are more stable in their function. The aggregation consists of computing the mean value of the measure for all activations of each layer, and we plot the resulting means, as shown in figure 14. This aggregation, therefore, allows a high-level view of the invariance of a model, which is ideal for comparing the results of different models.

### Comparison of measures

We compare the invariance values for different measures to gauge their differences.
Figure 15 shows the Normalized Variance (NV), Goodfellow (GF) and ANOVA measures on the SimpleConv model.

Figure 12: Architecture of the SimpleConv model. The model is a typical CNN with convolutional, max-pooling and fully connected layers.

The ANOVA measure is insensitive to the degree of variance in the model: it rejects the null hypothesis in almost all cases and therefore considers all activations as variant (value of 1). We note that to compute the ANOVA measure we used a Bonferroni correction with \(\alpha=0.99\) to account for the multiple hypotheses tested. The Goodfellow measure shows less invariance for convolutional layers than for the subsequent activation function. As we show in appendix A, activation functions such as ELU never increase the variance, and therefore this suggests that the Goodfellow measure may not be a good indicator of the invariance of the model. We note that, in order to calculate the Goodfellow measure [7] in a reasonable time, we adapted the original algorithm to determine the threshold t: instead of calculating the 1% percentile, we calculate the value z* for which a normal distribution satisfies \(\mathrm{P}(\mathrm{f}\leq\mathrm{z^{*}})=0.01\). Afterward, we employ z* as the threshold t in the original algorithm. In this way, we avoid having to store all activations to calculate the percentile, which would be computationally prohibitive for large models, datasets, or transformation sets. Nonetheless, the interpretation remains the same, since this alternative may change the value of the threshold but not the way the measure is computed. The ANOVA measure is very sensitive to violations of invariance and therefore not very useful for measuring that property. However, it does serve as a principled first approach to defining an invariance measure.

Figure 14: Values of the Normalized Variance measure for the SimpleConv model trained and measured with the MNIST dataset and rotation transformations. The results are visualized via a heatmap showing the full set of values (left) and a plot showing the mean value for each layer (right).

Figure 13: Accuracies for the SimpleConv model on MNIST and CIFAR10 for the 3 sets of transformations.

### Normalization of Normalized Variance

The Transformation Variance measure by itself is a useful measure of invariance. However, its values are not normalized in any way, and comparisons between models or layers can be difficult. Figure 16 shows the results of the Transformation Variance and Sample Variance measures. The models were trained with data augmentation with a set of transformations T, and then the Transformation Variance and Sample Variance were also calculated with respect to T. The magnitudes of both measures are similar for the same layer, but quite different between datasets and mildly different between transformations. We note as well that the magnitudes vary significantly across layers. The magnitudes of the activations of convolutional layers are significantly lower than those of fully connected layers, hence the lower variance. Therefore, comparing the values of the Transformation Variance for convolutional and linear layers is difficult. The variance for models on MNIST is also much lower than the variance for models trained on CIFAR10. This is expected, given that the background of MNIST images is much more uniform, but it still makes comparisons between datasets difficult.
The observations in figure 16 confirm our claims in section 3 on the importance of normalizing the Transformation Variance to obtain values that are interpretable across layers, datasets, and transformations. ### Dependence on size of dataset and transformations We analyze the Normalized Variance measure in terms of the datasets and transformation sets used to measure it, varying their size systematically and independently. In this way, we can gain an initial understanding of the computational requirements of the measures. In this case, we consider as reference the value of the measure computed with 2304 samples and transformations. As figure 17 shows, the relative error of computing the measure with only 24 samples and transformations (\(\sim 600\) values in the ST matrix) is at most 10.6% with respect to computing it with 2304 samples and transformations (\(\sim 5300000\) values in the ST matrix). As a middle ground, choosing 385 samples and transformations yields a relative error of at most 1.5% with only \(\sim 160000\) values in the ST matrix, which seems a reasonable tradeoff. We also note that the errors always decrease when increasing either the number of samples or transformations, indicating convergence with larger sizes. This indicates that the measure is well behaved in its dependence on the size of the dataset and transformation set. Figure 15: Comparison of normalized measures Normalized Variance (NV), Goodfellow and ANOVA for the SimpleConv model. ### Correlation between invariance and measures While an invariance measure can be theoretically sound, it may not be useful in practice if it has poor sensitivity to changes in the model's invariance. However, there are no previously proposed methods or tests to validate the appropriateness of an invariance measure. In order to verify the correlation of the Normalized Variance measure, we use the fact that models trained without data augmentation have very low accuracy when evaluated on transformed samples [29]. Models trained _with_ data augmentation, on the other hand, mostly recover the lost performance. Therefore, we expect the latter models to possess more invariance in their activations, or at the very least in the final Softmax layer [42; 22]. In this way, we can determine whether the measures are sensitive to small changes in invariance, and furthermore, whether they show a positive correlation between the amount of data augmentation in training and the measured invariance. With these assumptions, we can train different models with successively more complex transformations of the same type. Then, we measure the invariance of each model with respect to the most complex of these transformations. Ideally, the measure should detect the increase of invariance in the subsequent models; a minimal sketch of this protocol is given below. Figure 18 shows the results for the different training transformation complexities for the Normalized Variance measure. The measure was always evaluated with the most complex of the transformation sets. We can observe that the Normalized Variance measure captures the increasing invariance of the model's activations. The change in invariance appears to occur mostly in the linear layers and occasionally, but less significantly, in the last convolutional layers. The Goodfellow measure (figure 19), on the other hand, fails to capture this relationship; even more, it measures _less_ invariance for models trained with data augmentation in many cases.
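The sketch below builds nested rotation sets of increasing complexity and applies them as data augmentation; the actual model training and invariance measurement are omitted, and all data is a synthetic stand-in.

```python
import numpy as np
from scipy.ndimage import rotate

def augment(images, angles, rng):
    """Rotate each image by an angle drawn from the given transformation set."""
    picks = rng.choice(angles, size=len(images))
    return np.stack([rotate(im, a, reshape=False) for im, a in zip(images, picks)])

rng = np.random.default_rng(0)
images = rng.random((16, 28, 28))                        # stand-in for MNIST digits
rotation_sets = [np.linspace(0, 360, k, endpoint=False)  # nested sets of growing complexity
                 for k in (1, 4, 8, 16)]

for angles in rotation_sets:
    batch = augment(images, angles, rng)  # one model would be trained per set
    print(len(angles), batch.shape)
# every trained model is then measured against the MOST complex set, rotation_sets[-1]
```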
### Stability of the Normalized Variance measure To study the effect of the random initialization of the model on the stability of the measure, we trained N=30 instances of each model using the same architecture, dataset, and set of transformations, until convergence. Each model was initialized with different random weights. Figure 16: Comparison of unnormalized measures Transformation Variance and Sample Variance for the SimpleConv model. The scale of the values of the measure differs with the choice of dataset and transformation set. Figure 17: Relative error between the Normalized Variance of the same trained model, type of transformations and data, but varying the size of the transformation set and dataset. The relative error is computed with respect to the values of the measure computed with the largest transformation set/dataset combination. Figure 18: Normalized Variance measure of SimpleConv models trained with different sets of transformations (including a singleton set with the identity transformation, which corresponds to no data augmentation). The models were evaluated with the most complex transformation set in each case. Afterward, each trained model was evaluated with the Normalized Variance measure. Since comparing the representations of each model instance directly is difficult [44], we perform an N-sample Anderson-Darling test to verify whether the distribution of invariance of the N models is the same or differs [45]. The tests are performed by layer, so that we obtain a p-value for each layer. A higher p-value indicates that the distribution for that layer is not the same for all models. Note that p-values are expressed in terms of type-I errors, since the null hypothesis is that all N models have the same distribution of invariance. However, given the large number of models we are comparing, the chance of finding any two models with a different distribution grows as the number of models increases. Therefore, by using a large N in this case we are also substantially reducing the chance of type-II errors. The results (figure 20) suggest that the measures vary very slightly with respect to the initialization. Only in a few cases do we encounter p-values larger than 0.1%. This indicates that the pattern of mean invariance per layer is an emergent property despite the random initialization of the network weights. Therefore, this pattern might mostly depend only on the model architecture, dataset, and transformation. In any case, the stability of the measures with respect to the initialization is a desirable property, given that it is not always computationally possible to repeatedly train models to achieve a greater degree of certainty about the invariance of a model. Also, note that the pattern of invariance does vary when changing either the transformation or the dataset. This indicates that different transformations may require different ways of encoding the invariance, and also that the invariance to a set of transformations is actually specialized for a specific dataset. This has consequences in terms of the transferability of the representations. A minimal sketch of the per-layer stability test is given below.
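The per-layer test can be sketched with SciPy's k-sample Anderson-Darling implementation; the invariance values here are synthetic placeholders for the per-unit values of one layer across the N = 30 trained instances.

```python
import numpy as np
from scipy.stats import anderson_ksamp

rng = np.random.default_rng(0)
n_models, units_per_layer = 30, 64
# per-unit invariance values of one layer, for each of the N trained models
layer_invariances = [rng.normal(0.5, 0.1, units_per_layer) for _ in range(n_models)]

# null hypothesis: all N models draw their per-unit invariance from the same distribution
result = anderson_ksamp(layer_invariances)
print(result.statistic, result.significance_level)
```

Note that SciPy caps the reported significance level to the range [0.001, 0.25], which matches the 0.1% granularity of the p-values discussed above.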
Random Weights. While evaluating the measure with different trained models indicates whether it is stable for different initializations, it is also interesting to understand the base invariance of an untrained model and whether it varies significantly for different initializations. Therefore, we measured the Normalized Variance of N = 30 different initializations of the same model. As figure 21 shows, similarly to the trained case, the values of the measure for different untrained weights do not differ significantly. This reinforces the view that the distribution of invariance of each layer is dictated mostly by the model, dataset, and transformation set, and not by the specific set of weights of the model. Figure 19: Goodfellow measure of SimpleConv models trained with different transformations (including a singleton set with the identity transformation, which corresponds to no data augmentation). The models were evaluated with the most complex transformation in each case. Figure 21: Normalized Variance measure for _untrained_ models with different initializations. Each color line in each plot corresponds to the invariance obtained by different models with the same architecture. The dashed black line shows the mean values. Vertical black bars indicate the standard deviation. Figure 20: Stability of the Normalized Variance measure for the SimpleConv model. Each color line in each plot corresponds to the invariance obtained by different models with the same architecture. The dashed black line shows the mean values. Vertical black bars indicate the standard deviation (very low in most cases). Invariance while training. To complement this view, we can attempt to understand the dynamics of _learning_ invariances. Therefore, we computed the invariance measures at regular intervals while the models were being trained. The regular intervals were chosen as percentages of the total number of epochs; we use the percentages 0%, 1%, 3%, 5%, 10%, 30%, 50%, and 100% for all datasets/transformations. Figure 22 shows the distribution of the invariance over the layers of a model while it is being trained. In all cases, we can observe that after the first one or two epochs, the invariance structure of the network remains mostly fixed. Also, this structure is different from the structure of the networks at epoch 0, that is, with random, untrained weights. The variance of the lower convolutional layers changes only slightly throughout the training. That of the final convolutional layers and the FC layers does change more significantly during training, mostly becoming smaller. In the case of MNIST, this transition is much weaker than for CIFAR10, possibly because of the different complexities of the datasets. Recalling the stability of the measure for the final values of the weights, the results indicate that, given a fixed dataset, transformation, and model, the distribution of invariance before and after training is quite similar. Therefore, training with data augmentation forces models, through the loss function, to converge to a similar invariant representation despite the initial random initialization. ## 5 Conclusions and future work Invariance can be a useful or necessary property for a network in different components of its architecture. However, there are several ways to achieve and encode invariance in a network, and there is a need to understand their differences. This work proposes invariance measures that provide important tools to investigate these encodings. The measures can be applied to any neural network model, dataset, and finite transformation set. Along with the measures, we propose experimental methods that take advantage of the measures to analyze non-trivial properties of the networks and their encodings. In this case, our measure can quantify the invariance in a simpler way that is more interpretable and sensitive than previous approaches.
Figure 22: Comparison of the Normalized Variance measures for the SimpleConv model. Each line in each plot corresponds to the same model in a different epoch. Percentages indicate the corresponding test set accuracy in that epoch. For MNIST, train percentages 0% and 1% were mapped to epoch 0, and 5% and 10% to epoch 1, because of the low number of epochs. Our main findings show that the measures are indeed able to effectively quantify the invariance of a network. Also, we have shown that the measures are efficient in terms of sample and transformation set sizes, and stable with respect to the random initialization of network weights. Furthermore, our first analysis of a simple CNN model, made invariant by data augmentation, revealed interesting facts about this architecture. For instance, the structure of the invariance of the layers converges to the same distribution, even when the network's weights are initialized randomly. Also, the invariance structure of randomly initialized networks without training is very similar. This indicates that the encoding of invariance achieved by data augmentation can be stable and reliable. In future work, we expect to expand the set of measures following a similar approach to also quantify same-equivariance and equivariance. Another limitation of the presented measures is that they focus on individual activations to compute the measures, and aggregate these results to perform global analysis in a simple fashion. It would be interesting to extend these measures to allow them to automatically detect the transformational structures of the network in terms of groups of activations. This would allow the analysis with intermediate granularities that lie between analyzing a whole network, layer, or individual activations. Finally, while the measures apply to any type of neural networks, transformations and datasets, in this work we restricted ourselves to well-known instances of these objects, such as CNNs, affine transformations, and the MNIST and CIFAR datasets. Therefore, we are also interested in widening the set of models, transformations, and domains in which to apply the measure, ranging outside the set of affine transformations and image classification problems. ## Acknowledgments The Titan X Pascal used for this research was donated by the NVIDIA Corporation.
2308.12722
Accelerated Neural Network Training through Dimensionality Reduction for High-Throughput Screening of Topological Materials
Machine Learning facilitates building a large variety of models, starting from elementary linear regression models to very complex neural networks. Neural networks are currently limited by the size of data provided and the huge computational cost of training a model. This is especially problematic when dealing with a large set of features without much prior knowledge of how good or bad each individual feature is. We try tackling the problem using dimensionality reduction algorithms to construct more meaningful features. We also compare the accuracy and training times of raw data and data transformed after dimensionality reduction to deduce a sufficient number of dimensions without sacrificing accuracy. The indicated estimation is done using a lighter decision tree-based algorithm, AdaBoost, as it trains faster than neural networks. We have chosen the data from an online database of topological materials, Materiae. Our final goal is to construct a model to predict the topological properties of new materials from elementary properties.
Ruman Moulik, Ankita Phutela, Sajjan Sheoran, Saswata Bhattacharya
2023-08-24T11:51:20Z
http://arxiv.org/abs/2308.12722v1
Accelerated Neural Network Training through Dimensionality Reduction for High-Throughput Screening of Topological Materials ###### Abstract Machine Learning facilitates building a large variety of models, starting from elementary linear regression models to very complex neural networks. Neural networks are currently limited by the size of data provided and the huge computational cost of training a model. This is especially problematic when dealing with a large set of features without much prior knowledge of how good or bad each individual feature is. We try tackling the problem using dimensionality reduction algorithms to construct more meaningful features. We also compare the accuracy and training times of raw data and data transformed after dimensionality reduction to deduce a sufficient number of dimensions without sacrificing accuracy. The indicated estimation is done using a lighter decision tree-based algorithm, AdaBoost, as it trains faster than neural networks. We have chosen the data from an online database of topological materials, Materiae. Our final goal is to construct a model to predict the topological properties of new materials from elementary properties. ## I Introduction Topology is a modern branch of mathematics that emerged in the early 1900s [1]. It is a qualitative approach to geometry, to measure the properties of objects that remain unchanged when subjected to continuous deformation. It has been of great interest in the field of condensed matter physics [2; 3] in recent years, mainly because of the discovery of topological insulators (TI) [4; 5; 6] and topological crystalline insulators (TCI) [7; 8; 9]. They are an entirely new class of quantum materials that act as an insulator in bulk but have conducting surface states. Those surface states have a Dirac-cone-like structure which facilitates dissipationless electronic currents [10; 11]. This allows TIs and TCIs to have plenty of novel applications, inspiring the search for new materials of such kind with varying properties that suit different applications. The surface states are protected by time-reversal symmetry in the case of TIs, and various crystal symmetries in the case of TCIs [12; 13; 14; 15; 16]. Traditionally TIs have been characterised by their non-zero \(Z_{2}\) invariant, while TCIs have been characterised by their non-zero Chern numbers [17]. Therefore, the prediction of topological materials requires tedious calculations of topological invariants or identification of topological nodes [18]. Such methods are limited to one material at a time [19]. Recently, the development of symmetry-based indicators [20] has enabled high-throughput screening for topological materials. Notably, the SymTopo [21] package was developed based on the above paradigm and has allowed high-throughput screening to search for new topologically non-trivial materials. The results of these calculations are available publicly in the Materiae [22] database through the magic of application programming interface (API). However, looking at the Materials Project [23] database, it is quickly realised that the number of unexplored materials is much greater than the ones explored for topological properties. The symmetry-based approaches only minimise the band structure calculations needed to predict topological behaviour but do not eliminate the need for such first-principles analyses. Such methods cannot hope to cover such a vast database in reasonable time and computational costs. 
The high resource cost calls for a predictive approach that would use previously calculated data, bringing us to Machine Learning (ML). In recent years ML has taken the world by storm, finding a plethora of applications in every field possible like image and speech recognition [24; 25; 26], weather and traffic predictions [27; 28], and medical diagnosis [29; 30]. The basic idea of ML is to take pre-existing data and develop complex models to extrapolate them, a kind of fancy curve-fitting. There are various kinds of available predictive models, such as, regression [31], support vector machines [32], decision trees [33] and random forests [34] among which artificial neural networks (ANN) [35; 36; 37] are the most complex but also result in the best predictions. However, their substantial training expense poses a significant constraint which is noteworthy considering the large number of features we are aiming to train on. In this article, an attempt has been made to address the aforementioned issue using dimensionality reduction [38] to streamline the number of features we use to train our model. Finally models are trained for predicting TIs and TCIs in various materials systems using ANN. The models are used to predict respective properties for materials taken from the Materials Project database. Electronic structure calculations are performed on the predicted materials to validate the accuracy of our models. ## II Methodology & Analysis ### Data acquisition A flowchart of the methodology followed to construct predictive models using neural networks is illustrated in Fig. 1. ML is a data-driven science, meaning a substantially sized topological property dataset is needed to construct a reliable prediction model. Topological properties can be calculated from _ab initio_ methods, viz., density functional theory (DFT) [39] taking spin-orbit coupling (SOC) into account and using various symmetry-based indicators [20]. These calculations have been performed on a lot of materials by researchers worldwide. However, the collection and access of the same can be cumbersome. Our work ahead is made more accessible by databases that archive such calculations and present them in an easily accessible format. Materials Project [23] is a growing database of materials containing structural data, electronic structures, various physical properties, etc., calculated using DFT. This database has been integrated with web-based technologies to allow the retrieval of data through API. A Python library has been developed to use the API easily and is available with the name mp-api. Materiae [22] is a database dedicated to storing topological data. It uses structures from the Materials Project and an automated algorithm, SymTopo [21], based on exhaustive mappings between symmetry representations of occupied bands and topological invariants. Their database also supports the use of APIs, and the standard Python requests library is used. As of 15th February 2023, Materials Project had 154,718 materials, while Materiae had only 28,605 members in its catalogue. This size mismatch indicates how much of the available database is yet to be explored for topological properties. ### Site distinction As the intention is to train models using elemental properties as features, similar materials related by site substitutions are grouped together, and separate models are made for individual groups. 
For this purpose, we do a broad classification based on their space group (SG), number of sites (\(N_{s}\)), and number of distinct sites (\(N_{d}\)) of the compounds. To ensure that models are trained upon sizeable populations, the number of TIs is checked from Materiae for each group, and ones with sufficiently large numbers are chosen, as tabulated in Table 1. For the sake of being stricter on the condition of substitutions, the sites are also checked by their Wyckoff positions, considering transformations between them. Atoms with the same Wyckoff positions across different materials are considered substitutions of each other. The different sites and considered Wyckoff schemes are tabulated in Table 2. However, binary systems (\(N_{d}=2\)) are treated differently as their sites can always be interconverted by suitable transformations. Such cases are distinguished by using electronegativities to order the respective elements. This concept is also extended to systems with more than two sites but multiple sites having the same Wyckoff positions, like the system with SG = 12;\(N_{s}=12\). Compounds with a different Wyckoff position scheme from the majority are discarded. \begin{table} \begin{tabular}{|l|l|r|} \hline **SG** & \(N_{s}\) & \begin{tabular}{c} **No. of TIs** \\ **and TCIs** \\ \end{tabular} \\ \hline 62 & 12 & 129 \\ \hline 139 & 5 & 130 \\ \hline 189 & 9 & 96 \\ \hline 225 & 2 & 64 \\ \hline 221 & 2 & 53 \\ \hline 221 & 5 & 43 \\ \hline \end{tabular} \end{table} Table 1: Number of TIs and TCIs in each chosen group of compounds. Figure 1: Flowchart of the methodology followed to construct predictive models using neural networks. ### Construction of feature space Previous attempts to predict properties of materials from elementary properties of atoms provide insight into constructing our feature space [40; 41; 42; 43]. The idea is to build an ample feature space to accommodate as many varied properties as possible and then let various ML methods choose the significant ones for our data. A total of 12 features for each atom are selected from our survey. They are electronegativity, atomic number, ionisation energy, electron affinity, atomic radius, number of valence electrons, Matynov-Batsanov electronegativity [40], pseudopotential core radius, and number of electrons in the outermost _s-_, _p-_, _d-_ and _f-_orbitals. This translates to 24 features for \(N_{d}=2\) and 36 features for \(N_{d}=3\). The problem we are addressing is a classification problem of topologically active materials. We consider those classified on Materiae as TI or TCI as "true" and others as "false", and our predictions follow the same classification. ### Normalisation and dimensionality reduction The ideal data distribution is usually a normal curve. Most predictive models have better accuracy when applied to normal distributions. A power transformer transforms the data to have zero-mean, unit-variance satisfying the condition. The Yeo-Johnson method [44] implemented in the scikit-learn [45] library is used for this step. As discussed earlier, a total of 12 features for each site are considered, i.e., 24 features for binary materials and 36 features for those with three \(N_{d}\). Such a considerably large number of features makes it hard to train a neural network in a reasonable time. Choosing to train on such a vast number of features also has the potential to confuse the final model if too many non-correlated features are provided. 
For this reason, a principal component analysis (PCA) [46; 47] is performed to obtain projections of the features that correlate better with the data and to reduce the number of features used to build our final model. The scikit-learn package provides different PCA methods, using different kernels like linear, polynomial, exponential, etc. Kernels help project the linearly inseparable data onto a higher-dimensional space where they might be linearly separable. The implementation of PCA with kernels is called kernel principal component analysis (KPCA) [48]. The various kernels are used, and the results are plotted up to three dimensions to check which method gives the better visual separation of the data; these plots are included in the supplemental material (SM). It is observed that using KPCA with a radial basis function kernel results in the best visual separation. The data is transformed into its principal components in this step, which are used for further studies. ### Dimension Pruning While PCA can construct the best dimensions for use, it provides no useful information for determining the number of dimensions to be chosen. For this purpose, a decision tree-based model [33] is trained on both the original data and the data transformed from PCA but with varying dimensions. The reason to choose a decision tree-based model is its comparatively faster training time while maintaining the ability to build complex models. AdaBoost [49; 50] is a good choice for our binary classification problem, with the advantage of having relatively few hyperparameters as well as a lower chance of overfitting. The PCA-transformed dataset of each system is used to train AdaBoost models, starting from a single dimension up to twenty dimensions. Using the same data to learn the parameters of a prediction model and to test the model is usually considered a mistake, because we cannot be confident about the predictions on yet unknown data. Hence, some of the available training data is held out of the training set to make a test set to be used for validation. A common practice is to do a \(k\)-fold cross-validation [51; 52], in which the data is equally split into \(k\) parts. A model is trained on \(k-1\) parts, and the remaining part is used to score the model. \(k\) different models with respective scores are obtained by changing the part of the data not used in training. Finally, the model with the highest score is chosen. Five-fold cross-validation is used at each step to help select better models. The accuracy of each model versus the number of dimensions being considered is plotted, as in Fig. 2 (a minimal sketch of this pruning loop is given below). The two hyperparameters, viz., the learning rate and the maximum number of trees, are tuned such that further increasing them does not change the nature of the plot mentioned above. This is observed at a low learning rate of 0.01 and a maximum number of estimators of 2500 and above. The same model is also trained on the raw data to compare accuracy and training times. Fig. 3 shows the respective plots, and a suitable number of dimensions (\(dim\)) is chosen to achieve the best trade-off of accuracy versus computational time, as given in Table 3.
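The pruning loop referenced above can be sketched with scikit-learn as follows. The data here is a random stand-in for the feature matrix, and the AdaBoost settings mirror the tuned values quoted in the text (2500 estimators make the full sweep slow).

```python
import numpy as np
from sklearn.preprocessing import PowerTransformer
from sklearn.decomposition import KernelPCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((500, 36))    # stand-in for the 36 elemental features (N_d = 3)
y = rng.integers(0, 2, 500)  # stand-in for the TI/TCI ("true"/"false") labels

X = PowerTransformer(method="yeo-johnson").fit_transform(X)          # zero mean, unit variance
X_kpca = KernelPCA(n_components=20, kernel="rbf").fit_transform(X)   # radial basis function kernel

for dim in range(1, 21):  # score AdaBoost on the first `dim` principal components
    clf = AdaBoostClassifier(n_estimators=2500, learning_rate=0.01)
    score = cross_val_score(clf, X_kpca[:, :dim], y, cv=5).mean()    # five-fold cross-validation
    print(dim, round(score, 3))
```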
It is observed that about 3–6 dimensions are sufficient to achieve almost the accuracy of the raw data in most cases. The time comparison shows a linear increase with \(dim\). Choosing \(dim=5\) improves the training time by about two-fold for systems with \(N_{d}=2\), i.e., 24 features, and nearly three-fold for systems with \(N_{d}=3\), i.e., 36 features. \begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline \multirow{2}{*}{**SG**} & \multirow{2}{*}{\(\mathbf{N_{s}}\)} & \multirow{2}{*}{\(\mathbf{N_{d}}\)} & \multicolumn{3}{l|}{**Wyckoff Positions**} \\ \cline{4-6} & & & **Site 1** & **Site 2** & **Site 3** \\ \hline 62 & 12 & 3 & c & c & c \\ \hline 139 & 5 & 3 & \begin{tabular}{l} a \\ b \\ \end{tabular} & \begin{tabular}{l} d \\ \end{tabular} & \begin{tabular}{l} e \\ e \\ \end{tabular} \\ \hline 189 & 9 & 3 & \begin{tabular}{l} a \\ b \\ \end{tabular} & \begin{tabular}{l} f \\ g \\ \end{tabular} & \begin{tabular}{l} g \\ f \\ \end{tabular} \\ \hline 221 & 5 & 3 & \begin{tabular}{l} a \\ b \\ \end{tabular} & \begin{tabular}{l} b \\ a \\ \end{tabular} & \begin{tabular}{l} c \\ d \\ \end{tabular} \\ \hline \end{tabular} \end{table} Table 2: Wyckoff indices considered as direct substitutions in the lattice for materials with 3 distinct sites per unit cell. ### Neural Networks for Prediction Neural networks are chosen as the final model we train to output predictions of topological materials due to their generally high complexity and accuracy. Their primary disadvantage is the model training cost, but once trained, the predictions can be churned out quickly and en masse if needed. A Multi-Layer Perceptron (MLP) [53; 54] classifier is used, as implemented in the scikit-learn library. The default setting of using the Adam solver [55] is not changed. In the case of the MLP, unlike AdaBoost, a lot of hyperparameters are to be considered, i.e., the learning rate (\(lr\)), the strength of the L2 regularization (\(\alpha\)), the number of hidden layers (\(hl_{d}\)), the sizes of the hidden layers (\(hl_{s}\)), and the maximum number of iterations (\(n_{i}\)) to run for. These have been extensively tested across multiple values using the Optuna framework [56], and the best ones were chosen for each dataset individually. A five-fold cross-validation method is also implemented within the scoring model to help select a better predictive model. It is in this step that our earlier method of reducing the overall time taken to train a neural network becomes significant: the computational cost of extensively testing models across a wide range of hyperparameters along with cross-validation would otherwise be very high. The achieved validation scores and corresponding hyperparameters are listed in Table 3. Figure 2: Comparison of accuracies of training models using raw data and an increasing number of dimensions from PCA. Red dotted lines represent the raw data while the cyan lines represent the respective dimensions, as given on the x-axis. The accuracies are given on the y-axis. Figure 3: Comparison of training times of AdaBoost for various numbers of dimensions. Only SG = 221 is shown, but other datasets show the same patterns. To validate our predictions, we have adopted the same method as the Materiae website, which uses the SymTopo package. It is to be kept in mind that their theory only works for nonmagnetic materials. Additionally, it performs a hard check on the number of electrons in the DFT calculation, only accepting an even number of electrons.
This property is masked by our methodology due to the initial transformation and PCA of our original features. Materials that are not included in Materiae are picked up from Materials Project and checked for the former two conditions. Features are generated for the ones that pass. Previously trained power transformer, PCA transformation and MLP prediction are applied in that order. Positive outcomes from our predictions are then tested using SymTopo for further validation and reported. The steps followed for predictions are represented as a flowchart in Fig. 4. SymTopo is used to prepare input files for Vienna ab initio simulation package (VASP) [57; 58; 59], which performs the actual electronic structure calculations according to DFT with projector augmented wave (PAW) potentials [60]. Perdew-Burke-Ernzerhof (PBE) [61] exchange correlation function is used. The results from SymTopo are discussed in the section below. ## III Summary of predictions The count of different kind of materials as output from SymTopo are listed in Table 4. The number in "Total" column only includes candidates with an even number of valence electrons, nonmagnetic nature, i.e., very low magnetic moment and predicted as "true" for non-trivial topological behaviour by our models. The correct predictions made by our model are those which SymTopo also computes as being of the TI or TCI category. The "Semimetal" column includes both high-symmetry point semimetals and high-symmetry line semimetals. Both systems are topologically non-trivial with topological nodes at high-symmetry points or lines respectively [7 ]. However, they are of less interest to us as they cannot be used as TI or TCI due to their degeneracy at high-symmetry points or lines, and were consequently marked as false in our training data. However, due to their non-trivial topological nature, a lot of predicted TIs from our model are found to be these semi-metals instead but they cannot be strictly classified as false positive results. The "Trivial" column consists of topologically trivial insulators and are strictly false positives in the case of our predictions. Table 4 illustrates that certain material systems are ripe for exploration of topological properties, particularly the first three that have been chosen. The predictions of systems with SG = 12;\(N_{s}=12\), SG = 139;\(N_{s}=5\) and SG = 189;\(N_{s}=9\) have 60%, 50% and 63% correctly predicted TI and TCI, respectively, along with a very low percentage of trivial insulators. In those cases it is clear that our method has yielded a high percentage of topologically active materials with no electronic structure calculations of our own, proving the potential of similar data-based studies for exploration of topological properties. The results are not that spectacular for the other three systems, which means our models are not able to capture the essence from their data, or maybe those systems do not have too much left to explore for non-trivial topology. As more data becomes available, the method can be extended to more systems. Material-wise detailed catalogue of classification has been made available in the SM. Figure 4: The flowchart of the methodology followed for predictions from our models. 
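As a concrete illustration of the neural-network tuning step described earlier, a minimal sketch with Optuna and scikit-learn's MLPClassifier follows, just before the tuned values in Table 3. The search ranges are illustrative choices bracketing the tabulated values, and the data is a random stand-in for the KPCA-transformed features.

```python
import numpy as np
import optuna
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((500, 5))     # stand-in for the `dim` principal components
y = rng.integers(0, 2, 500)  # stand-in for the TI/TCI labels

def objective(trial):
    clf = MLPClassifier(
        hidden_layer_sizes=(trial.suggest_int("hl_s", 50, 150),) * trial.suggest_int("hl_d", 3, 10),
        learning_rate_init=trial.suggest_float("lr", 1e-9, 1e-2, log=True),
        alpha=trial.suggest_float("alpha", 1e-5, 1e-3, log=True),
        max_iter=trial.suggest_int("n_i", 1000, 4000),
        solver="adam",  # the default solver, kept unchanged
    )
    return cross_val_score(clf, X, y, cv=5).mean()  # five-fold cross-validation score

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=20)
print(study.best_params)
```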
\begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline **SG/\(N_{S}\)** & _dim_ & \(n_{i}\) & _lr_ & \(\alpha\) & _hl\({}_{s}\)_ & _hl\({}_{d}\)_ \\ \hline 62/12 & 6 & 2000 & 1.5\(\times 10^{-4}\) & 1.347\(\times 10^{-4}\) & 135 & 3 \\ \hline 139/5 & 4 & 1108 & 1.86\(\times 10^{-9}\) & 2.3851\(\times 10^{-5}\) & 93 & 9 \\ \hline 189/9 & 4 & 2234 & 3.19\(\times 10^{-3}\) & 2.486\(\times 10^{-5}\) & 146 & 9 \\ \hline 221/2 & 6 & 3684 & 1.48\(\times 10^{-4}\) & 1.197\(\times 10^{-5}\) & 73 & 7 \\ \hline 221/5 & 3 & 2800 & 6\(\times 10^{-3}\) & 3.5\(\times 10^{-5}\) & 75 & 10 \\ \hline 225/2 & 3 & 3999 & 6.99\(\times 10^{-3}\) & 3.447\(\times 10^{-5}\) & 79 & 10 \\ \hline \end{tabular} \end{table} Table 3: Summary of the hyperparameters used in training our MLPs. It lists the number of dimensions kept from the PCA and all the parameters passed as arguments while training our final model. ## IV Conclusion To conclude, we have constructed different MLP models with various accuracies calculated over a five-fold cross-validation method. In the process, we have demonstrated the viability of using dimensionality reduction to reduce the number of features used to train the neural networks, significantly reducing the training times. We have also proposed a method using decision tree-based models to help select the number of dimensions that should be kept from the PCA analysis. All of this now allows us to make predictions about the topological properties of new proposed materials, related to our chosen systems by site substitutions, much faster and to scan a much larger materials space with very little computation. The predictions have been verified using the algorithm that was used to generate the training data in the first place. We expect the method to extend to more systems as more data becomes available. Furthermore, the use of dimensionality reduction to reduce training times appears to be an independent property that might be extensible to more ML applications beyond the particular application presented in this paper. ## V Acknowledgement R.M. acknowledges CSIR, India, for the junior research fellowship [Grant No. 09/0086(12865)/2021-EMR-I]. A.P. acknowledges IIT Delhi for the senior research fellowship. S.S. acknowledges CSIR, India, for the senior research fellowship [grant no. 09/086(1432)/2019-EMR-I]. S.B. acknowledges financial support from SERB under a core research grant (Grant No. CRG/2019/000647) to set up his high-performance computing (HPC) facility "Veena" at IIT Delhi for computational resources.
2303.16636
Operational Neural Networks for Parameter-Efficient Hyperspectral Single-Image Super-Resolution
Hyperspectral Imaging is a crucial tool in remote sensing which captures far more spectral information than standard color images. However, the increase in spectral information comes at the cost of spatial resolution. Super-resolution is a popular technique where the goal is to generate a high-resolution version of a given low-resolution input. The majority of modern super-resolution approaches use convolutional neural networks. However, convolution itself is a linear operation and the networks rely on the non-linear activation functions after each layer to provide the necessary non-linearity to learn the complex underlying function. This means that convolutional neural networks tend to be very deep to achieve the desired results. Recently, self-organized operational neural networks have been proposed that aim to overcome this limitation by replacing the convolutional filters with learnable non-linear functions through the use of MacLaurin series expansions. This work focuses on extending the convolutional filters of a popular super-resolution model to more powerful operational filters to enhance the model performance on hyperspectral images. We also investigate the effects that residual connections and different normalization types have on this type of enhanced network. Despite having fewer parameters than their convolutional network equivalents, our results show that operational neural networks achieve superior super-resolution performance on small hyperspectral image datasets. Our code is made available on Github: https://github.com/aulrichsen/SRONN.
Alexander Ulrichsen, Paul Murray, Stephen Marshall, Moncef Gabbouj, Serkan Kiranyaz, Mehmet Yamac, Nour Aburaed
2023-03-29T12:48:51Z
http://arxiv.org/abs/2303.16636v2
# Operational Neural Networks for Efficient Hyperspectral Single-Image Super-Resolution ###### Abstract Hyperspectral Imaging is a crucial tool in remote sensing which captures far more spectral information than standard color images. However, the increase in spectral information comes at the cost of spatial resolution. Super-resolution is a popular technique where the goal is to generate a high-resolution version of a given low-resolution input. The majority of modern super-resolution approaches use convolutional neural networks. However, convolution itself is a linear operation and the networks rely on the non-linear activation functions after each layer to provide the necessary non-linearity to learn the complex underlying function. This means that convolutional neural networks tend to be very deep to achieve the desired results. Recently, self-organized operational neural networks have been proposed that aim to overcome this limitation by replacing the convolutional filters with learnable non-linear functions through the use of MacLaurin series expansions. This work focuses on extending the convolutional filters of a popular super-resolution model to more powerful operational filters to enhance the model performance on hyperspectral images. We also investigate the effects that residual connections and different normalization types have on this type of enhanced network. Despite having fewer parameters than their convolutional network equivalents, our results show that operational neural networks achieve superior super-resolution performance on small hyperspectral image datasets. Hyperspectral Imaging, Super-Resolution, Operational Neural Networks ## I Introduction Hyperspectral imaging is a key component in remote sensing applications as the additional spectral information offers insights into the materials within the image that standard color images cannot provide. However, such increased spectral resolution comes at the cost of spatial resolution due to the sensors' limitations. Automated image processing tasks such as image segmentation, object detection and classification can improve the efficiency of remote sensing systems. However, the reduction in spatial resolution can be detrimental to their performance. To recover the lost spatial resolution and help improve the performance of image processing tasks on the resulting hyperspectral image (HSI), one option is to apply single image super-resolution (SISR) to enhance the spatial resolution of the given low-resolution hyperspectral image. Most modern super-resolution (SR) approaches use convolutional neural networks (CNNs) to produce an image-to-image mapping operator which converts the input low-resolution image to a high-resolution image [1, 2, 3, 4]. These operators are of a complex non-linear nature and part of the reason that CNNs have had so much success in this field is due to their capacity to learn complex non-linear operators. However, the sole non-linear elements of a CNN come from the activation functions after each layer, meaning that CNNs often require many layers to have the necessary non-linear capacity and diversity to learn the desired operator. Recently, operational neural networks (ONNs) [5, 6] and their new variants, self-organised operational neural networks (Self-ONNs) [7], have been proposed to overcome this limitation by using the generative neuron model that can customize the optimal non-linear function during training for each kernel element. 
To accomplish this, each kernel element is extended with MacLaurin series expansions and the terms of the series are made learnable. This means that each kernel element can learn to approximate any non-linear function and thus similar theoretical non-linear capacity of a deep CNN can be achieved in a much shallower Self-ONN which is more computationally efficient. In this paper, we take the popular SR network, SRCNN [3], and extend it for use on hyperspectral images. We also make a Self-ONN equivalent model by replacing the convolutional layers with operational layers. Furthermore, we make a Self-ONN version with a reduced number of filters to demonstrate the non-linear capacity of operational layers over convolutional layers. We train our models on the publicly available Pavia University, Cuprite, Salinas, and Urban datasets [8, 9] and show that Self-ONNs can provide an HSI SR performance improvement of over 0.5 dB PSNR even when it has fewer parameters than a CNN. Furthermore, this study investigates the effects residual connections and various normalization types have on Self-ONN performance, as, to the best of our knowledge, this has not been previously investigated. The contributions of this work can be summarised as follows: * We propose novel Self-ONN-based efficient neural networks that have structures similar to that of the well known lightweight CNN SR network, SRCNN [3]. * We present our novel findings on the effect residual connections and various normalization types have on hyperspectral image super-resolution performance in our Self-ONN models. ## II Related Work ### _Hyperspectral Imaging_ Hyperspectral imaging is a valuable tool in remote sensing applications such as material classification, mineral exploration, environmental monitoring, and more [10]. The effectiveness of these tasks is dependent on the resolution of the captured image. However, due to sensor limitations, it is difficult to obtain a high-quality hyperspectral image (HSI) with both high spectral and spatial resolution [11] and thus the increased spectral resolution generally comes at the cost of decreased spatial resolution [12]. It is therefore desirable to be able to recover the lost spatial resolution to improve the performance of post-processing tasks. ### _Super-Resolution_ Most modern approaches to super-resolution use convolutional neural networks (CNNs) in either a supervised or unsupervised manner [13, 14, 15, 16]. Supervised training involves training a model on a dataset consisting of low-resolution and high-resolution image pairs. One of the first papers to adopt this approach was [3] where they proposed their CNN model named SRCNN for the task of single image super-resolution. SRCNN is a fairly shallow CNN by modern standards, consisting of only 3 layers, so the authors of [17] proposed a much deeper CNN to perform supervised SISR. The deeper network provides more learning capacity but is also more difficult to train due to the vanishing gradient problem. To overcome this issue, they proposed a residual connection which sums the input of the model directly to the output so that instead of learning the direct input-to-output image mapping, the model learns the residual between the input and output which improved results and greatly decreased training times. Since then, many other deep CNN models have been proposed for supervised single image super-resolution [18, 19, 20, 13, 21]. However, the main challenge of this approach is acquiring the dataset. 
Ideally, perfectly aligned images would be captured with a low-resolution and a high-resolution sensor, but this is impractical to perform in many situations. What is more commonly done is that a dataset of high-resolution images is acquired, and the low-resolution image pairs are then synthetically generated by blurring and downsampling the high-resolution images and then adding noise. To overcome this limitation, unsupervised methods using Generative Adversarial Networks (GANs) [22] have been proposed which utilize datasets of unpaired real high-resolution and low-resolution images through the use of generator and discriminator models. The generator produces high-resolution versions of the low-resolution images, and the discriminator aims to distinguish between the true high-resolution images and the generated high-resolution images. Over time, the generator learns to produce realistic high-resolution outputs of the input low-resolution images which match the distribution of the high-resolution image dataset. Thus, the model is more likely to learn the true low-resolution to high-resolution image mapping function. Many researchers have achieved impressive results using this approach [1, 23, 24, 15, 25, 26]. However, the unsupervised nature of this approach means that it is inherently more difficult to train, as the generator learns from feedback provided by the discriminator and the discriminator has no prior knowledge of the objective. In addition, it is also challenging to measure the performance of a GAN objectively, as typical image quality metrics such as peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) [27] require a target image to be evaluated. To overcome these problems, it typically requires a lot of training data to produce a realistic GAN [28], which is not always available, particularly in the case of hyperspectral imagery. Attempts have been made to improve upon human-perceived super-resolution quality. [29] introduces a perceptual loss function generated through a fixed loss network to create visually pleasing results, but at the cost of PSNR and SSIM, indicating that their per-pixel accuracy is lower. [15] introduces a perceptual loss function to train a GAN, which again focuses on learning mappings that are perceptually pleasing to humans, rather than pixel-to-pixel accuracy. These approaches improve how pleasing super-resolution outputs may be to a human observer, but do not necessarily provide any performance improvement for post-processing tasks to be done on the resulting images. It has been shown that super-resolution performance can be improved by utilising multiple images captured in quick succession [30]. However, this approach is impractical when it comes to HSIs due to the slow acquisition times. Data fusion techniques can be applied to HSIs [31, 32]. However, these approaches rely on the availability of a high-resolution multispectral image of the same scene. Transformers [33] are gaining popularity in the vision community and some researchers have utilized them for super-resolution [34]. However, this approach suffers the same problem as the unsupervised GAN methods in that they require very large amounts of data to be trained. Furthermore, the use of these techniques is also known to be computationally expensive during the inference process. The limited availability of training data makes the use of transformers, modern deep GANs, and CNNs difficult to apply to HSI SR problems.
Furthermore, we generally aim to improve the quality of the hyperspectral data before inference tasks, which means that an efficient SR network that can operate in real-time is preferred. The large amount of data to be processed in hyperspectral imaging presents a challenge to using deep networks, so we propose a highly efficient SR neural network structure based on a new paradigm, self-operational neural filtering. ### _Operational Neural Networks_ Recent advances in deep learning have resulted in CNNs dominating many computer vision fields, including super-resolution. Part of the reason for their success is their ability to learn complex non-linear operators. However, convolution itself is a linear operation and the non-linear components of the networks are solely provided by the activation functions used after each convolutional layer in the network. This means that CNNs often have to be very deep in order to have the necessary non-linear capacity and diversity to learn the complex function of the learning problem. Recently, Operational Neural Networks (ONNs) [5, 6] were proposed to address this issue by incorporating non-linear nodal and pooling functions that replace the sole convolution operation with any non-linear operator, which adds significantly more non-linear components to the network than a traditional CNN. However, these additional non-linear operations are hard coded and thus cannot be changed during training. This means that the functions need to be searched for, which is computationally expensive, and the search space is limited to the function set, which may not contain the optimal function(s). The authors of [5] then addressed these limitations by proposing self-organized operational neural networks (Self-ONNs) [7] which aim to make the linear filters of a standard CNN non-linear through the use of MacLaurin series expansions, rather than applying hard-coded functions. Such non-linear filters for each kernel element are learnable during training, and thus, eliminate the need for an exhaustive search to find the optimal functions. Furthermore, any function can theoretically be approximated using MacLaurin series expansions, which means that a Self-ONN is not limited to a specified function set, allowing for an enhanced non-linear search space. These improvements mean that Self-ONNs are far more computationally efficient than their standard ONN counterparts, with greater theoretical non-linear capacity than both their ONN and CNN counterparts. This additional complexity comes at the cost of each filter requiring more parameters. However, the network size of a Self-ONN can be much smaller than a CNN to have the same or increased theoretical non-linear capacity, allowing for the overall model to have fewer parameters than a CNN despite each individual filter containing more parameters. In many applications [35, 36, 37, 38, 39, 40] Self-ONNs outperformed the deeper and more complex CNNs whilst achieving an elegant computational efficiency. ## III Methodology We take the super-resolution model SRCNN [3] and modify it for use on hyperspectral images by extending the number of input and output channels of the model from 3 (for RGB images) to the required number for the relevant HSI depending on the number of wavelength bands it contains. SRCNN, shown in Figure 1, is a relatively compact model consisting of 3 convolutional layers followed by ReLU activation functions, except for the output layer, where no activation function is used. 
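As a point of reference for the models that follow, a minimal PyTorch sketch of SRCNN extended to a C-band input is given below. The filter counts (128 and 64) follow the architecture described later in Figure 3; the 9–5–5 kernel sizes are an assumption borrowed from the classic SRCNN configuration, since f1–f3 are only given symbolically in Figure 1, and the 103-band input is merely an example (the Pavia University scene).

```python
import torch
import torch.nn as nn

class SRCNN(nn.Module):
    """Three-layer SRCNN adapted to a C-band hyperspectral input."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 128, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(128, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(64, channels, kernel_size=5, padding=2),  # no activation on the output layer
        )

    def forward(self, x):  # x: (batch, C, H, W), already interpolated to the target size
        return self.body(x)

x = torch.rand(1, 103, 64, 64)  # one 64x64 tile with 103 bands
print(SRCNN(103)(x).shape)      # -> torch.Size([1, 103, 64, 64])
```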
We propose a novel Self-ONN model, SRONN, that shares the same configuration as SRCNN, as shown in Figure 2. A key aspect of Self-ONNs is that the data passed between layers must be bounded between -1 and 1 in order to prevent exponentially large values due to the non-linear nature of the model. We therefore use _tanh_ activation functions after the first and second operational layers in our SRONN model instead of the ReLU activation functions of SRCNN. Self-ONNs gain their additional non-linear complexity through the use of MacLaurin series expansions: \[f(x)=\sum_{n=0}^{\infty}\frac{f^{(n)}(0)}{n!}x^{n} \tag{1}\] In practice, the 0th term in the expansion plays the role of the bias and can therefore be disregarded from the filter approximation. The order of the polynomial must also be finite in practice, so the number of terms is supplied to the network by a parameter \(Q\). This makes the expansion for an ONN as follows: \[f(x)=\sum_{n=1}^{Q}\frac{f^{(n)}(0)}{n!}x^{n} \tag{2}\] Note that when the \(Q\) value is 1, the layer is the exact equivalent of a standard convolutional layer. Higher \(Q\) values yield more accurate function approximations, but at the cost of additional parameters, as the \(Q\) value directly multiplies the parameter count of a standard convolutional filter. The number of parameters in the convolutional layers of a CNN with \(L\) layers can be calculated using the following equation: \[\text{\# parameters}=\sum_{l=0}^{L-1}(n_{l}\times m_{l}\times f_{l}+1)\times f_{l+1} \tag{3}\] where \(n_{l}\) and \(m_{l}\) are the number of rows and columns of the convolutional filters at layer \(l\), \(f_{l}\) is the number of filters, and the constant 1 accounts for the bias of each filter. Note that on the first layer, i.e. \(l=0\), \(f_{0}\) is given by the number of channels of the input image. Fig. 1: SRCNN model representation consisting of 3 convolutional layers with filter sizes f1 x f1, f2 x f2, and f3 x f3. Fig. 2: SRONN model representation consisting of 3 operational layers with filter sizes (f1 x f1 x Q), (f2 x f2 x Q), and (f3 x f3 x Q). To compute the number of parameters of a Self-ONN, we simply multiply the weight term by \(Q\): \[\text{\# parameters}=\sum_{l=0}^{L-1}(n_{l}\times m_{l}\times f_{l}\times Q+1)\times f_{l+1} \tag{4}\] We select a \(Q\) value of 3 for all our experiments, as we found this to be a good balance between sufficient approximation accuracy and the number of parameters. However, this means that each of our SRONN models will have approximately three times as many parameters as the SRCNN models. For a fair comparison, we also propose another Self-ONN model, again with the same number of layers as SRCNN, but with four times fewer filters per layer. As a result, this model has between 26.5% and 28.2% fewer parameters than SRCNN, depending on the number of input and output channels required by the target dataset. We name this model small SRONN, or sSRONN. ### _Normalization and Residual Connections_ Due to their recent proposal, techniques commonly applied to CNNs to improve results have been studied little on ONNs. We study the effects of incorporating various normalization layer types into our ONN models after each Tanh activation function, including L1, L2, instance [41], and batch [42] normalization.
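A common way to realize such an operational layer is to raise the input to the powers 1..Q along the channel dimension and apply a single convolution, absorbing the 1/n! factors of Eq. (2) into the learnable weights. The following PyTorch sketch illustrates the idea; it is an illustration under these assumptions, not the authors' exact implementation, and the layer sizes are examples.

```python
import torch
import torch.nn as nn

class OperationalConv2d(nn.Module):
    """Self-ONN-style layer sketch: concatenate x, x^2, ..., x^Q and convolve,
    yielding a learnable MacLaurin-series filter per kernel element (Eq. 2)."""
    def __init__(self, in_ch, out_ch, kernel_size, q=3, padding=0):
        super().__init__()
        self.q = q
        # in_ch * q input channels reproduces the Q-fold parameter count of Eq. (4)
        self.conv = nn.Conv2d(in_ch * q, out_ch, kernel_size, padding=padding)

    def forward(self, x):  # x is assumed bounded in [-1, 1], e.g. after tanh
        powers = torch.cat([x ** n for n in range(1, self.q + 1)], dim=1)
        return self.conv(powers)

layer = OperationalConv2d(103, 128, kernel_size=9, q=3, padding=4)
print(layer(torch.rand(1, 103, 64, 64)).shape)  # -> torch.Size([1, 128, 64, 64])
```

With q=1 the layer reduces to a plain convolution, matching the note above.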
We also study the effects of adding a residual connection to connect the output of the model directly to the input of the model so that the model learns the residual rather than the direct mapping as performed in [20]. To the best of our knowledge, this is the first work to study the effects of these techniques on Self-ONNs. The proposed Self-ONN model is illustrated in Figure 3. ## IV Training Details Each model was trained for 50000 epochs to guarantee network convergence, and the weights from the epoch which produced the highest SSIM validation score were used for testing. We use the Adam optimizer [43] with default parameters except for the learning rate. Each model was initially trained with a learning rate of \(10^{-4}\) which was decreased by a factor of 10 at epochs 5000 and 40000. Two following runs were then completed where the starting learning rate and the epoch milestones - where the learning rate was decreased by a factor of 10 - were manually adjusted in an attempt to improve the performance. We use mean squared error as our loss function. We initialize our models' weights with a normal distribution with a gain of 0.02. All training LR tiles are fed to the model in a single batch on each epoch. ### _Datasets_ We evaluate our models on four different HSI datasets: Pavia University; Salinas; Cuprite; Urban. Details for each dataset [9, 8] can be seen in Table I. We use the standard approach to generating a low-resolution image pair from a given high-resolution target image by using Eq. (5): \[I_{LR}=(I_{HR}*k)\downarrow_{s}+n \tag{5}\] where \(k\in R^{2}\) is a 2D degradation kernel, * is a spatial convolution, \(\downarrow_{s}\) is a decimation operation with a stride s, and n is a noise term. We use Gaussian blur with a sigma value of 0.8943 for k, 2x bicubic interpolation for \(\downarrow_{s}\). We do not add any noise so the parameter n is ignored. Each generated LR tile was then bilinearly interpolated back up to the size of the original tile so the model could perform super-resolution by recovering the information at the desired output resolution. The model would then be trained with the LR tile as input and the original HR tile as the target. Each dataset was preprocessed with min-max normalization and then divided into 64x64 pixel tiles, maintaining the entire wavelength spectrum. We utilize 70% of the tiles for training, 15% for validation and reserve 15% for testing. ## V Results We train each model combination 3 times, i.e. the combination of the base model, the normalization type used, and whether or not there was a residual connection. The first iteration uses the default parameters of \(10^{-4}\) learning rate, and the learning rate is decreased by a factor of 10 on epochs 5000 and 40000. For the remaining two iterations, the parameters were manually adjusted in an attempt to improve the performance from the default first iteration. For all experiments, the entire training dataset was forward propagated through the model at once so there was no need to adjust the batch size. We only apply normalization to the Self-ONN models, since normalization has been widely studied on CNNs. Therefore, Fig. 3: General model architecture. C represents the number of channels in the hyperspectral image. Values in brackets represent the number of filters in the compact sSRONN model. SRCNN and SRONN variants have Cx128, 128x64, and 64xC filters in each respective layer. sSRONN variant has Cx32, 32x16, and 16xC filters in each respective layer. 
The normalization type depends on the experiment; in some experiments there is no normalization, in which case the normalization layers are skipped. The residual connection is likewise removed in experiments where it is not applied. We first compare the SRCNN models against the SRONN and sSRONN models without normalization for a fair comparison. The results without a residual connection can be seen in Table II, with example outputs from the Pavia University dataset in Figure 4, and the results with a residual connection can be seen in Table III, with example outputs from the Pavia University dataset in Figure 7. For the three training iterations of each model on each dataset, we report only the results from the best iteration. ### _Normalization_ We present the results from adding various normalization types to the Self-ONN models in separate tables for each dataset. Results for the Cuprite dataset are shown in Table IV, Pavia University in Table V, Salinas in Table VI and Urban in Table VII. ## VI Discussion The results from Table II show that the SRONN models tend to offer a slight improvement over the SRCNN model, but this is not always the case. For example, on the Salinas dataset, the SRCNN model outperformed the SRONN models across all metrics. We hypothesise that this is because the Self-ONN models have a more complex search space to navigate and optimise, and therefore have more difficulty converging than the simpler SRCNN model. Fig. 4: Output of Models with no Residual Connection or Normalization on slice 80 of the Pavia University dataset. The original HSI is shown on the left. LR, predictions, original and spatial absolute difference between prediction and HR shown on the right. ### _Effect of Residual Connection_ The results from Table III show that adding a residual connection provides a significant improvement to both Self-ONN models, resulting in both the SRONN and sSRONN models outperforming the SRCNN models across all metrics on all datasets. The addition of a residual connection improved all metrics across all datasets for both sSRONN and SRONN, except for PSNR on the Urban dataset for the SRONN model, where a slight decrease was observed. The residual connection has a lesser impact on the results of SRCNN, only offering improvement in some cases, which is likely due to the model not being complex enough to see any consistent performance improvement from a residual connection. The improvement seen in the Self-ONN models when a residual connection is added supports our convergence hypothesis. It could also indicate that Self-ONNs suffer more from vanishing gradients than CNNs. Interestingly, the sSRONN model generally saw greater performance improvements from the addition of a residual connection than the SRONN model, which is counterintuitive, as the sSRONN optimization search space is significantly smaller than that of the SRONN model. One explanation could be that the sSRONN model is slightly under-parameterized for direct image-to-image mapping yet has sufficient parameters to learn the residual, resulting in a bigger performance improvement when the residual connection is added to the model. Fig. 5: Output of Models with a Residual Connection on slice 80 of the Pavia University dataset. The original HSI is shown on the left. LR, predictions, original and spatial absolute difference between prediction and HR shown on the right.
The larger SRONN model, which may be well-parameterised for image-to-image mapping but slightly over-parameterised for residual learning, does not see as much of a performance improvement as the smaller sSRONN model. Since both SRONN and sSRONN outperform SRCNN, this demonstrates the power of the non-linear filters over the standard linear convolutional filters. The non-linear filters provide the operational layer with an enhanced ability to produce sharper edges, and thus sharper contrast between pixels, resulting in a more detailed output image, which is evident in the resulting images shown in Figure 7. ### _Effects of Normalization_ Our results in Tables IV, V, VI, and VII show that the effects of incorporating normalization layers into our SRONN and sSRONN models vary widely and are highly dataset dependent. It appears that normalization has a greater impact on the datasets with larger spatial dimensions. We found L2 normalization to be the most effective, providing a slight performance boost to the SRONN model across all metrics on the Cuprite, Pavia University and Urban datasets while boosting the SAM on the Salinas dataset. For the sSRONN model, the performance improvement from adding L2 normalization is less significant, providing only a performance boost to SSIM and SAM on the Cuprite dataset, PSNR on the Pavia University dataset and SSIM on the Urban dataset. No performance improvement was provided by using L2 normalization over no normalization on the Salinas dataset. Our results show that normalization is generally more effective when utilized in conjunction with a residual connection. This is likely due to the fact that the normalization layers normalize the data around a zero mean, which makes it more difficult for the models without a residual connection to map the zero-mean feature maps to the true mean of the output. However, when a residual connection is introduced, the model learns the residual between the input and the target, which should have a mean near zero. Therefore, normalization may offer a greater benefit in this scenario, as it assists the model in transforming the data to the target mean rather than moving it away from the target mean. Interestingly, we found instance normalization to be especially detrimental to all results. This could be because instance normalization normalizes each channel individually, which may have an adverse effect on the channel dependencies. ## VII Conclusion We show that Self-ONNs outperform equivalent well-known CNNs in the task of HSI SR, even when the Self-ONN models have a lower number of parameters than the CNNs. Fig. 6: True Super-Resolution Output of Models with a Residual Connection on slice 80 of the Pavia University dataset. The original HSI is shown on the left. Test tiles bilinearly interpolated up to 2x their original size, and super-resolution results on interpolated tiles are shown on the right. The Self-ONN results produced sharper images and contained more detail, which is likely a direct result of the enhanced non-linear filters. We found that adding a residual connection to our SRONN and sSRONN models provided a significant performance improvement and greatly accelerated convergence. We hypothesize that Self-ONNs suffer more from the vanishing gradient problem than CNNs due to their more complex search spaces, and thus the residual connection helps mitigate this issue, even in relatively shallow models.
We examined the effects of adding a residual connection and various normalization layers to our ONN models. Our results show that L2 normalization layers in ONNs can offer a moderate performance improvement when used in conjunction with a residual connection, but the benefit of normalization appears to be highly dependent on the dataset. We show that the superior non-linear capabilities of ONNs compared to CNNs allow for sharper and more detailed HSI SR results. This indicates that ONNs may be a better solution than the standard CNN in such image-to-image mapping tasks. ## VIII Acknowledgments This work was supported by the Engineering and Physical Sciences Research Council [grant number EP/T517938/1] and Peacock Technology Limited.
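As a concrete summary of the two architectural additions studied in this paper, the sketch below wraps an arbitrary super-resolution body in a residual connection with an optional normalization step. Applying the normalization once at the output (rather than after each tanh) and reading "L2 normalization" as per-channel unit-norm rescaling are our own simplifying assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualSR(nn.Module):
    """Wraps any image-to-image body (e.g. SRCNN, SRONN or sSRONN) so
    that it learns the residual between the interpolated LR input and
    the HR target instead of the direct mapping."""
    def __init__(self, body, l2_normalize=False):
        super().__init__()
        self.body = body
        self.l2_normalize = l2_normalize

    def forward(self, x):
        h = self.body(x)
        if self.l2_normalize:
            # rescale every channel of every sample to unit L2 norm
            h = F.normalize(h.flatten(2), p=2.0, dim=2).view_as(h)
        return x + h  # residual connection
```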
2303.06425
Improving the Robustness of Deep Convolutional Neural Networks Through Feature Learning
Deep convolutional neural network (DCNN for short) models are vulnerable to examples with small perturbations. Adversarial training (AT for short) is a widely used approach to enhance the robustness of DCNN models by data augmentation. In AT, the DCNN models are trained with clean examples and adversarial examples (AE for short) which are generated using a specific attack method, aiming to gain the ability to defend themselves when facing unseen AEs. However, in practice, the trained DCNN models are often fooled by AEs generated by novel attack methods. This naturally raises a question: can a DCNN model learn certain features which are insensitive to small perturbations, and further defend itself no matter what attack methods are presented? To answer this question, this paper makes a beginning effort by proposing a shallow binary feature module (SBFM for short), which can be integrated into any popular backbone. The SBFM includes two types of layers, i.e., Sobel layer and threshold layer. In the Sobel layer, there are four parallel feature maps which represent horizontal, vertical, positive-diagonal, and negative-diagonal edge features, respectively. And in the threshold layer, the edge features learnt by the Sobel layer are turned into binary features, which are then fed into the fully connected layers for classification together with the features learnt by the backbone. We integrate SBFM into VGG16 and ResNet34, respectively, and conduct experiments on multiple datasets. Experimental results demonstrate that, under FGSM attack with $\epsilon=8/255$, the SBFM integrated models achieve on average 35\% higher accuracy than the original ones, and on the CIFAR-10 and TinyImageNet datasets, the SBFM integrated models achieve on average 75\% classification accuracy. The work in this paper shows it is promising to enhance the robustness of DCNN models through feature learning.
Jin Ding, Jie-Chao Zhao, Yong-Zhi Sun, Ping Tan, Ji-En Ma, You-Tong Fang
2023-03-11T15:22:29Z
http://arxiv.org/abs/2303.06425v1
# Improving the Robustness of Deep Convolutional Neural Networks Through Feature Learning ###### Abstract Deep convolutional neural network (DCNN for short) models are vulnerable to examples with small perturbations. Adversarial training (AT for short) is a widely used approach to enhance the robustness of DCNN models by data augmentation. In AT, the DCNN models are trained with clean examples and adversarial examples (AE for short) which are generated using a specific attack method, aiming to gain ability to defend themselves when facing the unseen AEs. However, in practice, the trained DCNN models are often fooled by the AEs generated by the novel attack methods. This naturally raises a question: can a DCNN model learn certain features which are insensitive to small perturbations, and further defend itself no matter what attack methods are presented. To answer this question, this paper makes a beginning effort by proposing a shallow binary feature module (SBFM for short), which can be integrated into any popular backbone. The SBFM includes two types of layers, i.e., Sobel layer and threshold layer. In Sobel layer, there are four parallel feature maps which represent horizontal, vertical, and diagonal edge features, respectively. And in threshold layer, it turns the edge features learnt by Sobel layer to the binary features, which then are feceded into the fully connected layers for classification with the features learnt by the backbone. We integrate SBFM into VGG16 and ResNet34, respectively, and conduct experiments on multiple datasets. Experimental results demonstrate, under FGSM attack with \(\epsilon=8/255\), the SBFM integrated models can achieve averagely 35% higher accuracy than the original ones, and in CIFAR-10 and TinyImageNet datasets, the SBFM integrated models can achieve averagely 75% classification accuracy. The work in this paper shows it is promising to enhance the robustness of DCNN models through feature learning. binary features, robustness of deep convolutional neural network, Sobel layer, threshold layer ## I Introduction It is well known that deep convolutional neural network (DCNN for short) models can be fooled by examples with small perturbations [1, 2], which results in serious consequences when they are applied into the safety-critical applications, e.g., autonomous driving, airport security, and industry automation. Therefore, it is of great significance to enhance the robustness of DCNN models. Adversarial training (AT for short) is a widely used approach to enhance the robustness of DCNN models by data augmentation [3, 4, 5, 6]. In each training step, one specific attack method is employed to generate adversarial examples (AE for short), which together with the clean examples are input into DCNN models, expecting to make the trained DCNN models defend themselves on unseen AEs. However, the trained models can still be fooled by AEs figured out by the novel attack methods. This naturally raises a question: can a DCNN model learn certain features which are insensitive to small perturbations, and further defend itself no matter what attack methods are presented. Fig. 1 shows a clean example and its adversarial counterpart. With the presence of small noise, the DCNN models amplify the noise through the deep structures, which is reduced into the final texture features, and probably output the wrong classification results. However, comparing Fig. 1(a) and Fig. 
1(b), it is clear that the shape of the cat is legible in both images, which suggests that shape-like features may be helpful in making a correct decision in the presence of small noise. In this regard, Li _et al._[7] proposed a part-based recognition model with human prior knowledge, which first segments the objects, scores the object parts, and outputs the classification class based on the scores. Sitawarin _et al._[8] also proposed a part-based classification model, which combines part segmentation with a classifier. Both works take parts of objects into consideration to enhance the adversarial robustness of DCNN models. The results are outstanding, but they require detailed segmentation annotations and cannot be integrated into popular recognition or detection architectures. Fig. 2 shows the thresholded edge images of Fig. 1. We can see both binary images are almost the same, which indicates binary features can be taken as shape-like features to enhance the robustness of DCNN models. In this paper, a shallow binary feature module (SBFM for short) is proposed to extract the binary features of the input images; it stacks two types of layers-Sobel layer and threshold layer. In the Sobel layer, there are four parallel feature maps which represent horizontal, vertical, positive-diagonal, and negative-diagonal edge features, respectively [9, 10]. And in the threshold layer, the learnt edge features from the Sobel layers are turned into binary features, which are then fed into the fully connected layers for classification together with the features learnt by the backbones. SBFM is lightweight and can be integrated into any popular backbone, e.g., VGG [11] and ResNet [12]. We integrate SBFM into VGG16 and ResNet34, respectively. The experimental results on the CIFAR-10, TinyImageNet, and Cats-and-Dogs datasets show the training accuracy, test accuracy, and training time of the SBFM integrated models are on a par with those of the original ones, and when attacked by FGSM [1] with \(\epsilon=8/255\), the SBFM integrated models achieve on average 35% higher classification accuracy than the original ones, and 75% classification accuracy on the CIFAR-10 and TinyImageNet datasets. The results demonstrate the binary features learnt by SBFM can enhance the robustness of DCNN models notably. Fig. 1: **Clean example and adversarial example.** The contributions of this paper can be summarized as follows: 1. We first raise the question "can a DCNN model learn certain features which are insensitive to small perturbations, and further defend itself no matter what attack methods are presented", and make a beginning effort on this. 2. SBFM is proposed to learn binary features through a Sobel layer and a threshold layer; it is lightweight and can be integrated into any popular backbone. 3. Experimental results on multiple datasets show that, when attacked by FGSM with \(\epsilon=8/255\), the SBFM integrated models achieve on average 35% higher accuracy compared to the original ones, and 75% classification accuracy on the CIFAR-10 and TinyImageNet datasets. ## II Related Work **Adversarial Training.** AT is a widely used approach to enhance the robustness of DCNN models by generating AEs during each training step. Tsipras _et al._[13] found there exists a tradeoff between robustness and standard accuracy of the models generated by AT, due to the robust classifiers learning a different feature representation compared to the clean classifiers.
Madry _et al._[14] proposed an approximated solution framework to the optimization problems of AT, and found PGD-based AT can produce models defending themselves against the first-order adversaries. Kannan _et al._[15] investigated the effectiveness of AT at the scale of ImageNet [16], and proposed a logit pairing AT training method to tackle the tradeoff between robust accuracy and clean accuracy. Wong _et al._[17] accelerated the training process using FGSM attack with random initialization instead of PGD attack [18], and reached significantly lower cost. Xu _et al._[19] proposed a novel attack method which can make a stronger perturbation to the input images, resulting in the robustness of models by AT using this attack method is improved. Li _et al._[20] revealed a link between fast growing gradient of examples and catastrophic overfitting during robust training, and proposed a subspace AT method to mitigate the overfitting and increase the robustness. Dabouei _et al._[21] found the gradient norm in AT is higher than natural training, which hinders the training convergence of outer optimization of AT. And they proposed a gradient regularization method to improve the performance of AT. AT improves the robustness of the DCNN models by generating AEs using one specific attack method. However, in practice, the models trained by AT may be still vulnerable when facing the novel attack methods. **Part-based Model.** Li _eg al._[7] argued one reason that DCNN models are prone to be attacked is they are trained only on category labels, not on the part-based knowledge as humans do. They proposed a object recognition model, which first segments the parts of objects, scores the segmentations based on human knowledge, and final outputs the classification results based on the scores. This part-based model shows better robustness than classic recognition models across various attack settings. Sitawarin _et al._[8] also thought richer annotation information can help learn more robust features. They proposed a part segmentation model with a head classifier trained end-to-end. The model first segments objects into parts, and then makes predictions based on the parts. In both works, the shape-like features are learnt to enhance the robustness of DCNN models remarkably. However, the proposed part-based models require the detailed segmentation annotation of objects and prior knowledge, and are hard to be combined with the existing recognition or detection architectures. In this paper, we propose a binary feature learning module which can be integrated into any popular backbone and enhance the robustness of DCNN models notably. ## III Shallow Binary Feature Module In this section, we give a detailed description of SBFM, which is lightweight and can be integrated into any popular backbone. SBFM stacks two types of layers-Sobel layer and threshold layer. In Sobel layer, four parallel feature maps are learnt to represent horizontal, vertical, and diagonal features respectively. In threshold layer, it sets a threshold to turn the features from Sobel layers into binary features. The architecture of a SBFM integrated classification model is shown in Fig. 3. Fig. 2: **Thresholded edge images of Fig. 1.** **Sobel Layer.** A Sobel layer consists of four parallel feature maps, i.e., horizontal feature maps, vertical feature maps, positive diagonal feature maps, and negative diagonal feature maps. For each feature map, a constrained kernel is designed to ensure the corresponding features can be learnt. 
Taking kernel size of 3 by 3 for example, the four types of kernel are shown in Fig. 4. For horizontal kernel in Fig. 4, \[w_{ij}\left\{\begin{array}{ll}\in[0,\,1]&\text{if }i=1,\,j=1,\,2,\,3,\\ =0&\text{if }i=2,\,j=1,\,2,\,3,\\ \in[\text{-1},\,0]&\text{if }i=3,\,j=1,\,2,\,3,\end{array}\right. \tag{1}\] For vertical kernel in Fig. 4, \[w_{ij}\left\{\begin{array}{ll}\in[0,\,1]&\text{if }j=1,\,i=1,\,2,\,3,\\ =0&\text{if }j=2,\,i=1,\,2,\,3,\\ \in[\text{-1},\,0]&\text{if }j=3,\,i=1,\,2,\,3,\end{array}\right. \tag{2}\] For positive diagonal kernel in Fig. 4, \[w_{ij}\left\{\begin{array}{ll}\in[0,\,1]&\text{if }(i,j)\in\{(1,\,1), (1,\,2), (2,\,1)\,\},\\ =0&\text{if }(i,j)\in\{(1,\,3), (2,\,2), (3,\,1)\,\},\\ \in[\text{-1},\,0]&\text{if }(i,j)\in\{(2,\,3), (3,\,2), (3,\,3)\,\},\end{array}\right. \tag{3}\] For negative diagonal kernel in Fig. 4, \[w_{ij}\left\{\begin{array}{ll}\in[0,\,1]&\text{if }(i,j)\in\{(1,\,2), (1,\,3), (2,\,3)\,\},\\ =0&\text{if }(i,j)\in\{(1,\,1), (2,\,2), (3,\,3)\,\},\\ \in[\text{-1},\,0]&\text{if }(i,j)\in\{(2,\,1), (3,\,1), (3,\,2)\,\},\end{array}\right. \tag{4}\] The output of a Sobel layer is the addition of the four feature maps. **Threshold Layer.** Features computed from the last Sobel layer are feeded into threshold layer, which can be denoted as \(\textbf{x}^{in}\in R^{N\times P\times Q}\), where \(N\) is the number of channels of the feature map, and \(P\) and \(Q\) are the width and height of each channel respectively. We set a threshold for each channel, and the features output from the threshold layer, \(\textbf{x}^{out}\), can be computed using Eq. 5. \[x^{out}_{ikj}=\left\{\begin{array}{ll}1&\text{if }x^{in}_{ikj}\geq t\text{ * }\max(\textbf{x}^{in}_{i}),\\ 0&\text{if }x^{in}_{ikj}<t\text{ * }\max(\textbf{x}^{in}_{i}),\end{array}\right. \tag{5}\] where \(i\) = 1,2,...,\(N\), \(k\) = 1,2,...,\(P\), and \(j\) = 1,2,...,\(Q\). \(t\) \(\in\) [0, 1], and \(\max(\textbf{x}^{in}_{i})\) represents the maximum value of channel \(i\). It is obvious to see that, the higher \(t\), the less binary features obtained; the smaller \(t\), the more binary features obtained. ## IV Experiments In this section, we conduct experiments to examine whether the SBFM integrated VGG16 (VGG16-SBFM) and ResNet34 (ResNet34-SBFM) are more robust than the original ones on CIFAR-10 [22], TinyImageNet [23], and Cats-and-Dogs [24] datasets. Firstly, the effects of SBFM on model training are described. Secondly, the robustness of the SBFM integrated models and the original ones are compared under FGSM attack. And thirdly, the impacts of the number of Sobel layers and threshold on the robustness of the SBFM integrated models are analyzed. All experiments are coded using TensorFlow, and run on Intel i9 cpu of 3.6GHz with 64 GB RAM. One TITAN XP GPU is employed. ### _Datasets_ **CIFAR-10.** CIFAR-10 dataset includes 10 classes, each of which has 6,000 training images, and 1,000 test images. In experiments, 10% training images in each class are taken as validation images. The resolution of images is 32\(\times\) 32. **TinyImageNet.** TinyImageNet dataset includes 200 classes, each of which has 500 training images, and 50 validation images. In experiments, validation images are taken as test images, and 10% training images in each class are taken as validation images. The resolution of images is 64\(\times\) 64. **Cats-and-Dogs.** Cats-and-Dogs dataset includes 2 classes, each of which has 12,500 images. 
In experiments, 9,000, 1,000, and 2,500 images in each class are taken as training, validation, and test images, respectively. The resolution of images is resized to 224\(\times\) 224. Fig. 4: **Four types of kernels of Sobel layer.** Fig. 3: **The architecture of a SBFM integrated classification model.** ### _Effects of SBFM on model training_ We examine the effects of SBFM on model training by comparing three metrics between the SBFM integrated models and the original ones, i.e., training accuracy, test accuracy, and training time per epoch. In SBFM, there are generally two parameters to be set-one is the number of Sobel layers, denoted as \(l\), and the other is the proportion coefficient for threshold layer, denoted as \(t\). On CIFAR-10 dataset, \(l\) and \(t\) are set to 3 and 0.8, respectively, for both VGG16-SBFM and ResNet34-SBFM. On TinyImageNet and Cats-and-Dogs dataset, \(l\) and \(t\) are set to 2 and 0.8 for VGG16-SBFM, and 3 and 0.8 for ResNet34-SBFM, respectively. The loss function, optimizer, the batch size, and the number of epochs are set to be the same. Each model is run for five times, and the average values are recorded. Table I shows the comparison of training performance between SBFM integrated models and the original ones. From the table, it is clear to see the SBFM integrated models are on a par with the original models on three metrics, which indicates SBFM has no side effect on model training and is lightweight. ### _Performance comparison between SBFM integrated models and original models under FGSM attack_ We compare the classification accuracy between SBFM integrated models and original ones under FGSM attack, with \(\epsilon\) set to 0.1/255, 0.5/255, 1/255, 2/255, 3/255, 5/255, and 8/255. \(l\) and \(t\) of the SBFM integrated models are set to the same values in Sec. IV-B. Each model is run for five times, and the average value is recorded. Fig. 5-Fig. 7 show the changing curves of the classification accuracy for both SBFM integrated models and the original ones with the attack intensity \(\epsilon\) increasing. From the figures, we can see that, in both CIFAR-10 and TinyImageNet datasets, the changing curves of SBFM integrated models are much more gentle compared to the original ones with the increase of \(\epsilon\), showing the robustness is enhanced greatly after integrating SBFM. In Cats-and-Dogs dataset, the changing patterns of curves for both SBFM integrated models and the original ones are nearly the same, showing no notable advantage by SBFM for robustness enhancement. Here we make a note that, when training on Cats-and-Dogs dataset, the strides and pooling coefficients for SBFM are forced to set larger than those on CIFAR-10 and TinyImageNet datasets, due to the memory limitation. This may result in the poor performance of SBFM integrated models on Cats-and-Dogs dataset. Table II shows the classification accuracy at \(\epsilon\) = 8/255 for both SBFM integrated models and the original ones on three datasets. From the table, we can see that, on CIFAR-10, VGG16-SBFM is 47.43% higher accurate than VGG16, achieving 77.44% classification accuracy. ResNet34-SBFM is 43.46% higher accurate than ResNet34, achieving 66.41% classification accuracy. And on TinyImageNet dataset, VGG16-SBFM is 44.37% higher accurate than VGG16, achieving 72.38% classification accuracy. ResNet34-SBFM is 71.10% higher accurate than ResNet34, achieving 81.46% classification accuracy. The results demonstrate the binary Fig. 
5: **Changing curve of classification accuracy with the strengthening of the attack intensity \(\epsilon\) on CIFAR-10.** Fig. 6: **Changing curve of classification accuracy with the strengthening of the attack intensity \(\epsilon\) on TinyImageNet.** features learnt by SBFM can enhance the robustness of DCNN models notably. Fig. 8-Fig. 11 show the AEs of \(\epsilon\) = 8/255 which can be classified correctly by the SBFM integrated models. ## V Discussions The experimental results from Section IV shows SBFM has no side effect on model training, and it is lightweight and can be integrated into any backbone. Furthermore, the binary features learnt by the SBFM can enhance the robustness of DCNN models notably. Here, we want to emphasize that, in this paper, a heuristic combination form of texture features and binary features (shown in Fig. 3) is introduced. We believe it is worthwhile to explore other effective and efficient combination forms of both features to enhance the robustness of DCNN models. Also, from the experimental results, it can be seen that, the performance of SBFM integrated models under attack changes distinctly for different parameter settings, and the generalization ability of the trained SBFM integrated models is relatively weak at certain parameter settings. It hints us that it is of great importance to design an optimization framework for Fig. 11: **Correctly classified adversarial examples of Cats-and-Dogs by ResNet34-SBFM.** Fig. 12: **SBFM with 1 Sobel layer, 2 Sobel layers, and 3 Sobel layers. HK stands for horizontal kernel. VK stands for vertical kernel. PDK stands for positive diagonal kernel. NDK stands for negative diagonal kernel.** parameter searching to yield the SBFM integrated models with exceptional classification accuracy under attack and satisfying generalization ability. ## VI Conclusions Enhancing the robustness of DCNN models is of great significance for the safety-critical applications in the real world. In this paper, we first raise the question "can a DCNN model learn certain features which are insensitive to small perturbations, and further defend itself no matter what attack methods are presented", and make a beginning effort to answer it. A shallow binary feature module is proposed, which is lightweight and can be integrated into any popular backbone. The experimental results on multiple datasets show the binary features learnt by the shallow binary feature module can enhance the robustness of DCNN models notably. In future's work, we endeavor to explore other effective and efficient combination forms of binary features and texture features, and design an optimization framework for the parameter searching to yield models with good performance under attack and generalization ability. The work in this paper shows it is promising to enhance the robustness of DCNN models through feature learning.
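As a concrete reading of the Sobel and threshold layers defined in Section III, the NumPy sketch below instantiates one admissible horizontal kernel under the constraints of Eq. (1) and implements the per-channel binarization of Eq. (5). The function names and the random sampling of the free kernel entries are our own illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def horizontal_kernel():
    """One admissible 3x3 horizontal kernel under Eq. (1):
    top row in [0, 1], middle row zero, bottom row in [-1, 0]."""
    k = np.zeros((3, 3))
    k[0, :] = rng.uniform(0.0, 1.0, size=3)
    k[2, :] = rng.uniform(-1.0, 0.0, size=3)
    return k

def threshold_layer(x, t=0.8):
    """Eq. (5): binarize each channel against t times its own maximum.
    x: (N, P, Q) stack of feature maps from the last Sobel layer."""
    channel_max = x.max(axis=(1, 2), keepdims=True)
    return (x >= t * channel_max).astype(x.dtype)

features = np.abs(rng.standard_normal((4, 32, 32)))
binary = threshold_layer(features, t=0.8)  # entries are 0.0 or 1.0
```

Larger values of `t` keep fewer active binary features, matching the remark after Eq. (5).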
2308.12044
A multiobjective continuation method to compute the regularization path of deep neural networks
Sparsity is a highly desired feature in deep neural networks (DNNs) since it ensures numerical efficiency, improves the interpretability of models (due to the smaller number of relevant features), and robustness. For linear models, it is well known that there exists a \emph{regularization path} connecting the sparsest solution in terms of the $\ell^1$ norm, i.e., zero weights and the non-regularized solution. Very recently, there was a first attempt to extend the concept of regularization paths to DNNs by means of treating the empirical loss and sparsity ($\ell^1$ norm) as two conflicting criteria and solving the resulting multiobjective optimization problem for low-dimensional DNN. However, due to the non-smoothness of the $\ell^1$ norm and the high number of parameters, this approach is not very efficient from a computational perspective for high-dimensional DNNs. To overcome this limitation, we present an algorithm that allows for the approximation of the entire Pareto front for the above-mentioned objectives in a very efficient manner for high-dimensional DNNs with millions of parameters. We present numerical examples using both deterministic and stochastic gradients. We furthermore demonstrate that knowledge of the regularization path allows for a well-generalizing network parametrization. To the best of our knowledge, this is the first algorithm to compute the regularization path for non-convex multiobjective optimization problems (MOPs) with millions of degrees of freedom.
Augustina C. Amakor, Konstantin Sonntag, Sebastian Peitz
2023-08-23T10:08:52Z
http://arxiv.org/abs/2308.12044v5
# A multiobjective continuation method ###### Abstract Sparsity is a highly desired feature in deep neural networks (DNNs) since it ensures numerical efficiency, improves the interpretability of models (due to the smaller number of relevant features), and robustness. In machine learning approaches based on linear models, it is well known that there exists a connecting path between the sparsest solution in terms of the \(\ell^{1}\) norm (i.e., zero weights) and the non-regularized solution, which is called the regularization path. Very recently, there was a first attempt to extend the concept of regularization paths to DNNs by means of treating the empirical loss and sparsity (\(\ell^{1}\) norm) as two conflicting criteria and solving the resulting multiobjective optimization problem. However, due to the non-smoothness of the \(\ell^{1}\) norm and the high number of parameters, this approach is not very efficient from a computational perspective. To overcome this limitation, we present an algorithm that allows for the approximation of the entire Pareto front for the above-mentioned objectives in a very efficient manner. We present numerical examples using both deterministic and stochastic gradients. We furthermore demonstrate that knowledge of the regularization path allows for a well-generalizing network parametrization. ## Introduction Machine Learning (ML) and in particular deep neural networks (DNNs) are nowadays an integral part of numerous applications such as data classification, image recognition, time series prediction, and language processing. Their importance continues to grow at great speed across numerous disciplines and applications, and the increase in available computational capacity allows for the construction of larger models. However, these advances also increase the challenges regarding the construction and training of DNNs, e.g., the required training data, the training efficiency, and the adaptability to changing external factors. This leads to the task of simultaneously fulfilling numerous, sometimes contradictory goals in the best possible way. Multiobjective optimization (MOO) addresses the problem of optimizing several conflicting objectives. The issue of having to trade between multiple, conflicting criteria is a universal problem, such as the need to have an optimal tradeoff between cost and quality in a production process. In a similar manner, conflicting criteria occur in various ways in ML. The main task of MOO is thus to identify the set of optimal trade-off solutions (the _Pareto set_) between these conflicting criteria. This concerns multitask problems [14], but also the training itself, where we want to minimize the training loss, obtain sparse models and improve generalization. Interestingly, we found that many papers on multicriteria machine learning do not address the true nature of multiobjective optimization. The reason for this, is that when choosing very large neural network architectures or even considering task-specific layers [14], the different tasks are no longer conflicting. The network is simply too powerful such that both task can be optimally handled simultaneously and there is no tradeoff. From an optimization point of view, the Pareto front collapses into a single point. However, considering the strongly growing carbon footprint of AI [10], there is a growing interest in smaller models that are better tailored to specific tasks. This is why we propose the use of models that are smaller and adapted to a certain task. 
While this will reduce the general applicability, multicriteria optimization can help us to determine a set of compromising network architectures, such that we can adapt networks to specific situations online (multicriteria decision making). The joint consideration of loss and \(\ell^{1}\) regularization is well-studied for linear systems. However, it is much less understood for the nonlinear problems that we face in deep learning. When treating the \(\ell^{1}\) regularization problem as a multiobjective optimization problem (MOP), a popular approach to obtain the entire solution set (the _Pareto set_) is via continuation methods [15, 16]. These continuation methods usually consist of a predictor step (along the tangent direction of the Pareto set) and a corrector step that converges to a new point on the Pareto set close by. However, as the \(\ell^{1}\) norm is non-smooth, classical manifold continuation techniques fail. Due to this fact, a first extension of regularization paths from linear to nonlinear models was presented only very recently in [1], where continuation methods were extended to non-smooth objective functions. Although this extension provides a rigorous treatment of the problem, it results in a computationally expensive algorithm, which renders it impractical for DNNs of realistic dimensions. As research on regularization paths for nonlinear loss functions has focused on small-scale learning problems until now, this work is concerned with large-scale machine learning problems. The main contributions of this paper include: * The extension of regularization paths from linear models to high dimensional nonlinear deep learning problems. * The demonstration of the usefulness of MOO continuation methods for the generalization properties of DNNs. * A step towards more resource-efficient ML by beginning with very sparse networks and slowly increasing the number of weights. This is in complete contrast to the standard pruning approaches, where we start very large and then reduce the number of weights. A comparison of our approach is further made with the standard approach of weighting the individual losses using an additional hyperparameter. The remainder of the paper is organized as follows. First, we discuss related works, before introducing our continuation method. We then present a detailed discussion of our extensive numerical experiments. ## Related works ### Multiobjective optimization In the last decades, many approaches have been introduced for solving nonlinear [11], nonsmooth [12], non-convex [11], or stochastic [12] multiobjective optimization problems, to name just a few special problem classes. In recent years, researchers have further begun to consider multiobjective optimization in machine learning [21, 22, 23] and deep learning [24, 25]. We provide an overview of the work that is most pertinent to our own and direct readers to the works done by [24] and [2]. Ehrgott2008 described MOO as the process of optimizing multiple, sometimes conflicting objectives simultaneously by finding the set of points that are _not dominated_1 by any other feasible points. Different methods have been proposed to obtain these Pareto optimal solutions such as _evolutionary algorithms_ which use the evolutionary principles inspired by nature by evolving an entire population of solutions [1]. _Gradient-based techniques_ extend the well-known gradient techniques from single-objective optimization to multiple criteria problems [10]. 
_Set Oriented Methods_ provide an approximation of the Pareto set through box coverings and often suffer from the curse of dimensionality which makes applications in ML very expensive [13]. Another deterministic approach are _scalarization methods_ which involve the transformation of MOP into (a series of) single objective optimization problems [1]. These can then be solved using standard optimization techniques. Examples include the weighted sum method, epsilon constraint, Chebyshev scalarization, goal programming, min-max and augmented epsilon constraint method [12, 23]. Some drawbacks exist for the latter methods, most notably the incapability of some to handle non-convex problems, as well as the challenge to obtain equidistant coverings. Notably, these drawbacks are most severe in the weighted sum method, which is by far the most widely applied approach in ML when considering multiple criteria at the same time (such as regularization terms). Moreover, the weighting has to be made a priori, which makes selecting useful trade-offs much harder [10, 24]. Footnote 1: A point is said to _dominate_ another point if it performs at least as good as the other point across all objectives and strictly better than it in at least one objective. It is referred to as a _Pareto optimal_ or efficient point. _Continuation methods_ are another approach for the computation of the solutions of MOPs. These methods were initially developed to solve complex problems starting from simple ones, using homotopy approaches2. Also, they were used in predictor-corrector form to track manifolds [11, 12]. In the multiobjective setting, one can show that the Pareto set is a manifold as well3, if the objectives are sufficiently regular [11]. Hence, the continuation method then becomes a technique to follow along the Pareto set in two steps; **a)** predictor step along the tangent space of the Pareto set which forms in the smooth setting a manifold of dimension _m - I_ (where \(m\) is the number of objective functions) [12] and **b)** a corrector step that obtains the next point on the Pareto set which is often achieved using multiobjective gradient descent. Schutze, Dell'Aere, and Dellnitz (2005) used the predictor-corrector approach to find points that satisfy the Karush-Kuhn-Tucker (KKT) condition and to further identify other KKT points in the neighbourhood. Footnote 2: These are approaches that involve starting at a simple-to-calculate solution and then continuously vary some parameter to increase the problem difficulty step by step, until finally arriving at the solution of the original problem, which is often very hard to compute directly [12]. Footnote 3: To be more precise, the set of points satisfying the Karush-Kuhn-Tucker (KKT) necessary conditions for optimality along with the corresponding KKT multipliers form a manifold of dimension \(m-1\). The continuation method has further been extended to regularization paths in machine learning (ML) for linear models [13, 12]. Recently, regularization paths for nonlinear models have been introduced although limited to small dimensions since dealing with non-smoothness is difficult [10]. ### Multicriteria machine learning In the context of multicriteria machine learning, several approaches have been used such as _evolutionary algorithms_[1, 1, 12] or _gradient-based methods_[24, 12]. However, only a few of these methods address high-dimensional deep learning problems or attempts to compute the entire Pareto front. 
Furthermore, as discussed in the introduction, many researchers have introduced architectures that are so powerful that even with the inclusion of the task-specific parts, the Pareto front collapses into a single point (e.g., Sener and Koltun (2018)). As we want to pursue a more sustainable path here, we are interested in truly conflicting criteria. The entire regularization path for DNNs (i.e., a MOP with training loss versus \(\ell^{1}\) norm) was computed in (Bieker, Gebken, and Peitz 2022). However, even though the algorithm provably yields the Pareto front, the computation becomes intractable in very high dimensions. Hence, the need to develop an efficient method to find the entire regularization path and Pareto front for DNNs. For large-scale MOO, gradient-based methods have proven to be the most efficient. Examples are the steepest descent method (Fliege and Svaiter 2000), projected gradient method (Drummond and Iusem 2004), proximal gradient method (Tanabe, Fukuda, and Yamashita 2019) and recently accelerated proximal gradient (Tanabe, Fukuda, and Yamashita 2022). Previous approaches to MOO problems often assume smoothness/differentiability of the objective functions but the \(\ell^{1}\) norm is not differentiable, so we use the multiobjective proximal gradient (MPG) to ensure convergence. MPG has been described by Tanabe, Fukuda, and Yamashita (2019) as a descent method for unconstrained MOPs where each objective function can be written as the sum of a convex and smooth function, and a proper, convex, and lower semicontinuous but not necessarily differentiable one. Simply put, MPG combines the proximal point and the steepest descent method and is a generalization of the iterative shrinkage-thresholding algorithms (ISTA) (Combettes and Wajs 2005; Beck and Teboulle 2009) to multiobjective optimization problems. ## Continuation Method ### Some basics on multiobjective optimization A multiobjective optimization problem can be mathematically formalized as \[\min_{\theta\in\mathbb{R}^{n}}\left[\begin{array}{c}F_{1}(\theta)\\ \vdots\\ F_{m}(\theta)\end{array}\right],\] (MOP) where \(F_{i}:\mathbb{R}^{n}\rightarrow\mathbb{R}\ \forall\ i=1,\ldots,m\) are the generally conflicting objective functions and \(\theta\) the parameters we are optimizing over. In general, there does not exist a single point that minimizes all criteria simultaneously, the solution to MOP is the entire set of optimal compromises, the so-called Pareto set. **Definition 1** (Miettinen (1998)): _Consider the optimization problem (MOP)._ 1. _A point_ \(\theta^{*}\in\mathbb{R}^{n}\) _is_ Pareto optimal _if there does not exist another point_ \(\theta\in\mathbb{R}^{n}\) _such that_ \(f_{i}(\theta)\leq f_{i}(\theta^{*})\) _for all_ \(i=1,\ldots,m\)_, and_ \(f_{j}(\theta)<f_{j}(\theta^{*})\) _for at least one index_ \(j\)_. The set of all Pareto optimal points is the_ Pareto set_, which we denote by_ \(P\)_. The set_ \(F(P)\subset\mathbb{R}^{m}\) _in the image space is called the_ Pareto front_._ 2. _A point_ \(\theta^{*}\in\mathbb{R}^{n}\) _is_ weakly Pareto optimal _if there does not exist another point_ \(\theta\in\mathbb{R}^{n}\) _such that_ \(f_{i}(\theta)<f_{i}(\theta^{*})\) _for all_ \(i=1,\ldots,m\)_. The set of all weakly Pareto optimal points is the_ weak Pareto set_, which we denote by_ \(P_{w}\) _and the set_ \(F(P_{w})\) _is the_ weak Pareto front_._ The above definition is not very helpful for gradient-based algorithms. In this case, we need to rely on first order optimality conditions. 
**Definition 2** (Hilermeier (2001)): _A point \(\theta^{*}\in\mathbb{R}^{n}\) is called Pareto critical if there exist an \(\alpha\in\mathbb{R}^{m}\) with \(\alpha_{i}\geq 0\) for all \(i=1,\ldots,m\) and \(\sum_{i=1}^{m}\alpha_{i}=1\) satisfying_ \[\sum_{i=1}^{m}\alpha_{i}\nabla F_{i}(\theta^{*})=0. \tag{1}\] _Equation (1) is generally referred to as the Karush-Kuhn-Tucker (KKT) condition._ **Remark 3**: _All Pareto optimal points are Pareto critical but the reverse is in general, not the case. If all objective functions are convex then each Pareto critical point is weakly Pareto optimal._ In this work, we consider two objective functions, namely the empirical loss and the \(\ell^{1}\) norm of the neural network weights. The Pareto set connecting the individual minima (at least locally), is also known as the regularization path. In terms of a MOO, we are looking for the Pareto set of \[\min_{\theta\in\mathbb{R}^{n}}\left[\begin{array}{c}\mathbb{E}_{(x,y) \sim\mathcal{D}}[\mathcal{L}(f(\theta,x),y)]\\ \frac{1}{n}\|\theta\|_{1}\end{array}\right],\] (MoDNN) where \((x,y)\in\mathcal{X}\times\mathcal{Y}\) is labeled data following a distribution \(\mathcal{D}\), the function \(f:\mathbb{R}^{n}\times\mathcal{X}\rightarrow\mathcal{Y}\) is a parameterized model and \(\mathcal{L}(\cdot,\cdot)\) denotes a loss function. The second objective is the weighted \(\ell^{1}\) norm \(\frac{1}{n}\|\theta\|_{1}\) to ensure sparsity as in the well known LASSO. Our goal is to solve (MoDNN), by which we are able to obtain the regularization path. However, this problem is challenging due to the fact that the \(\ell^{1}\) norm is not differentiable. In order to solve this problem, we explicitly take the non-smoothness of the second objective into account, which is done by using a proximal gradient method (Tanabe, Fukuda, and Yamashita 2019). **Remark 4**: _A common approach to solve the problem (MOP) is the use of the weighted sum method. Which includes all types of regularization when considering multiple criteria in DNN training terms. E.g.,_ \[F(\theta)=\sum_{i=1}^{m}\lambda_{i}F_{i}(\theta), \tag{2}\] _with \(\lambda_{i}\geq 0\) and \(\sum_{i=1}^{m}\lambda_{i}=1\), where \(\lambda\) is an additional hyperparameter and we solve \(\min_{\theta\in\mathbb{R}^{n}}F(\theta)\)._ ### Proximal gradient method Given functions of the form \(F_{i}=f_{i}+g_{i}\), such that \(f_{i}:\mathbb{R}^{n}\rightarrow\mathbb{R}\) is convex and smooth, and \(g_{i}\) is convex and non-smooth with computable proximal operator \(\text{prox}_{g}\), **Definition 5**: _proximal operator;_ _Given a convex function \(g:\mathbb{R}^{n}\rightarrow\mathbb{R}\), the proximal operator is expressed as_ \[\text{prox}_{g}(\theta)=\operatorname*{arg\,min}_{\phi\in\mathbb{R}^{n}}\left\{g (\phi)+\frac{1}{2}\|\phi-\theta\|_{2}^{2}\right\}\] Tanabe, Fukuda, and Yamashita (2019) proposed an algorithm that aims at efficiently solving the resulting MOP using _Algorithm 1_. **Theorem 6** (Tanabe, Fukuda, and Yamashita (2019)): _Let \(f_{i}\) be convex with \(L\)-Lipschitz continuous gradient and let \(g_{i}\) be proper, convex and lower semi continuous for all \(i=1,\ldots,m\). Then, every accumulation point of the sequence \(\{\theta^{k}\}\) computed by Algorithm 1 is Pareto critical. 
In addition, if the level set of one \(F_{i}\) is bounded, the sequence \(\{\theta^{k}\}\) has at least one accumulation point._ **Remark 7**: _Proximal gradient iterations are "forward-backward" iterations, "forward" referring to the gradient step and "backward" referring to the proximal step. The proximal gradient method is designed for problems where the objective functions include a smooth and a nonsmooth component, which is suitable for optimization with a sparsity-promoting regularization term._ **Remark 8**: _In the problem (MoDNN) we are in the situation \(F_{1}(\theta)=f_{1}(\theta)=\mathbb{E}_{(x,y)\sim\mathcal{D}}[\mathcal{L}(f( \theta,x),y)]\) and \(F_{2}(\theta)=g_{2}(\theta)=\frac{1}{n}\|\theta\|_{1}\). The proximal operator \(\text{prox}_{\frac{1}{n}\|\cdot\|_{1}}(\theta)\) has a simple closed form. This allows for an efficient implementation of Algorithm 1 as described in (Tanabe, Fukuda, and Yamashita 2022)._ ### A predictor-corrector method This section describes our continuation approach to compute the entire Pareto front of problem (MoDNN). _Figure 1_ shows an exemplary illustration of the Pareto front approximated by a finite set of points that are computed by consecutive predictor and corrector steps. After finding an initial point \(\theta^{0}\) on the Pareto front, we proceed along the front. This is done by first performing a predictor step in a suitable direction. As this will take us away from the front (Hilermeier, 2001; Bieker, Gebken, and Peitz, 2022), we need to perform a consecutive corrector step that takes us back to the front. **Predictor step.** Depending on the direction we want to proceed in, we perform a predictor step simply by performing a gradient step or a proximal point operator step. Note that this is a deviation from the classical concept of continuation methods as described previously, where we compute the tangent space of a manifold. However, due to the nonsmoothness, the Pareto set of our problem does not possess such a manifold structure, which means that we cannot rely on the tangent space. Nevertheless, this is not necessarily an issue, as the standard approach would require Hessian information, which is too expensive in high-dimensional problems anyway. The approach presented here is significantly more efficient, even though it may come at the cost that the predictor step is sub-optimal. Despite this fact, we found in our numerical experiments that our predictor nevertheless leads to points close enough for the corrector to converge in a small number of iterations. We formulate in equations (3) and (4) the two different steps that can be computed during the predictor step, depending on the direction we want to proceed in. Equation (3) is the gradient step taken for the loss objective function (i.e., "move left" in _Figure 1_) and Equation (4) is the shrinkage performed on the \(\ell^{1}\) norm objective function (i.e., "move down" in _Figure 1_). \[\theta^{k+1} =\theta^{k}-\eta\nabla_{\theta}\mathbb{E}_{(x,y)\sim\mathcal{D} }[\mathcal{L}(f(\theta^{k},x),y)] \tag{3}\] \[\theta^{k+1} =\text{prox}_{\eta\|\cdot\|_{1}}(\theta^{k}) \tag{4}\] **Corrector step.** For the corrector step, we simply perform a multiobjective gradient descent step using the multiobjective proximal gradient method (MPG). Figure 1: Continuation method; the predictor steps are shown in black and blue for Eq. (3) and Eq. (4), respectively. The red is the corrector step. As our goal is to converge to a point on the Pareto front, this step is identical for both predictor directions.
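In code, both predictor moves have simple closed forms. The NumPy sketch below (with our own naming) performs the gradient step of Eq. (3) and the soft-thresholding that realizes the proximal step of Eq. (4), together with an ISTA-style forward-backward iteration as a simplified stand-in for the MPG corrector of Algorithm 1.

```python
import numpy as np

def predictor_gradient(theta, grad_loss, eta):
    """Eq. (3): descend the loss to 'move left' along the front."""
    return theta - eta * grad_loss(theta)

def predictor_shrink(theta, eta):
    """Eq. (4): prox of eta * ||.||_1, i.e. soft-thresholding,
    to 'move down' towards sparser parameters."""
    return np.sign(theta) * np.maximum(np.abs(theta) - eta, 0.0)

def corrector_step(theta, grad_loss, eta):
    """One forward-backward iteration: a gradient step on the loss
    followed by the l1 prox. The true MPG corrector solves a small
    subproblem weighing both objectives; this is a simplification."""
    return predictor_shrink(predictor_gradient(theta, grad_loss, eta), eta)
```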
Both the predictor and corrector steps for both directions of movement along the front are illustrated in _Algorithm 2_ where \(f_{1}\) is the loss and \(g_{2}\) is the \(\ell^{1}\) norm function. ## Numerical experiments In this section we present numerical experiments for our algorithm and the resulting improvements in comparison to the much more widely used weighted sum approach mentioned in _equation_ (2), when considering multiple criteria in DNN training. In real-world problems, uncertainties and unknowns exist. Our previous approach of using the full gradient on every optimization step taken, oftentimes makes it computationally expensive, infeasible in some situations and unable to handle uncertainties [17]. A stochastic approach is included in our algorithm to take care of these limitations. The stochasticity is applied by computing the gradient on mini-batches of the data, hence randomness is introduced. ### Experimental settings To evaluate our algorithm, we first perform experiments on the Iris dataset [13]. Although this dataset is by now quite outdated, it allows us to study the deterministic case in detail. We then extend our experiments to the well-known MNIST dataset [1] using the stochastic gradient approach. The Iris dataset contains 150 instances, each of which has 4 features and belongs to one of 3 classes. The MNIST dataset contains 70000 images, each having 784 features and belonging to one of 10 classes. We split the datasets into training and testing sets in a 80-20 ratio. We use a dense linear neural network architecture with two hidden layers, where each layer has 4 neurons for the Iris dataset and 20 neurons for the MNIST data and the Rectified Linear Unit (ReLU) activation function for both layers. Cross-Entropy is used as the loss function. We compare the results from our algorithm against the weighted sum algorithm in _equation_ (2), where \(\lambda\) is the weighting parameter. We choose 44 different values for \(\lambda\), chosen equidistantly on the interval \([0,1]\) and solve the resulting problems using the ADAM optimizer [14]. The experiments are carried out on a machine with 2.10 GHz 12th Gen Intel(R) Core(TM) i7-1260P CPU and 32 GB memory, using Python 3.8.8. The source code is available at [https://github.com/aamakor/continuation-method](https://github.com/aamakor/continuation-method). To illustrate the behaviour of our algorithm, we first study the Iris dataset in a deterministic setting. To obtain a baseline, we have executed _Algorithm 2_ using very small step sizes. Interestingly, the Pareto set and front consist of multiple components, which we were only able to find by repeated application of the continuation method with random initial conditions (multi-start). The resulting solution is shown in blue in _Figure 2_, where three connected components of the Pareto critical points are clearly visible. As this approach is much too expensive for a realistic learning setting (the calculations took close to a day for this simple problem), we compare this to a more realistic setting in terms of step sizes. The result is shown via the red symbols. 
Motivated by our initial statement on more sustainable network architectures, we have initialized our network with all weights very close to zero (the black "*" in _Figure 2_) and then proceed along the front towards a less sparse but more accurate architecture.4 Footnote 4: Due to symmetries, an initialization with all zeros poses problems in terms of which weights to activate, etc., see [1] for details. As we do not need to compute every neural network parametrization from scratch, but use our predictor-corrector scheme, the calculation of each individual point on the front is much less time-consuming than classical DNN training. Moreover, computing the slope of the front from consecutive points allows for the online detection of relevant regions. Very small or very large values of the slope indicate that small changes in one objective lead to a large improvement in the other one, which is usually not of great interest in applications. Moreover, this can be used as an early stopping indicator to avoid overfitting, as we will see in the next example. ## Results Motivated by these promising results, we now study the MNIST dataset in a stochastic setting (i.e., using mini batches for the loss function). To obtain the initial point on the Pareto front using _Algorithm 1_, we perform 500 iterations. For the subsequent steps, 7 iterations are used for a predictor step and 20 for a corrector step, repeated 43 times, i.e., \(N_{cont}=44\). _Figure 3 (a)_ and _(b)_ show the Pareto front and accuracy versus \(\ell^{1}\) norm, respectively. In this setting, we have started with a point in the middle of the front in blue and then apply the continuation method twice (once in each direction). As indicated above, we observe overfitting for non-sparse architectures, which indicates that we do not necessarily have to pursue the regularization path until the end, but can stop once the slope of the Pareto front becomes too steep. This provides an alternative training procedure for DNNs where, in contrast to pruning, we start sparse and then get less sparse only as long as we do not run into overfitting. Figure 2: Pareto front approximation for the Iris dataset using _Algorithm 2_ (red symbols) versus the reference Pareto front in "blue" (computed using the same algorithm with very small step sizes and many different initial conditions) with unscaled \(\ell^{1}\) norm. _Figure 4_ shows the comparison of our results to the weighted sum (WS) approach in _equation_ (2). For the WS approach, we compute 200 epochs per point on the front, repeated 44 times, i.e., \(N_{\lambda}=44\) equidistantly distributed weights \(\lambda\). In total, the WS approach needs 5 times as many epochs as the continuation method (_Algorithm 2_) and therefore 5 times as many forward and backward passes. We also see that the WS approach results in clustering around the unregularized and very sparse solutions, respectively. In contrast, we obtain a similar performance with a much sparser architecture using continuation. This shows the superiority of the continuation method and also highlights the randomness associated with simple regularization approaches when not performing appropriate hyperparameter tuning. Finally, we note that once an initial point on the Pareto front for the predictor-corrector method (CM) is identified, fewer iterations are required compared with the WS method. ## Conclusion We have presented an extension of regularization paths from linear models to high-dimensional nonlinear deep learning models.
This was achieved by extending well-known continuation methods from multiobjective optimization to non-smooth problems and by introducing more efficient predictor and corrector steps. The resulting algorithm shows a performance that is suitable for high-dimensional learning problems. Moreover, we have demonstrated that starting with sparse models can help to avoid overfitting and significantly reduce the size of neural network models. Due to the small training effort for consecutive points on the Pareto front, this presents an alternative, structured approach to deep neural network training, one that proceeds in the opposite direction to pruning methods. Starting sparse, we increase the number of weights only as long as the Pareto front does not become too steep, as a steep front suggests overfitting. For future work, we will extend our results to even higher-dimensional problems (e.g., CIFAR-10), as well as consider more than two objectives. ## Acknowledgements This work was supported by the AI junior research group "Multicriteria Machine Learning", funded by the German Federal Ministry of Education and Research (BMBF).
2303.11617
Adaptive quadratures for nonlinear approximation of low-dimensional PDEs using smooth neural networks
Physics-informed neural networks (PINNs) and their variants have recently emerged as alternatives to traditional partial differential equation (PDE) solvers, but little literature has focused on devising accurate numerical integration methods for neural networks (NNs), which is essential for getting accurate solutions. In this work, we propose adaptive quadratures for the accurate integration of neural networks and apply them to loss functions appearing in low-dimensional PDE discretisations. We show that at opposite ends of the spectrum, continuous piecewise linear (CPWL) activation functions enable one to bound the integration error, while smooth activations ease the convergence of the optimisation problem. We strike a balance by considering a CPWL approximation of a smooth activation function. The CPWL activation is used to obtain an adaptive decomposition of the domain into regions where the network is almost linear, and we derive an adaptive global quadrature from this mesh. The loss function is then obtained by evaluating the smooth network (together with other quantities, e.g., the forcing term) at the quadrature points. We propose a method to approximate a class of smooth activations by CPWL functions and show that it has a quadratic convergence rate. We then derive an upper bound for the overall integration error of our proposed adaptive quadrature. The benefits of our quadrature are evaluated on a strong and weak formulation of the Poisson equation in dimensions one and two. Our numerical experiments suggest that compared to Monte-Carlo integration, our adaptive quadrature makes the convergence of NNs quicker and more robust to parameter initialisation while needing significantly fewer integration points and keeping similar training times.
Alexandre Magueresse, Santiago Badia
2023-03-21T06:26:19Z
http://arxiv.org/abs/2303.11617v3
Adaptive Quadratures for Nonlinear Approximation of Low-Dimensional PDEs using Smooth Neural Networks ###### Abstract. Physics-informed neural networks (PINNs) and their variants have recently emerged as alternatives to traditional partial differential equation (PDE) solvers, but little literature has focused on devising accurate numerical integration methods for neural networks (NNs), which is essential for getting accurate solutions. In this work, we propose adaptive quadratures for the accurate integration of neural networks and apply them to loss functions appearing in low-dimensional PDE discretisations. We show that at opposite ends of the spectrum, continuous piecewise linear (CPWL) activation functions enable one to bound the integration error, while smooth activations ease the convergence of the optimisation problem. We strike a balance by considering a CPWL approximation of a smooth activation function. The CPWL activation is used to obtain an adaptive decomposition of the domain into regions where the network is almost linear, and we derive an adaptive global quadrature from this mesh. The loss function is then obtained by evaluating the smooth network (together with other quantities, e.g., the forcing term) at the quadrature points. We propose a method to approximate a class of smooth activations by CPWL functions and show that it has a quadratic convergence rate. We then derive an upper bound for the overall integration error of our proposed adaptive quadrature. The benefits of our quadrature are evaluated on a strong and weak formulation of the Poisson equation in dimensions one and two. Our numerical experiments suggest that compared to Monte-Carlo integration, our adaptive quadrature makes the convergence of NNs quicker and more robust to parameter initialisation while needing significantly fewer integration points and keeping similar training times. \({}^{\dagger}\)School of mathematics, Monash university, Clayton, Victoria 3800, Australia \({}^{\sharp}\) Centre Internacional de Metodes Numerics a l'Enginveria, Campus Nord, 08034 Barcelona, Spain _E-mail addresses: [email protected], [email protected]._ _Date:_ March 23, 2023. _Key words and phrases._ adaptive quadrature, integration, PDE, neural networks. \({}^{*}\)Corresponding author. ## 1. Introduction In the last years, the use of Neural Networks (NNs) to approximate partial differential equations (PDEs) has been widely explored as an alternative to standard numerical methods such as the Finite Element Method (FEM). The idea behind these methods is to represent the numerical solution of a PDE as the realisation of a NN. The parameters of the NN are then optimised to minimise a form of the residual of the PDE or an energy functional. The original Physics-Informed Neural Network (PINN) method [1] considers the minimisation of the PDE in the strong form. Other frameworks recast the PDE as a variational problem and minimise the weak energy functional, like in Variational PINNs (VPINNs, [2]), the Deep Ritz Method (DRM, [3]) and the Deep Nitsche Method (DNM, [4]). **Comparison between the FEM and NNs.** NNs activated by the \(\mathrm{ReLU}\) function (Rectified Linear Unit) constitute a valuable special case, as they can emulate linear finite elements spaces on a mesh that depends on the parameters of the network [5]. This enables one to draw close links between the well-established theory of the FEM and the recently introduced neural frameworks [6]. 
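This emulation property is easy to verify in a special case: the following NumPy snippet (ours, for illustration; node locations arbitrary) checks the classical identity expressing a first-order Lagrange "hat" basis function with three ReLU neurons.

```python
# Minimal NumPy check: a one-hidden-layer ReLU network reproduces a P1 hat
# function exactly (up to floating-point round-off).
import numpy as np

relu = lambda x: np.maximum(x, 0.0)

def hat(x, a, m, b):
    # Piecewise linear hat: 0 outside [a, b], 1 at the interior node m
    return np.interp(x, [a, m, b], [0.0, 1.0, 0.0])

def hat_as_relu_net(x, a, m, b):
    # Three ReLU neurons (weights 1, biases -a, -m, -b) combined linearly
    return (relu(x - a) / (m - a)
            - relu(x - m) * (1.0 / (m - a) + 1.0 / (b - m))
            + relu(x - b) / (b - m))

x = np.linspace(-2.0, 2.0, 1001)
a, m, b = -0.5, 0.2, 1.0            # arbitrary node locations
err = np.max(np.abs(hat(x, a, m, b) - hat_as_relu_net(x, a, m, b)))
print(f"max deviation: {err:.2e}")  # ~1e-16, i.e. exact up to round-off
```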
There are additional benefits to using the ReLU activation function, such as addressing the issue of exploding or vanishing gradients and being computationally cheap to evaluate. However, the numerical experiments in [7] suggest that the theoretical approximation bounds for \(\mathrm{ReLU}\) networks are not met in practice: even though there exist network configurations that realise approximations of a given accuracy, the loss landscape is rough because of the low regularity of \(\mathrm{ReLU}\). As a result, gradient-based optimisation algorithms are largely dependent on the initial parameters of the network and may fail to converge to a satisfying local minimum. It was also shown in [8] that training a network with a smooth activation function helps the network converge. Compared to the FEM, NNs can be differentiated thanks to automatic differentiation rather than relying on a numerical scheme (and ultimately a discretisation of the domain). Another advantage of NNs is that they are intrinsically adaptive, in the sense that contrary to traditional PDE solvers that rely on a fixed mesh and fixed shape functions, NNs can perform nonlinear approximation. Finally, the machine learning approach proposes a unified setting to handle forward and inverse problems that can involve highly nonlinear differential operators. However, properly enforcing initial and boundary conditions remains one of the major challenges for NNs [9], especially when the domain has a complex geometry. **Numerical integration of NNs.** Current efforts in the literature are mainly focused on developing novel model architectures or solving increasingly complex PDEs, while there are comparatively few results on the convergence, consistency and stability of PINNs and related methods. These are the three main ingredients that provide theoretical guarantees on the approximation error, and they remain to be established for NNs. In particular, the numerical quadratures that are used to evaluate the loss function have mostly been investigated experimentally, and the link between the integration error of the loss function, often referred to as the generalisation gap, and the convergence of the network is still not fully understood. The overwhelming majority of implementations of NNs solvers rely on Monte Carlo (MC) integration because it is relatively easy to implement. However, it is known that its convergence rate with respect to the number of integration of points is far from optimal in low-dimensional settings. For example, the integration error decays as \(n^{-2}\) for the trapezoidal rule in dimension one, whereas it is of the order of \(n^{-1/2}\) for MC regardless of the dimension. Furthermore, local behaviours of the function to integrate are generally poorly handled by MC, because the points are sampled uniformly across the domain. A few recent works have considered more advanced integration methods for NN solvers, including Gaussian quadratures or quasi-MC integration. Some studies contemplate resampling the MC points during the training or adding more points where the strong, pointwise residual is larger. We refer the reader to [10] for a comprehensive review of sampling methods and pointwise residual-based resampling strategies applied to NNs. They show that the convergence of the model is significantly influenced by the choice of the numerical quadrature. 
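The gap between the rates quoted above is easy to reproduce numerically; a small, self-contained NumPy experiment (illustrative only, with a test integrand of our choosing) in dimension one:

```python
# Illustrative experiment: trapezoidal error ~ n^-2 versus Monte Carlo
# error ~ n^-1/2 for a smooth integrand on [0, 1].
import numpy as np

f = lambda x: np.exp(np.sin(2 * np.pi * x))   # smooth test integrand
# High-accuracy reference value via a very fine trapezoidal rule
xf = np.linspace(0.0, 1.0, 1_000_001)
ref = np.trapz(f(xf), xf)

rng = np.random.default_rng(0)
for n in (10, 100, 1000, 10000):
    x = np.linspace(0.0, 1.0, n + 1)
    err_trap = abs(np.trapz(f(x), x) - ref)
    # Average the MC error over repetitions to expose the n^-1/2 trend
    err_mc = np.mean([abs(f(rng.random(n)).mean() - ref) for _ in range(50)])
    print(f"n={n:6d}  trapezoid={err_trap:.2e}  monte-carlo={err_mc:.2e}")
```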
A theoretical decay rate of the generalisation gap when the training points are carefully sampled was shown in [11], while [12] proved the consistency of PINNs and related methods in the sample limit. Finally, the generalisation error was bounded with the sum of the training error and the numerical integration error in [13]. However, this result requires strong assumptions on the solution and the numerical quadratures, and the constants involved in the bound are not explicit and may not be under control. It was shown in [14] that "quadrature errors can destroy the quality of the approximated solution when solving PDEs using deep learning methods". The authors explored the use of a Continuous Piecewise Linear (CPWL) interpolation of the output of the network and observed better convergence compared to MC integration. Their experiments were performed in dimension one, and the networks were interpolated on fixed, uniform meshes. **Motivations and contributions.** Our work aligns with current research in the theory and practice of NNs applied to approximating PDEs. We seek to equip NNs with some of the tools of the FEM in terms of numerical integration, for low-dimensional (\(d\leq 3\)) problems arising from computational physics or engineering. The space of CPWL activation functions is of great interest for NNs owing to its structure of vector space and its stability by composition. This means that whenever a network is activated by a CPWL function, its realisation is CPWL too. Our idea is that provided that we can decompose the domain at hand into regions where the network is almost linear in each cell, we can integrate with high accuracy any integral that involves the network and its partial derivatives by relying on Gauss-like quadratures of a sufficiently high order. The linear and bilinear forms are decomposed on this mesh in a similar way as in the FEM, except that the underlying mesh is adaptive instead of fixed. We provide an algorithm to obtain such a decomposition of the domain based on the expression of the activation function and the parameters of the network. Nevertheless, as will be shown in Sect. 3, CPWL activations are not smooth enough for the optimisation problem of a NN approximating a PDE to be well-behaved. If the activation function is CPWL, it needs to be regularised so that the network can be properly trained. In this work, all the networks that we train are activated by smooth functions. However, the quadrature points and the mesh are obtained from the projection of this smooth network onto a CPWL space, when the activation function is replaced by a CPWL approximation of the smooth activation. We design a procedure to obtain a CPWL approximation of a smooth function on \(\mathbb{R}\) as a whole. Our extensive numerical experiments demonstrate that our proposed integration method enables the NN to converge faster and smoother, to lower generalisation errors compared to MC integration, and to be more robust to the initialisation of its parameters while keeping similar training times. **Outline.** We recall the general setting of PINN and its variants and introduce some notations related to the variational problem we consider in Sect. 2. In Sect. 3, after explaining why NNs activated by CPWL functions are not suitable to approximate PDEs, we suggest several smooth approximations of \(\mathrm{ReLU}\). In Sect. 
4, we introduce a class of activation functions that behave linearly at infinity, propose a method to approximate them by CPWL functions on \(\mathbb{R}\) as a whole, and prove its quadratic convergence rate. Then, in Sect. 5, we provide an algorithm to decompose the physical domain into convex regions where the NN is almost linear. We also describe how to create Gaussian quadratures for convex polygons and analyse the integration error of our method. We present our numerical experiments in Sect. 6. We solve the Poisson equation in dimensions one and two on the reference segment and square, but also on a more complex domain, to show that our adaptive quadrature can adapt to arbitrary polygonal domains. We discuss our findings and conclude our work in Sect. 7. In order to keep the article more readable, we decided to write the proofs of our propositions in Apdx. A and our algorithms in Apdx. B. ## 2. Preliminaries Consider the boundary condition problem \[\left\{\begin{array}{ll}\mathcal{D}u=f&\text{in }\Omega\\ \mathcal{B}u=g&\text{on }\Gamma.\end{array}\right., \tag{1}\] where \(\Omega\) is a domain in \(\mathbb{R}^{d}\) with boundary \(\Gamma\), \(\mathcal{D}\) is a domain differential operator, \(f:\Omega\to\mathbb{R}\) is a forcing term, \(\mathcal{B}\) is a trace/flux boundary operator and \(g:\Gamma\to\mathbb{R}\) the corresponding prescribed value. We approximate the solution \(u\) as the realisation of a NN of a given architecture. We introduce some notations and recall the key principles of the PINN framework in the rest of this section. ### Neural networks Our solution ansatz is a fully-connected, feed-forward NN, which is obtained as the composition of affine maps and nonlinear activation functions applied element-wise. In this work, we separate the structure of a NN from its realisation when the activation function and the values of the weights and biases are given. We describe the architecture of a network by a tuple \((n_{0},\ldots n_{L})\in\mathbb{N}^{(L+1)}\), where \(L\) is the number of affine maps it is composed of, and for \(1\leq k\leq L\) the number of neurons on layer \(k\) is \(n_{k}\). We take \(n_{0}=d\) and since we only consider scalar-valued PDEs, we have \(n_{L}=1\). For each layer number \(1\leq k\leq L\), we write \(\boldsymbol{\Theta}_{k}:\mathbb{R}^{n_{k-1}}\to\mathbb{R}^{n_{k}}\) the affine map at layer \(k\), defined by \(\boldsymbol{\Theta}_{k}\boldsymbol{x}=\boldsymbol{W}_{k}\boldsymbol{x}+ \boldsymbol{b}_{k}\) for some weight matrix \(\boldsymbol{W}_{k}\in\mathbb{R}^{n_{k}\times n_{k-1}}\) and bias vector \(\boldsymbol{b}_{k}\in\mathbb{R}^{n_{k}}\). We apply the activation function \(\rho:\mathbb{R}\to\mathbb{R}\) after every affine map except for the last one so as not to constrain the range of the model. The expression of the network is then \[\mathcal{N}(\boldsymbol{\Theta},\rho)=\boldsymbol{\Theta}_{L}\circ\rho\circ \boldsymbol{\Theta}_{L-1}\circ\ldots\circ\rho\circ\boldsymbol{\Theta}_{1},\] where \(\boldsymbol{\Theta}\) stands for the collection of trainable parameters of the network \(\boldsymbol{W}_{k}\) and \(\boldsymbol{b}_{k}\). Although the activation functions could be different at each layer or even trainable, we apply the same, fixed activation function everywhere. ### Abstract setting for the loss function The loss function serves as a distance metric between the solution and the model. We write the loss in terms of the network itself to simplify notations, although it is to be understood as a function of its trainable parameters. 
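For concreteness, the realisation \(\mathcal{N}(\boldsymbol{\Theta},\rho)\) of Subsect. 2.1 can be written in a few lines (an illustrative PyTorch sketch of ours; the layer sizes here match the two-hidden-layer networks used later in Sect. 6):

```python
# Illustrative sketch of N(Theta, rho): L affine maps, with the activation
# rho applied after every map except the last one.
import torch

def realise(thetas, rho, x):
    # thetas: list of (W_k, b_k) pairs, k = 1..L; x: batch of inputs in R^{n0}
    for k, (W, b) in enumerate(thetas):
        x = x @ W.T + b
        if k < len(thetas) - 1:   # no activation after the last affine map
            x = rho(x)
    return x

# Architecture (n0, n1, n2, n3) = (2, 10, 10, 1)
sizes = [(10, 2), (10, 10), (1, 10)]
thetas = [(torch.randn(m, n), torch.randn(m)) for m, n in sizes]
u = realise(thetas, torch.tanh, torch.rand(5, 2))   # 5 sample points in R^2
print(u.shape)  # torch.Size([5, 1])
```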
Throughout this paper, we write \(\left\langle u,v\right\rangle_{\Omega}=\int_{\Omega}uv\,\mathrm{d}\Omega\) and \(\left\langle u,v\right\rangle_{\Gamma}=\int_{\Gamma}uv\,\mathrm{d}\Gamma\) the classical inner products on \(L^{2}(\Omega)\) and \(L^{2}(\Gamma)\). We write \(\left\|\cdot\right\|_{\Omega}\) and \(\left\|\cdot\right\|_{\Gamma}\) the corresponding \(L^{2}\) norms. #### 2.2.1. Strong formulation According to the original PINN formulation [1], the network is trained to minimise the strong residual of Eq. (1). The minimisation is performed in \(L^{2}\), so we suppose that \(\mathcal{D}u\) and \(f\) are in \(L^{2}(\Omega)\), and \(\mathcal{B}u\) and \(g\) are in \(L^{2}(\Gamma)\), This ensures that the residual \[\mathcal{R}(u)\doteq\frac{1}{2}\left\|\mathcal{D}u-f\right\|_{\Omega}^{2}+ \frac{\beta}{2}\left\|\mathcal{B}u-g\right\|_{\Gamma}^{2},\] is well-posed, where \(\beta>0\) is a weighting term for the boundary conditions. #### 2.2.2. Weak formulation The FEM derives a weak form of Eq. (1) that we can write in the following terms: \[\left\{\begin{array}{ll}\text{Find }u\in U\text{ such that }\\ \forall v\in V,&a(u,v)=\ell(v).\end{array}\right.\] In this formulation, \(a:U\times V\to\mathbb{R}\) is a bilinear form, \(\ell:V\to\mathbb{R}\) is a linear form and \(U\) and \(V\) are two functional spaces on \(\Omega\) such that \(a(u,v)\) and \(\ell(v)\) are defined for all \(u\in U\) and \(v\in V\). When the bilinear form \(a\) is symmetric, coercive and continuous in \(U\times V\), and \(U=V\), this weak setting can be recast as the minimisation of the energy functional \[\mathcal{J}(u)\doteq\frac{1}{2}a(u,u)-\ell(u)\] for \(u\in U\). In our case, \(U\) is the manifold of NNs with a given architecture defined on \(\Omega\). One difference with the FEM setting is that it is difficult to strongly enforce essential boundary conditions on NN spaces. Among other approaches, it is common to multiply the network by a function that vanishes on the boundary and add a lifting of the boundary condition [15]. Instead, we follow the penalty method and modify the residual: the boundary terms that come from integration by parts do not cancel and we suppose that it is possible to alter the bilinear form \(a\) into \(a_{\beta}\) and linear form \(\ell\) into \(\ell_{\beta}\) such that \(a_{\beta}\) is still symmetric and continuous. To ensure the coercivity of the bilinear form and weakly enforce Dirichlet boundary conditions, we add the term \(\beta\left\langle u,v\right\rangle_{\Omega}\) in the bilinear form and \(\beta\left\langle g,v\right\rangle_{\Gamma}\) in the linear form. Provided that the penalty coefficient \(\beta\) is large enough, \(a_{\beta}\) can be made coercive [16, p. 196] and we consider the energy minimisation problem \[\text{Find }u\in U\text{ such that for all }v\in U,\mathcal{J}_{\beta}(u) \doteq\frac{1}{2}a_{\beta}(u,u)-\ell_{\beta}(u)\leq\mathcal{J}_{\beta}(v). \tag{2}\] We see that the strong problem is a special case of the weak formulation. Indeed, by expanding the inner products and rearranging the terms, we obtain \[\mathcal{R}(u)=\frac{1}{2}\left(\left\langle\mathcal{D}u,\mathcal{D}u \right\rangle_{\Omega}+\beta\left\langle\mathcal{B}u,\mathcal{B}u\right\rangle _{\Gamma}\right)-\left(\left\langle\mathcal{D}u,f\right\rangle_{\Omega}+\beta \left\langle\mathcal{B}u,g\right\rangle_{\Gamma}\right)+C,\] where \(C\) is a constant with respect to \(u\) that we can disregard in the context of optimisation. 
If \(\mathcal{B}\) is a Dirichlet operator, this expression corresponds to the penalty method. After considering the functional derivative of \(\mathcal{R}\) and enforcing it to be zero, we recognise the notations of the weak form of problem Eq. (1), where the bilinear and linear form are defined as \(a_{\beta}(u,v)\doteq\left\langle\mathcal{D}u,\mathcal{D}v\right\rangle_{\Omega }+\beta\left\langle\mathcal{B}u,\mathcal{B}v\right\rangle_{\Gamma}\) and \(\ell_{\beta}(v)\doteq\left\langle\mathcal{D}v,f\right\rangle_{\Omega}+\beta \left\langle\mathcal{B}v,g\right\rangle_{\Gamma}\). In the remainder of this work, we keep the notations of the weak form as in Eq. (2) and simply write \(\mathcal{J}(u)\) the energy to be minimised. ### Evaluation of the loss function and optimisation The bilinear and linear forms are sums of integrals over the domain \(\Omega\) and its boundary \(\Gamma\). In the general case, these integrals have to be approximated because no analytical expressions can be obtained, or they are numerically intractable. This is why most PINN implementations and their variants approximate the loss function with MC integration. The integration points can be resampled at a given frequency during training to prevent overfitting or to place more points where the pointwise residual is larger [10]. In this work, we are precisely designing a new integration method to replace MC. The trainable parameters of the network are randomly initialised and a gradient descent algorithm can be used to minimise \(\mathcal{J}\) with respect to \(\boldsymbol{\Theta}\). The partial derivatives of \(u\) with respect to the space variable \(\boldsymbol{x}\), and the gradient of the loss function with respect to the parameters \(\boldsymbol{\Theta}\) are made available through automatic differentiation. ## 3. Regularisation of neural networks It was shown in [5] that NNs activated by the \(\mathrm{ReLU}\) activation function can emulate first-order Lagrange elements, thus advocating the use of \(\mathrm{ReLU}\) networks to approximate PDEs. In this section, we show that the convergence of CPWL networks is slow and noisy whenever the energy functional \(\mathcal{J}\) involves a spatial derivative of \(u\). We then provide a reason for preferring smoother activations and suggest a few alternatives for the \(\mathrm{ReLU}\) function. ### Limitations of CPWL activation functions When the activation is CPWL, the NN \(u\) is continuous and differentiable, and its partial derivatives are piecewise constant and discontinuous. Even though derivatives of order two and higher will be evaluated as zero, from a theoretical point of view however, this means that only the first-order derivatives of the network make pointwise sense, while derivatives of higher order do not. Let us consider an illustrative example to clarify the kind of problems that arise when the energy functional involves a spatial derivative of \(u\). 
We consider the Poisson problem \(u^{\prime\prime}=f^{\prime\prime}\) in \(\Omega\) with \(u=f\) on \(\Gamma\), which we reframe into the following variational problem using the Nitsche method \[a(u,v) =\left\langle u^{\prime},v^{\prime}\right\rangle_{\Omega}-\left\langle u ^{\prime}\cdot n,v\right\rangle_{\Gamma}-\left\langle u,v^{\prime}\cdot n \right\rangle_{\Gamma}+\beta\left\langle u,v\right\rangle_{\Gamma}\] \[\ell(v) =\left\langle-f^{\prime\prime},v\right\rangle_{\Omega}-\left\langle f,v^{\prime}\cdot n\right\rangle_{\Gamma}+\beta\left\langle f,v\right\rangle_{ \Gamma}.\] Here \(n\) stands for the outward-pointing unit normal to \(\Gamma\). We consider the manufactured solution \(f=\mathrm{ReLU}\) on \(\Omega=[-1,+1]\). In this case \(f^{\prime\prime}=\delta_{0}\) is the Dirac delta centered at zero, which makes the first integral in \(\ell(v)\) equal to \(-v(0)\). We consider a one-layer network \(u(x)=f(x+c)\) to approximate the solution to this problem, and we wish to recover \(c=0\). The energy functional has the following expression: \[\mathcal{J}(c)=\frac{1}{2}\left\{\begin{array}{ll}0&\text{if }c<-1\\ \beta c^{2}+|c|-(\beta-1)&\text{if }-1\leq c\leq+1\\ 2\beta c^{2}-2(\beta-1)c&\text{if }c>+1\end{array}\right.\] We verify that \(\mathcal{J}\) shows a discontinuity at \(-1\). Besides, \(\mathcal{J}\) is decreasing on \([-1,0]\) and increasing on \([0,+1]\). We compute \(\mathcal{J}(0)=-\frac{1}{2}(\beta-1)\), which is negative whenever \(\beta>1\). This proves that \(\mathcal{J}\) has a global minimum at \(c=0\). However, the gradient of \(\mathcal{J}\) is discontinuous at \(0\), being equal to \(-1/2\) on the left and \(+1/2\) on the right. Because \(\mathrm{ReLU}\) is not differentiable at zero, any gradient-based optimiser will suffer from oscillations, implying slow and noisy convergence. This example only involved one unknown, but one would already have to choose a low learning rate and apply a decay on the learning rate to make the network converge to an acceptable solution. One can imagine that when the optimisation problem involves a whole network, these oscillations can interfere with and severely hinder convergence. Ensuring that the activation function is at least \(\mathcal{C}^{1}\) enables one to interchange the derivative and integral signs, and if it is \(\mathcal{C}^{2}\) then the gradient of the loss function is continuous. ### Regularisation of the activation function The regularity of a NN is entirely dictated by that of its activation function. In order to make the energy functional smoother while keeping the motivation of CPWL, our idea is to approximate the CPWL function by a smoother equivalent and replace every occurrence of the CPWL activation by its smoother counterpart in the network. Our approach is similar to the "canonical smoothings" of \(\mathrm{ReLU}\) networks introduced in [17]. We take \(\mathrm{ReLU}\) as an example and point out several families of smooth equivalents. In each case, we introduce the normalised variable \(\bar{x}=x/\gamma\), where \(\gamma\) is a constant that only depends on the choice of the family. * If we rewrite \(\mathrm{ReLU}(x)=\frac{1}{2}(x+|x|)\), we see that the lack of regularity comes from the absolute value being non-differentiable at zero. We can thus find a smooth equivalent of the absolute value and replace it in this expression. 
We set \(\gamma=2\varepsilon\) and define \(\rho_{\varepsilon}\) as \[\rho_{\varepsilon}(x)=\frac{1}{2}\left(x+\gamma\sqrt{1+\bar{x}^{2}}\right).\] * The first derivative of \(\mathrm{ReLU}\) is discontinuous at zero. We can replace it with any continuous sigmoid function and obtain a smoothing of \(\mathrm{ReLU}\) by integrating it. We set \(\gamma=\frac{2}{\ln 2}\varepsilon\) and define \(\rho_{\varepsilon}\) as \[\rho_{\varepsilon}(x)=\frac{1}{2}\left(x+\gamma\ln(2\cosh(\bar{x}))\right).\] * The second derivative of \(\mathrm{ReLU}\) is the Dirac delta at zero. We can draw from the well-established theory of mollifiers to obtain a smooth Dirac and integrate it twice to build a regularised version of \(\mathrm{ReLU}\). We set \(\gamma=2\sqrt{\pi}\varepsilon\) and define \(\rho_{\varepsilon}\) as \[\rho_{\varepsilon}(x)=\frac{1}{2}\left(x+x\,\mathrm{erf}(\bar{x})+\frac{ \gamma}{\sqrt{\pi}}\exp(-\bar{x}^{2})\right).\] The following properties hold for the three classes of smooth equivalents of \(\mathrm{ReLU}\) introduced above and all values of \(\varepsilon>0\). The function \(\rho_{\varepsilon}\) is \(\mathcal{C}^{\infty}(\mathbb{R})\), convex and monotonically increasing in \(\mathbb{R}\). The graph of \(\rho_{\varepsilon}\) lies above that of \(\mathrm{ReLU}\). Similarly to \(\mathrm{ReLU}\), \(\rho_{\varepsilon}\) satisfies the property \(\rho_{\varepsilon}(x)-\rho_{\varepsilon}(-x)=x\). Finally, the parameter \(\varepsilon\) controls the pointwise distance between \(\rho_{\varepsilon}\) and \(\mathrm{ReLU}\), in the sense that \(\|\rho_{\varepsilon}-\rho\|_{L^{\infty}(\mathbb{R})}=\rho_{\varepsilon}(0)=\varepsilon\). In particular, Lemma 2.5 in [17] applies to all the candidates above, and we refer to Proposition 2.6 in the same article for a bound of \(\|\mathcal{N}(\boldsymbol{\Theta},\mathrm{ReLU})-\mathcal{N}(\boldsymbol{ \Theta},\rho_{\varepsilon})\|_{L^{\infty}(\Omega)}\) in terms of \(\|\mathrm{ReLU}-\rho_{\varepsilon}\|_{L^{\infty}(\mathbb{R})}\). The computational cost of these functions must also be taken into account and for this reason we choose to use the family obtained by the first method. We write it \(\mathrm{ReLU}_{\varepsilon}\) in the rest of this work. Figure 1. Examples of regularising families for the \(\mathrm{ReLU}\) function and corresponding first and second derivatives. ## 4. Global approximation of an activation function by a CPWL function In this section, we suppose that we are given a smooth activation \(\rho\) and we consider the NN \(u=\mathcal{N}(\boldsymbol{\Theta},\rho)\). We draw inspiration from the FEM and we would like to construct a mesh of the domain such that the behaviour of the NN is close to linear on each cell of the mesh. We design a method that approximates the smooth activation function \(\rho\) by a CPWL function \(\pi[\rho]\) and bound the distance between the two functions. ### Assumptions on the activation function Since we have no prior bound on the parameters of the network, we need the approximation of \(\rho\) by \(\pi[\rho]\) to hold on \(\mathbb{R}\) as a whole. We impose that \(\pi[\rho]-\rho\) is square integrable on \(\mathbb{R}\), which implies that \(\rho\) has a linear behaviour at infinity. If \(\rho\) is twice differentiable on \(\mathbb{R}\), we know that \(\pi[\rho]-\rho\) behaves as \(x^{2}f^{\prime\prime}(x)\) at infinity. 
As a consequence, one way to enforce the integrability condition is to make sure that \(x^{2}f^{\prime\prime}(x)\) is of the order of \(|x|^{-\alpha}\) for a given \(\alpha>1/2\), which is equivalent to \(|x|^{\beta}f^{\prime\prime}(x)\) being bounded on \(\mathbb{R}\) for a given \(\beta>5/2\). As will be made clear in the proof of Lem. 1, we also need \(f\) to be analytical in the neighbourhood of the zeros of \(f^{\prime\prime}\). Altogether, we restrict our approach to functions that belong to the functional space \(\mathcal{A}\) defined as \[\mathcal{A}=\left\{\begin{array}{l}f\in\mathcal{C}^{2}(\mathbb{R}),|x|^{ \alpha}\,f^{\prime\prime}(x)\text{ is bounded for a given }\alpha>5/2,\\ f^{\prime\prime}(x)=0\implies f\text{ is analytical in a neighbourhood of }x\end{array}\right\}.\] We point out that for all \(f\in\mathcal{A}\), \(\pi[f]-f\in L^{p}(\mathbb{R})\) for all \(p\geq 2\) and \(f^{\prime\prime}\in L^{p}(\mathbb{R})\) for all \(p\geq 2/5\). ### Approximation space In our experiments, we observed that the way we approximate the activation function by a CPWL counterpart plays an important role in the performance of our adaptive quadrature. There are existing algorithms and heuristics to build \(\pi[\rho]\)[18] but we found that \(\pi[\rho]\) needs to satisfy extra properties for our method to be most efficient in practice. In particular, we noticed that it is preferable that the CPWL approximation of \(\rho\) lie below \(\rho\) in the regions where \(\rho\) is convex, and above \(\rho\) where \(\rho\) is concave. To that aim, we wish to decompose \(\mathbb{R}\) into intervals where \(\rho\) is either strictly convex, linear or strictly concave. We can then approximate \(\rho\) in each interval by connecting the tangents to \(\rho\) at free points. More formally, we define the partition induced by \(f\) as follows. **Definition 1** (Partition induced by a function).: _Let \(f\in\mathcal{A}\). We define the set \(Z_{0}(f)=\{I\subset\mathbb{R},f^{\prime\prime}_{|I}=0\}\) that contains the intervals (possibly isolated points) where \(f^{\prime\prime}\) is zero and \(Z(f)=\partial Z_{0}(f)\cup\{-\infty,+\infty\}\) the set of points where \(f^{\prime\prime}\) is zero, excluding intervals but including \(\pm\infty\). We write \(z(f)=|Z(f)|\) and \(\xi_{1}<\ldots<\xi_{z(f)}\) the ordered elements of \(Z(f)\). The partition induced by \(f\) is defined as \(P(f)=\{[\xi_{i},\xi_{i+1}],1\leq i\leq z(f)-1\}\), that is the collection of intervals whose ends are two consecutive elements of \(Z(f)\). The elements of \(P(f)\) are called compatible intervals for \(f\). We finally introduce \(1\leq\kappa(f)\leq z(f)-1\) the number of segments in \(P(f)\) where \(f\) is not linear._ By way of example, the second derivative of \(\mathrm{ReLU}_{\varepsilon}\) is always positive so \(Z(\mathrm{ReLU}_{\varepsilon})=\varnothing\) and \(P(\mathrm{ReLU}_{\varepsilon})=\{\mathbb{R}\}\). However, \(Z(\tanh)=\{0\}\) and \(P(\tanh)=\{\mathbb{R}_{-},\mathbb{R}_{+}\}\). For \(x\in\mathbb{R}\), we write \(T[f,x]\) the tangent to \(f\) at \(x\) and we extend this notation to \(x=\pm\infty\) by considering the asymptotic tangents to \(f\). From the construction of \(P(f)\), it is easy to see that for all \(I\in P(f)\) and \(x<y\in I\), the two tangents \(T[f,x]\) and \(T[f,y]\) intersect at a unique \(z\in|x,y[\). We write \(T[f,x,y]\) the CPWL function defined on \([x,y]\) as \(T[f,x]\) on \([x,z]\) and \(T[f,y]\) on \([z,y]\). We are now ready to define our approximation space. 
For \(n\geq z(f)\), we define the space \(\mathcal{A}_{n}(f)\) of CPWL functions that connect the tangents to \(f\) at \(Z(f)\) as well as at \(n-z(f)\) other free points that do not belong to \(Z(f)\). As an example, the function plotted in Fig. 2(a) belongs to \(\mathcal{A}_{7}(\tanh)\). There are four free points because \(z(\tanh)=3\). The square points are fixed and they are located at \(\pm\infty\) and zero. ### Convergence rate For \(f\in\mathcal{A}\), \(n\geq z(f)\) and \(p\geq 2\), we write \(\pi_{n,p}[f]\) for the best approximation of \(f\) in \(\mathcal{A}_{n}(f)\) in the \(L^{p}\) norm. In this paragraph, we bound the distance between \(f\) and \(\pi_{n,p}[f]\), and determine the convergence rate of this approximation. We start by finding an upper bound for the distance between \(f\) and \(T[f,x,y]\) for \(x<y\in I\), where \(I\) is a compatible interval for \(f\). **Lemma 1**.: _Let \(f\in\mathcal{A}\) and \(p\in[2,\infty[\). There exist constants \(A_{p}(f)\), \(A_{\infty}(f)\), \(B_{p}(f)\) and \(B_{\infty}(f)\) such that for every interval \(I\) compatible for \(f\) and \(J=[x,y]\subset I\), it holds_ \[\left\|f-T[f,x,y]\right\|_{L^{p}(J)}\leq A_{p}(f)\left\|f^{\prime\prime}\right\|_{L^{p/(2p+1)}(J)}, \left\|f-T[f,x,y]\right\|_{L^{\infty}(J)}\leq A_{\infty}(f)\left\|f^{\prime\prime}\right\|_{L^{1/2}(J)}, \tag{3}\] \[\left\|f^{\prime}-T[f,x,y]^{\prime}\right\|_{L^{p}(J)}\leq B_{p}(f)\left\|f^{\prime\prime}\right\|_{L^{p/(p+1)}(J)}, \left\|f^{\prime}-T[f,x,y]^{\prime}\right\|_{L^{\infty}(J)}\leq B_{\infty}(f)\left\|f^{\prime\prime}\right\|_{L^{1}(J)}. \tag{4}\] Thanks to this lemma, we can bound the distance between \(f\) and \(\pi_{n,p}[f]\) and prove a quadratic convergence for all \(p\). **Proposition 1** (Best approximation on \(\mathcal{A}_{n}\)).: _Let \(f\in\mathcal{A}\), \(n\geq z(f)\) and \(p\in[2,\infty[\). Using the constants from Lem. 1, it holds_ \[\left\|f-\pi_{n,p}[f]\right\|_{L^{p}(\mathbb{R})}\leq\frac{A_{p}(f)\kappa(f)^{(2p+1)/p}\left\|f^{\prime\prime}\right\|_{L^{p/(2p+1)}(\mathbb{R})}}{n^{2}}, \left\|f-\pi_{n,p}[f]\right\|_{L^{\infty}(\mathbb{R})}\leq\frac{A_{\infty}(f)\kappa(f)^{2}\left\|f^{\prime\prime}\right\|_{L^{1/2}(\mathbb{R})}}{n^{2}},\] \[\left\|f^{\prime}-\pi_{n,p}[f]^{\prime}\right\|_{L^{p}(\mathbb{R})}\leq\frac{B_{p}(f)\kappa(f)^{(p+1)/p}\left\|f^{\prime\prime}\right\|_{L^{p/(p+1)}(\mathbb{R})}}{n}, \left\|f^{\prime}-\pi_{n,p}[f]^{\prime}\right\|_{L^{\infty}(\mathbb{R})}\leq\frac{B_{\infty}(f)\kappa(f)\left\|f^{\prime\prime}\right\|_{L^{1}(\mathbb{R})}}{n}.\] ### Numerical implementation In practice, we choose \(p=2\) and implement the least-square approximation of \(\rho\) on \(\mathcal{A}_{n}(\rho)\). We write \(\pi_{n}\) instead of \(\pi_{n,2}\). Since we are searching for the best approximant within \(\mathcal{A}_{n}(\rho)\), the unknowns to be determined are the locations \((\xi_{i})_{1\leq i\leq n}\) of the tangents. We write the total \(L^{2}\) norm between \(\rho\) and \(\pi_{n}[\rho]\) in terms of \(\xi_{i}\) and seek to minimise this quantity via a gradient descent algorithm. We are interested in two activation functions: \(\mathrm{ReLU}_{\varepsilon}\) and \(\tanh\). Both functions are symmetric around the origin, which allows us to restrict the optimisation domain. We also verify that both functions belong to \(\mathcal{A}\). * The relationship \(\mathrm{ReLU}_{\varepsilon}(x)-\mathrm{ReLU}_{\varepsilon}(-x)=x\) holds for all \(x\in\mathbb{R}\).
We impose the same condition on \(\pi_{n}[\mathrm{ReLU}_{\varepsilon}]\) and verify that \(\mathrm{ReLU}_{\varepsilon}-\pi_{n}[\mathrm{ReLU}_{\varepsilon}]\) is even. Since \(\mathrm{ReLU}_{\varepsilon}\) takes bounded values on \(\mathbb{R}_{-}\), we approximate \(\mathrm{ReLU}_{\varepsilon}\) by \(\pi_{n}[\mathrm{ReLU}_{\varepsilon}]\) on \(\mathbb{R}_{-}\) only and recover \(\pi_{n}[\mathrm{ReLU}_{\varepsilon}]\) as a whole by symmetry. The second derivative of \(\mathrm{ReLU}_{\varepsilon}\) is \(x\mapsto\frac{1}{2\gamma}(1+\bar{x}^{2})^{-3/2}\). It follows a polynomial decay rate with \(\alpha=3>5/2\). We verify that the second derivative of \(\mathrm{ReLU}_{\varepsilon}\) is never zero. The asymptotic tangents to \(\mathrm{ReLU}_{\varepsilon}\) intersect, so we can approximate \(\mathrm{ReLU}_{\varepsilon}\) with \(2\) pieces and this approximation is \(\mathrm{ReLU}\). * The \(\tanh\) function is odd, so imposing that \(\pi_{n}[\tanh]\) is odd enforces that \(\tanh-\pi_{n}[\tanh]\) is even. We decide to perform the minimisation on \(\mathbb{R}_{+}\) and obtain an approximation of \(\tanh\) on \(\mathbb{R}\) as a whole by symmetry. The second derivative of \(\tanh\) is \(x\mapsto-2\tanh(x)(1-\tanh(x)^{2})\). This is asymptotically equivalent to \(-8\exp(-2x)\). In particular \(x\mapsto x^{3}\tanh^{\prime\prime}(x)\) is bounded on \(\mathbb{R}\). Moreover, \(\tanh^{\prime\prime}(x)=0\) if and only if \(x=0\), and \(\tanh\) is analytic in a neighbourhood of zero (its Taylor series converges within a radius of \(\pi/2\)). Since the asymptotic tangents to \(\tanh\) do not intersect, the minimum number of pieces to approximate \(\tanh\) on \(\mathbb{R}\) is three and the corresponding CPWL function is known as the hard \(\tanh\) activation. As an example, we plot \(\pi_{7}[\tanh]\) in Fig. 2(a). The convergence plot in \(L^{2}\) norm of our method applied to both functions is shown in Fig. 2(b), together with the theoretical convergence rate of \(-2\). We verify that the convergence is asymptotically quadratic. We notice that the approximation of \(\mathrm{ReLU}_{\varepsilon}\) shows a preasymptotic behaviour in which the effective convergence rate is closer to \(1.5\). Figure 2. Visualisation of \(\pi_{7}(\tanh)\) and convergence plot of our proposed method in \(L^{2}\) norm for \(\mathrm{ReLU}_{\varepsilon}\) and \(\tanh\). On (a), the square pink points are fixed, while the circle green points are chosen so as to minimise the \(L^{2}\) norm of \(\tanh-\pi_{7}[\tanh]\). ## 5. Adaptive meshes and quadratures for neural networks Let \(u=\mathcal{N}(\mathbf{\Theta},\rho)\) and \(\pi[u]=\mathcal{N}(\mathbf{\Theta},\pi[\rho])\) be two NNs sharing the same weights, activated by a smooth function and a CPWL approximation of it, obtained with the method explained above. In practice \(\pi[\rho]\) will be fixed, so we drop the subscript \(n\) in this section. We are interested in recovering a maximal decomposition of \(\Omega\) (resp. \(\Gamma\)) such that \(\pi[u]\) is linear in these regions. We say that this decomposition is a mesh of \(\Omega\) (resp. \(\Gamma\)) adapted to \(\pi[u]\). Since \(\pi[\rho]\) is fixed, this mesh depends only on \(\boldsymbol{\Theta}\). We introduce the notation \(\tau_{\Omega}(\boldsymbol{\Theta})\) and \(\tau_{\Gamma}(\boldsymbol{\Theta})\) to refer to these meshes, and \(\tau(\boldsymbol{\Theta})\) to refer to either of the two meshes. We equip each cell with a Gaussian quadrature to evaluate the loss function.
Bounds for the integration error of our Adaptive Quadrature (AQ) are provided at the end of this section. ### Convexity of the mesh We observe that each neuron corresponds to a scalar-valued linear map followed by the composition by \(\pi[\rho]\). The breakpoints of \(\pi[\rho]\) define hyperplanes in the input space of each neuron. Indeed, the composition of a linear map \(\boldsymbol{x}\to\boldsymbol{W}\boldsymbol{x}+\boldsymbol{b}\) by \(\pi[u]\) is CPWL and the boundaries of the regions where the composition is linear correspond to the equations \(\boldsymbol{W}\boldsymbol{x}+\boldsymbol{b}=\xi\), where \(\xi\) are the breakpoints of \(\pi[\rho]\). Intuitively, we can obtain the adapted meshes \(\tau(\boldsymbol{\Theta})\) by considering the hyperplanes attached to every neuron of the network. The cells of the mesh are the largest sets of points that lie within exactly one piece of \(\pi[\rho]\) at each neuron. This is why these regions are also called the "activation patterns" of a NN, as we can label them by the pieces they belong to at each neuron [19]. The implementation of an algorithm that, given the parameters \(\boldsymbol{\Theta}\) of a NN, the linearisation \(\pi[\rho]\) and the input space \(\Omega\), outputs \(\tau_{\Omega}(\boldsymbol{\Theta})\) has to be robust to a large variety of corner cases, as in practice the cells can be of extremely small measures, or take skewed shapes. Besides, we are concerned with the complexity of this algorithm as it is meant to be run at every iteration (or at a given frequency) during the training phase. Fortunately enough, the only cells in the mesh that may not be convex must be in contact with the boundary. **Lemma 2** (Convexity of the mesh).: _Let \(u:\mathbb{R}^{d}\to\mathbb{R}\) be a NN with weights \(\boldsymbol{\Theta}\) activated by a CPWL function and \(\Omega\subset\mathbb{R}^{d}\). If a cell in \(\tau_{\Omega}(\boldsymbol{\Theta})\) is not convex, it has to intersect with the boundary of \(\Omega\). In particular, if \(\Omega\) is convex, all the cells of \(\tau_{\Omega}(\boldsymbol{\Theta})\) are convex._ If \(\Omega\) is not convex, we decompose \(\Omega\) into a set of convex polytopes and construct an adapted mesh for each polytope independently. By Lem. 2 these submeshes are convex. The mesh composed of all the cells of the submeshes is adapted to \(u\) on \(\Omega\) and contains convex cells only. Thus without loss of generality, we suppose that \(\Omega\) is convex. Clipping a line by a polygon is made easier when the polygon is convex. Our method to build \(\tau_{\Omega}(\boldsymbol{\Theta})\) takes great advantage of the convexity of the mesh. Our algorithm is described in Alg. 1. ### Representation of a linear region Let \(u_{k}\) denote the composition of the first \(k\) layers of the network \(u\), and \(\tau^{k}(\boldsymbol{\Theta})\) be the mesh adapted to \(u_{k}\). We represent a linear region of \(\tau^{k}(\boldsymbol{\Theta})\) and its corresponding local expression by \((R,\boldsymbol{W}_{R},\boldsymbol{b}_{R})\), where \(R\in\tau^{k}(\boldsymbol{\Theta})\) is a cell of the mesh, \(\boldsymbol{W}_{R}\in\mathbb{R}^{n_{k}\times d}\) and \(\boldsymbol{b}_{R}\in\mathbb{R}^{n_{k}}\) are the coefficients of the restriction of \(\pi[u_{k}]\) to \(R\): \(\pi[u_{k}]_{|R}(\boldsymbol{x})=\boldsymbol{W}_{R}\boldsymbol{x}+\boldsymbol {b}_{R}\). 
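In dimension one, these hyperplanes reduce to points, and the splitting induced by a single hidden layer can be sketched in a few lines (an illustrative sketch of ours, not the paper's Alg. 1, which also handles clipping and intersections in higher dimensions):

```python
# Illustrative 1D sketch: the pre-images of the breakpoints xi of pi[rho]
# under each neuron's affine map w*x + b cut the domain into cells on which
# the CPWL network is linear.
import numpy as np

def cells_1d(W, b, breakpoints, omega=(-1.0, 1.0)):
    # W, b: weights/biases of the hidden layer, shape (n_neurons,)
    # breakpoints: breakpoints xi of the CPWL activation pi[rho]
    cuts = []
    for w, bias in zip(W, b):
        if abs(w) < 1e-14:
            continue                                 # constant neuron: no cut
        pre = (np.asarray(breakpoints) - bias) / w   # solve w*x + b = xi
        cuts.extend(pre[(pre > omega[0]) & (pre < omega[1])])
    nodes = np.sort(np.unique(np.concatenate([[omega[0]], cuts, [omega[1]]])))
    return list(zip(nodes[:-1], nodes[1:]))          # cells of tau_Omega

rng = np.random.default_rng(1)
W, b = rng.normal(size=5), rng.normal(size=5)        # 5 hidden neurons
print(cells_1d(W, b, breakpoints=[0.0]))             # pi[rho] = ReLU: xi = 0
```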
### Initialisation of the algorithm We initialise the algorithm with the region \((\Omega,\boldsymbol{I}_{d},\boldsymbol{0}_{d})\), where \(\boldsymbol{I}_{d}\) is the identity matrix of size \(d\) and \(\boldsymbol{0}_{d}\) is the zero vector of size \(d\). The mesh \(\{(\Omega,\boldsymbol{I}_{d},\boldsymbol{0}_{d})\}\) is adapted to \(u_{0}\). ### Composition by a linear map Suppose that \((R,\boldsymbol{W}_{R},\boldsymbol{b}_{R})\) is adapted to \(u_{k}\). The composition of \(\pi[u_{k}]_{|R}\) by the linear map \(\boldsymbol{\Theta}_{k+1}\) remains linear on \(R\), only the coefficients of the map are changed. They become \(\boldsymbol{W}_{k+1}\boldsymbol{W}_{R}\) and \(\boldsymbol{W}_{k+1}\boldsymbol{b}_{R}+\boldsymbol{b}_{k+1}\). ### Composition by an activation function Suppose that \((R,\boldsymbol{W}_{R},\boldsymbol{b}_{R})\) is adapted to \(u_{k}\). We compose \(\pi[u_{k}]_{|R}\) by \(\pi[\rho]\) componentwise. Let \(\boldsymbol{w}_{i}\) be the \(i\)-th row of \(\boldsymbol{W}_{R}\) and \(b_{i}\) be the \(i\)-th coordinate of \(\boldsymbol{b}_{R}\). We need to find the pre-images of the breakpoints of \(\pi[\rho]\), which amounts to considering the hyperplanes \(\boldsymbol{w}_{i}\boldsymbol{x}+b_{i}=\xi_{j}\), where \(\xi_{j}\) are the breakpoints of \(\pi[\rho]\). We explain our method in detail in Apdx. B.1. The process of mesh extraction in dimension two is illustrated in Fig. 3. In Fig. 3(a), the dashed line represents the orientation of the activation function. The plain lines depict two hyperplanes corresponding to two different breakpoints of \(\pi[\rho]\). Fig. 3(b) pictures the hyperplanes resulting from several neurons, thus oriented in different directions. In this example, there would be at least four neurons on the layer at hand. We notice that since the ranges of the neurons are different, not all neurons necessarily activate all the pieces of \(\pi[\rho]\), and thus they can give rise to different numbers of hyperplanes. The clipping operation is shown in Fig. 3(c), and the intersections of the hyperplanes against themselves are displayed in Fig. 3(d). In this example, the initial cell would be cut into \(13\) subcells. ### Gaussian quadratures for convex polygons We decompose the integrals in the linear and bilinear forms on the cells of \(\tau_{\Omega}(\mathbf{\Theta})\) and \(\tau_{\Gamma}(\mathbf{\Theta})\). In these cells, the terms that only depend on the network \(\pi[u]\) and its spatial derivatives are polynomials. As a consequence, the linear and bilinear forms involving \(\pi[u]\) can be computed exactly using Gaussian quadratures on segments in dimension one, and polygons in dimension two. In dimension one, the cells of the mesh are segments. Gaussian quadrature rules are known and made available through plenty of libraries. However, in dimension two, the cells of the mesh can be arbitrary convex polygons. To handle the general case, one approach could consist in splitting each convex cell into triangles and then applying a known Gaussian quadrature for triangles. This approach is the least economical in terms of the number of quadrature points. At the opposite end of the spectrum, we could rely on Gaussian quadratures of reference \(n\)-gons. Still, the order of the mapping from the reference \(n\)-gon to the \(n\)-gon at hand is \(n-2\), which makes the Jacobian of this mapping costly to evaluate. In this work, we strike a tradeoff and decompose each convex cell into a collection of triangles and convex quadrangles.
We use a recursive algorithm to obtain this decomposition, as explained in Apdx. B.2. We refer to [20] for numerically accurate symmetric quadrature rules for triangles and quadrangles. Figure 3. Example of a mesh extraction. (a) Parallel hyperplanes associated with different breakpoints. (b) Hyperplanes linked to a whole layer. (c) Clipping hyperplanes with region boundary. (d) Intersection of hyperplanes. ### Alternatives There exist computationally cheaper numerical integration methods based on evaluations of the integrands at the vertices of the mesh. Indeed, using Stokes theorem one can transform surface integrals on polygons into line integrals on their edges, and in turn, into pointwise evaluations at their vertices [21]. This approach requires knowing the local expression of \(\pi[u]\), that is the coefficients of the linear interpolation of \(u\) on each cell. We have conducted preliminary experiments using this approach but we have observed that in addition to being numerically unstable, the overall cost including the interpolation step is not lower. Indeed, finding the best-fitting plane that passes through given points involves the inversion of a \(3\times 3\) system on each cell. The coefficients of these matrices are the integrals over each cell of the polynomials \(x^{2}\), \(xy\), \(y^{2}\), \(x\), \(y\) and \(1\). In many cases, the cells take skewed shapes so these matrices can be extremely ill-conditioned. ### Analysis of the integration error Our proposed adaptive quadrature consists in approximating \(\mathcal{J}(u)\) by a quadrature on the cells of the mesh adapted to \(\pi[u]\). Conceptually, this is equivalent to choosing a suitable piecewise polynomial function defined on the mesh adapted to \(\pi[u]\), and approximating \(\mathcal{J}(u)\) by \(\mathcal{J}_{\text{num}}(v)\). Here, \(\mathcal{J}_{\text{num}}\) is a discrete version of \(\mathcal{J}\) in which the integrals have been replaced by numerical quadratures. We break down the numerical integration error into two parts as follows \[\left|\mathcal{J}(u)-\mathcal{J}_{\text{num}}(v)\right|\leq\left|\mathcal{J}(u)-\mathcal{J}(v)\right|+\left|\mathcal{J}(v)-\mathcal{J}_{\text{num}}(v)\right|.\] The second term is the numerical quadrature error and is commonplace in the field of the FEM. In particular, one can consider piecewise polynomial approximations of physical parameters, forcing term, and boundary conditions up to a given order and use a Gaussian quadrature that cancels the numerical quadrature error of the energy functional. The first term is the error incurred by the piecewise polynomial approximation of \(u\) by \(v\). Since we know that \(\mathcal{J}\) is a variational energy functional that comes from bounded linear and bilinear forms, we can bound the linearisation error in the following way \[\left|\mathcal{J}(u)-\mathcal{J}(v)\right|\leq\frac{1}{2}\left|a(u-v,u+v)\right|+\left|\ell(u-v)\right|\leq\frac{1}{2}C_{a}\left\|u-v\right\|_{a}\left\|u+v\right\|_{a}+C_{\ell}\left\|u-v\right\|_{\ell},\] where \(C_{a}\) and \(C_{\ell}\) are the constants that bound \(a\) and \(\ell\), and \(\left\lVert\cdot\right\rVert_{a}\) and \(\left\lVert\cdot\right\rVert_{\ell}\) are integral norms. For instance, when we consider the strong formulation of the Poisson problem, \(\left\lVert\cdot\right\rVert_{a}\) is an \(H^{2}\) norm and \(\left\lVert\cdot\right\rVert_{\ell}\) is an \(L^{2}\) norm.
In the case of a weak formulation, \(\left\lVert\cdot\right\rVert_{a}\) and \(\left\lVert\cdot\right\rVert_{\ell}\) are \(H^{1}\) norms. #### 5.8.1. Weak formulations In Prop. 1, we showed that the \(L^{\infty}\) norm of \(\rho-\pi_{n}[\rho]\) decays as \(n^{-2}\) and that of \(\rho^{\prime}-\pi_{n}[\rho]^{\prime}\) decays as \(n^{-1}\). We consider the two networks \(u=\mathcal{N}(\boldsymbol{\Theta},\rho)\) and \(\pi_{n}[u]=\mathcal{N}(\boldsymbol{\Theta},\pi_{n}[\rho])\), that only differ by their activation function. We show that we can bound \(u-\pi_{n}[u]\) in terms of \(\rho-\pi_{n}[\rho]\) in the following proposition. We define the \(L^{\infty}\) norm of a vector-valued function as the maximum \(L^{\infty}\) norm of each of its components. **Proposition 2** (Bounds for the difference of neural networks).: _Let \(\Omega\subset\mathbb{R}^{d}\) be a bounded domain, \(f\), \(g\) two continuous and almost everywhere differentiable functions such that \(f-g\), \(f^{\prime}\) and \(g^{\prime}\) are bounded on \(\mathbb{R}\). We further assume that \(f\) and \(f^{\prime}\) are Lipschitz continuous. Let \(u=\mathcal{N}(\boldsymbol{\Theta},f)\) and \(v=\mathcal{N}(\boldsymbol{\Theta},g)\) two NNs that only differ by their activation functions. Then there exists three constants \(C_{1}\), \(C_{2}\), \(C_{3}\) such that_ \[\left\lVert u-v\right\rVert_{L^{\infty}(\Omega)} \leq C_{1}\left\lVert f-g\right\rVert_{L^{\infty}(\mathbb{R})}, \tag{6}\] \[\left\lVert\nabla u-\nabla v\right\rVert_{L^{\infty}(\Omega)} \leq C_{2}\left\lVert f-g\right\rVert_{L^{\infty}(\mathbb{R})}+C _{3}\left\lVert f^{\prime}-g^{\prime}\right\rVert_{L^{\infty}(\mathbb{R})}. \tag{5}\] _The three constants depend on \(\boldsymbol{\Theta}\) and the Lipschitz constant of \(f\). Additionally \(C_{2}\) and \(C_{3}\) depend on \(\left\lVert f^{\prime}\right\rVert_{L^{\infty}(\mathbb{R})}\), \(\left\lVert g^{\prime}\right\rVert_{L^{\infty}(\mathbb{R})}\) and the Lipschitz constant of \(f^{\prime}\)._ The proposition above can be used to bound \(\left\lVert u-\pi[u]\right\rVert_{a}\) and \(\left\lVert u-\pi[u]\right\rVert_{\ell}\) whenever their integrands can be expressed as bivariate polynomials involving \(u\) and \(\nabla u\). Indeed, for all \(n\geq 1\), \(\rho\) and \(\pi_{n}[\rho]\) are continuous and almost everywhere differentiable and their derivatives are bounded. Furthermore Prop. 2 showed that \(\rho-\pi_{n}[\rho]\) and \(\rho^{\prime}-\pi_{n}[\rho]^{\prime}\) are bounded. Since \(\rho\in\mathcal{A}\), we infer that \(\rho^{\prime}\) and \(\rho^{\prime\prime}\) are bounded, which implies that \(\rho\) and \(\rho^{\prime}\) are Lipschitz continuous. Thus Prop. 2 can be applied and we have \[\left\lVert u-\pi_{n}[u]\right\rVert_{L^{p}(\Omega)} \leq\frac{C\left\lvert\Omega\right\rvert^{1/p}}{n^{2}}, \left\lVert\nabla u-\nabla\pi_{n}[u]\right\rVert_{L^{p}(\Omega)} \leq\frac{D\left\lvert\Omega\right\rvert^{1/p}}{n},\] where \(C=C_{1}A_{\infty}(\rho)\kappa(\rho)^{2}\left\lVert\rho^{\prime\prime}\right\rVert _{L^{1/2}(\mathbb{R})}\) and \(D=C_{2}A_{\infty}(\rho)\kappa(\rho)\left\lVert\rho^{\prime\prime}\right\rVert _{L^{1/2}(\mathbb{R})}+C_{3}B_{\infty}(\rho)\kappa(\rho)\left\lVert\rho^{ \prime\prime}\right\rVert_{L^{1}(\mathbb{R})}\). We used the fact that \(n\geq\kappa(\rho)\) to bound \(\kappa(\rho)/n\) in \(D\). We conclude that \(\left\lVert u-\pi_{n}[u]\right\rVert_{H^{1}(\Omega)}\) decays as \(n^{-1}\). #### 5.8.2. 
Strong formulations Since \(\pi_{n}[u]\) is only CPWL, the \(H^{2}\) norm of \(u-\pi_{n}[u]\) is not defined, which means that we cannot use the same approach as above to bound the integration error. Instead one could apply a similar method to that used in the FEM, where \(u\) is approximated by a piecewise polynomial function that can be integrated exactly. The integration error would involve the mesh size. ## 6. Numerical experiments We carry out thorough numerical experiments to validate our integration method. Choosing the best hyperparameters to train a NN usually requires trying several combinations and picking the one that leads to the lowest generalisation error. In addition to the usual learning rate and the number of epochs, we also need to tune the design of the loss function by specifying the penalty coefficient \(\beta\), and the integration method. In the case of MC integration, we select the number of points in the domain and on the boundary, as well as the resampling frequency. Our adaptive quadrature introduces a new set of hyperparameters, namely the number of pieces in the CPWL approximation of the activation function, the order of the quadrature in each cell as well as the remeshing frequency. In this work, we are also concerned with reducing the computational budget involved with the training of a NN to solve a PDE. For this reason, we consider a two-hidden layer feed-forward NN with \(10\) neurons on both layers. The number of trainable parameters is \(141\) and \(151\) in dimensions one and two respectively. The optimiser is chosen as ADAM and all the networks are trained for \(5000\) epochs. To compensate for the low number of training iterations, we use a slightly higher learning rate compared to common practice in the literature. We set it to \(10^{-2}\) or \(10^{-3}\), whereas it seems to usually lie between \(10^{-4}\) and \(10^{-3}\). We specify the learning rate for each experiment in the sections below. In our experiments, we compare models that have been trained with the same average number of integration points. In the case of a network trained with MC, the number of integration points is fixed during training, but this is not the case for the method we propose here. In practice, we observe that the number of points starts at an intermediate value, then slightly decreases, and finally rises above the initial value. We still take the average number of integration points across the whole training as a reference because it is the average number of times the model is evaluated at each epoch and is a good indicator of the cost of a numerical method. We compare AQ against MC integration to solve the Poisson equation in dimensions one and two, both in the strong and weak form using the Nitsche formulation. We report the relative \(L^{2}\) norm of the pointwise difference, defined as \[E(u,\hat{u})=\frac{\|\hat{u}-u\|_{L^{2}(\Omega)}}{\|\hat{u}\|_{L^{2}(\Omega)}}.\] The loss function corresponding to each problem is introduced in Subsect. 6.1 and the manufactured solutions we consider are presented in Subsect. 6.2. We give an outline of our experiments and short conclusions for each of them. * We conduct a set of exploratory experiments in Subsect. 6.3 to identify the best learning rate and sampling frequency, as well as to evaluate and compare the robustness of AQ and MC to the number of integration points. 
We find that, regardless of the integration method, the strong Poisson problem brings about higher levels of noise during the training phase than the weak Poisson formulation. In general, AQ can reach lower errors than MC and reduce this noise to a certain extent while using fewer integration points. This is especially true for the weak Poisson problem, where the convergence is noticeably faster and smoother with AQ.
* In Subsect. 6.4, we show that models trained with our proposed integration method are more robust to parameter initialisation compared to MC. Furthermore, AQ consistently leads to lower errors than MC even when it relies on fewer integration points.
* The next round of experiments in Subsect. 6.5 shows that it is possible to reduce the number of integration points by merging small regions with their neighbours.
* Finally, in Subsect. 6.6, we solve a Poisson equation on a slightly more complex domain. We find that for a similar number of integration points, AQ reduces the error of MC by \(70\%\).
We provide a summary of our experiments in Subsect. 6.7 and discuss limitations and possible extensions of our method in Subsect. 6.8. In all the following tables, \(N_{\Omega}\) and \(N_{\Gamma}\) stand for the number of integration points in the domain and on the boundary, respectively. The letters \(P\) and \(O\) denote the number of pieces in the CPWL approximation of the activation function, and the order of the quadrature in each cell of the mesh adapted to the network.
### Study cases
In the following experiments, we consider two simple domains: the segment \(\Omega_{1}=[-1,+1]\) and the square \(\Omega_{2}=[-1,+1]^{2}\). We weakly enforce Dirichlet boundary conditions on the whole boundary of the domain. We introduce the bilinear and linear forms corresponding to each problem below.
**Strong formulation:** Let \(f:\Omega\to\mathbb{R}\) be a function of class \(\mathcal{C}^{2}(\Omega)\). We want to solve the equation \(\Delta u=\Delta f\) with the Dirichlet boundary condition \(u_{|\Gamma}=f_{|\Gamma}\). The original PINN formulation corresponds to the strong form of the PDE and is associated with the loss \[J(u)=\frac{1}{2}\|\Delta u-\Delta f\|_{\Omega}^{2}+\frac{\beta}{2}\|u-f\|_{\Gamma}^{2},\] where \(\beta>0\) is a weight factor for the boundary condition. We transform the squared \(L^{2}\) norms into inner products and obtain the following bilinear and linear forms \[a(u,v)=\left\langle\Delta u,\Delta v\right\rangle_{\Omega}+\beta\left\langle u,v\right\rangle_{\Gamma},\qquad\ell(v)=\left\langle\Delta f,\Delta v\right\rangle_{\Omega}+\beta\left\langle f,v\right\rangle_{\Gamma}.\]
**Weak formulation:** We also solve a weak form of the Poisson equation and use the Nitsche method to obtain a symmetric bilinear form. In this setting, we only require \(f\) to be of class \(\mathcal{C}^{1}(\Omega)\). The bilinear and linear forms are as follows \[a(u,v)=\left\langle\nabla u,\nabla v\right\rangle_{\Omega}-\left\langle\nabla u\cdot n,v\right\rangle_{\Gamma}-\left\langle u,\nabla v\cdot n\right\rangle_{\Gamma}+\beta\left\langle u,v\right\rangle_{\Gamma},\qquad\ell(v)=\left\langle-\Delta f,v\right\rangle_{\Omega}-\left\langle f,\nabla v\cdot n\right\rangle_{\Gamma}+\beta\left\langle f,v\right\rangle_{\Gamma}.\] Here, \(n\) is the outward-pointing unit normal vector to \(\Gamma\) and \(\beta>0\) is a coefficient large enough so that the bilinear form is coercive.
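For concreteness, the weak (Nitsche) loss can be assembled from any set of quadrature nodes and weights, whether they come from MC sampling or from the adaptive quadrature. The following PyTorch code is our own minimal sketch, assuming training minimises the energy \(J(u)=\frac{1}{2}a(u,u)-\ell(u)\) associated with the symmetric coercive form; all names (`weak_nitsche_loss`, `lap_f`, ...) are ours, not the paper's.

```python
import torch

def weak_nitsche_loss(model, xq, wq, xb, wb, nb, f, lap_f, beta):
    """Energy J(u) = 1/2 a(u,u) - l(u) for the Nitsche form of the Poisson
    problem. xq, wq: in-domain quadrature nodes (N, d) and weights (N, 1);
    xb, wb, nb: boundary nodes, weights and outward unit normals."""
    xq = xq.clone().requires_grad_(True)
    u = model(xq)                                        # (N, 1)
    gu = torch.autograd.grad(u.sum(), xq, create_graph=True)[0]

    xb = xb.clone().requires_grad_(True)
    ub = model(xb)
    gub = torch.autograd.grad(ub.sum(), xb, create_graph=True)[0]
    dnu = (gub * nb).sum(dim=1, keepdim=True)            # normal derivative on Gamma

    # a(u,u) = <grad u, grad u>_Omega - 2 <grad u . n, u>_Gamma + beta <u, u>_Gamma
    a_uu = (wq * (gu ** 2).sum(dim=1, keepdim=True)).sum() \
         - 2.0 * (wb * dnu * ub).sum() + beta * (wb * ub ** 2).sum()
    # l(u) = <-lap f, u>_Omega - <f, grad u . n>_Gamma + beta <f, u>_Gamma
    l_u = (wq * (-lap_f(xq)) * u).sum() \
        - (wb * f(xb) * dnu).sum() + beta * (wb * f(xb) * ub).sum()
    return 0.5 * a_uu - l_u
```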
In all our experiments, the relative \(L^{2}\) norm is evaluated with a quadrature of order \(10\) on a \(100\)-cell uniform mesh in 1D and on a \(100\times 100\) uniform grid in 2D.
### Activation functions and forcing terms
We solve the two problems we have just described with different forcing terms \(f\). Throughout our experiments, all the problems that involve a given forcing term are solved using the same activation function. We report two groups of forcing terms and their corresponding activation in Tab. 1. The notation \(\mathrm{sinc}\) stands for the cardinal sine function \(x\mapsto\sin(x)/x\), defined on \(\mathbb{R}\backslash\{0\}\) and continuously extended at zero by setting \(\mathrm{sinc}(0)=1\).
### Relationship between integration method, learning rate and sampling frequency
In this initial experiment, we focus on solving the four one-dimensional problems (\(\mathrm{ReLU}_{\varepsilon}\)/\(\tanh\), weak/strong) so that exploring a large space of hyperparameters remains computationally affordable. All the networks are trained from the same seed, which means that they are initialised from the same set of weights and biases, and the MC integration points are sampled in the same way across all experiments. We study the connection between the learning rate, the frequency at which we resample the integration points and the integration method. We set the learning rate to \(10^{-2}\) or \(10^{-3}\) and resample the integration points every \(1\), \(10\), \(100\), \(500\), \(1000\) or \(5000\) epochs. Since the networks are trained for \(5000\) epochs, the last frequency corresponds to fixed integration points. We compare MC with \(50\) or \(100\) points against AQ with several choices of quadrature order and number of pieces in \(\pi[\rho]\). When the domain is one-dimensional, the boundary integrals reduce to evaluating the integrand at \(\Gamma_{1}=\{-1,+1\}\). We also solve the weak Poisson problem in dimension two to further assess the effect of the integration method when the sampling frequency is fixed.
#### 6.3.1. Strong Poisson in 1D
The relative \(L^{2}\) norm after training the network for the two one-dimensional strong Poisson problems is shown in Tab. 2 and Tab. 3. We first observe that the sampling frequency and the number of integration points have a large influence on the final error. It is unclear whether increasing the number of integration points helps improve performance, as both trends appear across the two activation functions and integration methods. Generally speaking, we note that AQ works best with high refresh rates (every \(100\) or fewer epochs), especially when the number of pieces and the order of the quadrature are low. However, it is almost always the case that AQ reaches similar or better accuracies than MC with fewer points. For instance, when the resampling rate is set to \(10\) epochs and the learning rate to \(10^{-2}\) for the \(\mathrm{ReLU}_{\varepsilon}\) activation function, a quadrature of order \(2\) with \(3\) pieces involves \(57\) points on average and leads to a final error of \(6.25\times 10^{-3}\), while all of the MC settings lead to higher errors. We notice that using a higher-order quadrature or more pieces in the CPWL approximation of the activation function reduces the error more consistently than increasing the number of MC points, even though it is not systematic in either case. To illustrate the learning phase, we plot the learning curve and final pointwise error in two different settings in Fig. 4 and Fig. 5.
We remark that the levels of noise in the two scenarios are very different. In the first case, the convergence with AQ is less perturbed by noise than that with MC, and the error seems to decay faster with AQ than with MC. In the second case, the training suffers from high levels of noise for all integration settings, and this is a representative example of most of the networks solving the strong Poisson problem. To summarise, the strong Poisson problem brings about high levels of noise during the training phase for both integration methods. Still, AQ can reach lower errors than MC and reduce this noise to a certain extent while using fewer integration points.
\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{\(\eta\)} & \multirow{2}{*}{\(N_{\Omega}\left(P,\,O\right)\)} & \multicolumn{6}{c}{\(\nu\)} \\ \cline{3-8} & & 1 & 10 & 100 & 500 & 1000 & 5000 \\ \hline
\multirow{8}{*}{\(10^{-2}\)} & \multirow{3}{*}{MC} & 50 & \(2.65\times 10^{-2}\) & \(3.50\times 10^{-2}\) & \(6.81\times 10^{-2}\) & \(4.01\times 10^{-1}\) & \(1.76\times 10^{-1}\) & \(5.79\times 10^{-2}\) \\
& & 100 & \(2.97\times 10^{-2}\) & \(7.99\times 10^{-2}\) & \(3.18\times 10^{-2}\) & \(1.48\times 10^{-2}\) & \(4.24\times 10^{-2}\) & \(2.51\times 10^{-1}\) \\ \cline{2-8}
& & 25 (2, 2) & \(1.11\times 10^{-1}\) & \(1.05\times 10^{-1}\) & \(3.00\times 10^{-2}\) & \(2.39\times 10^{-2}\) & \(4.47\times 10^{-2}\) & \(1.97\times 10^{-0}\) \\
& & 42 (2, 5) & \(5.13\times 10^{-2}\) & \(3.18\times 10^{-2}\) & \(9.43\times 10^{-2}\) & \(6.39\times 10^{-2}\) & \(1.52\times 10^{-2}\) & \(1.62\times 10^{-0}\) \\
& & 75 (2, 10) & \(1.84\times 10^{-3}\) & \(2.26\times 10^{-2}\) & \(1.22\times 10^{-2}\) & \(3.83\times 10^{-2}\) & \(7.25\times 10^{-3}\) & \(4.32\times 10^{-1}\) \\
& & 57 (3, 2) & \(2.55\times 10^{-2}\) & \(6.25\times 10^{-3}\) & \(1.70\times 10^{-1}\) & \(2.60\times 10^{-1}\) & \(2.46\times 10^{-1}\) & \(3.04\times 10^{-2}\) \\
& & 84 (3, 5) & \(1.08\times 10^{-2}\) & \(3.81\times 10^{-2}\) & \(1.32\times 10^{-2}\) & \(1.76\times 10^{-3}\) & \(3.76\times 10^{-2}\) & \(2.65\times 10^{-2}\) \\
& & 86 (5, 2) & \(7.56\times 10^{-2}\) & \(6.18\times 10^{-3}\) & \(6.66\times 10^{-3}\) & \(3.58\times 10^{-2}\) & \(1.04\times 10^{-2}\) & \(1.09\times 10^{-0}\) \\ \hline
\multirow{8}{*}{\(10^{-3}\)} & \multirow{3}{*}{MC} & 50 & \(4.67\times 10^{-2}\) & \(1.15\times 10^{-1}\) & \(1.10\times 10^{-1}\) & \(1.06\times 10^{-1}\) & \(4.40\times 10^{-1}\) & \(5.77\times 10^{-1}\) \\
& & 100 & \(4.29\times 10^{-2}\) & \(1.17\times 10^{-1}\) & \(4.03\times 10^{-2}\) & \(3.18\times 10^{-2}\) & \(6.50\times 10^{-3}\) & \(1.44\times 10^{-1}\) \\ \cline{2-8}
& & 28 (2, 2) & \(1.91\times 10^{-2}\) & \(4.64\times 10^{-2}\) & \(6.49\times 10^{-2}\) & \(4.35\times 10^{-2}\) & \(7.09\times 10^{-2}\) & \(2.41\times 10^{-0}\) \\
& & 44 (2, 5) & \(1.80\times 10^{-2}\) & \(6.67\times 10^{-3}\) & \(1.88\times 10^{-2}\) & \(2.49\times 10^{-2}\) & \(2.81\times 10^{-2}\) & \(1.85\times 10^{-0}\) \\
& AQ & 78 (2, 10) & \(1.01\times 10^{-2}\) & \(6.75\times 10^{-3}\) & \(2.23\times 10^{-2}\) & \(5.03\times 10^{-3}\) & \(5.74\times 10^{-3}\) & \(1.26\times 10^{-0}\) \\
& & 59 (3, 2) & \(3.87\times 10^{-2}\) & \(3.84\times 10^{-2}\) & \(1.32\times 10^{-2}\) & \(7.94\times 10^{-2}\) & \(1.71\times 10^{-1}\) & \(1.33\times 10^{-2}\) \\
& & 86 (3, 5) & \(1.40\times 10^{-2}\) & \(3.87\times 10^{-2}\) & \(1.61\times 10^{-2}\) & \(9.66\times 10^{-3}\) & \(6.76\times 10^{-3}\) & \(7.50\times 10^{-3}\) \\
& & 89 (5, 2) & \(6.32\times 10^{-3}\)
& \(1.63\times 10^{-2}\) & \(3.50\times 10^{-2}\) & \(6.45\times 10^{-3}\) & \(3.03\times 10^{-2}\) & \(1.38\times 10^{-2}\) \\ \hline \hline \end{tabular} \end{table}
Table 2. Comparison of the relative \(L^{2}\) norm for the strong Poisson problem with the \(\mathrm{ReLU}_{\varepsilon}\) activation, depending on learning rate, resampling frequency and integration method.
Figure 4. Learning curve (a) and pointwise error (b) for the strong Poisson problem with the \(\tanh\) activation.
\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline \multirow{2}{*}{\(\eta\)} & \multirow{2}{*}{\(N_{\Omega}\left(P,\,O\right)\)} & \multicolumn{6}{c}{\(\nu\)} \\ \cline{3-8} & & 1 & 10 & 100 & 500 & 1000 & 5000 \\ \hline
\multirow{8}{*}{\(10^{-2}\)} & \multirow{3}{*}{MC} & 50 & \(2.62\times 10^{-2}\) & \(2.81\times 10^{-2}\) & \(7.34\times 10^{-3}\) & \(5.55\times 10^{-2}\) & \(2.75\times 10^{-2}\) & \(4.61\times 10^{-1}\) \\
& & 100 & \(2.96\times 10^{-2}\) & \(1.17\times 10^{-2}\) & \(2.81\times 10^{-2}\) & \(1.72\times 10^{-3}\) & \(9.02\times 10^{-3}\) & \(4.71\times 10^{-2}\) \\ \cline{2-8}
& & 38 (3, 2) & \(2.86\times 10^{-2}\) & \(6.26\times 10^{-3}\) & \(5.95\times 10^{-2}\) & \(1.83\times 10^{-2}\) & \(4.11\times 10^{-2}\) & \(7.62\times 10^{-0}\) \\
& & 57 (3, 5) & \(3.13\times 10^{-3}\) & \(3.84\times 10^{-3}\) & \(2.65\times 10^{-3}\) & \(2.19\times 10^{-3}\) & \(1.37\times 10^{-3}\) & \(1.54\times 10^{-0}\) \\
& AQ & 118 (3, 10) & \(1.73\times 10^{-3}\) & \(1.54\times 10^{-4}\) & \(1.74\times 10^{-3}\) & \(1.41\times 10^{-2}\) & \(6.11\times 10^{-2}\) & \(2.92\times 10^{-0}\) \\
& & 76 (5, 2) & \(8.55\times 10^{-2}\) & \(2.06\times 10^{-3}\) \\ \hline \hline \end{tabular} \end{table}
Table 3. Comparison of the relative \(L^{2}\) norm for the strong Poisson problem with the \(\tanh\) activation, depending on learning rate, resampling frequency and integration method.
#### 6.3.2. Weak Poisson in 1D
The relative \(L^{2}\) norms for the two one-dimensional weak Poisson problems are reported in Tab. 4 and Tab. 5. We observe that the final error is not very sensitive to the sampling frequency when the network is trained with AQ. When relying on MC, the variance of the final error is much higher and the dependence between the variance and the number of integration points is unclear. When the sampling frequency is high enough (every \(100\) or fewer epochs), we observe that the number of integration points can very often be reduced while keeping the same levels of error. For instance, in the problem with the \(\tanh\) function with a learning rate of \(10^{-2}\) and a resampling frequency of \(10\), shifting from \(7\) pieces to \(5\) or \(3\) pieces while keeping order \(2\) does not deteriorate the performance of AQ. However, reducing the number of MC integration points increases the error in most cases. To further compare the training phase with AQ and MC, we plot the learning curve and pointwise error for the \(\tanh\) activation in Fig. 6 and for the \(\operatorname{ReLU}_{\varepsilon}\) activation in Fig. 7. In both cases, we remark that the convergence of the network is significantly smoother with AQ than with MC, even with few integration points. Moreover, it is clear that the \(L^{2}\) norm decreases in the case of AQ, whereas it seems to plateau with MC. Finally, it is interesting to note that the learning curves follow the same trend at the beginning of the training. After a few hundred epochs, the learning curves corresponding to AQ keep on decreasing while the ones corresponding to MC break away from this trend and start to suffer from high levels of noise. In conclusion, solving the weak Poisson problem with AQ is computationally cheaper and shows a considerably faster and smoother convergence than with MC.
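To make the \((P,O)\) settings of the tables above concrete: once the 1D domain is cut into cells on which the network with the \(P\)-piece activation is close to linear, a Gauss-Legendre rule is applied cell by cell. The following NumPy sketch of such a composite rule is our own illustration; reading the order \(O\) as the number of Gauss points per cell is an assumption on our part.

```python
import numpy as np

def composite_gauss(breakpoints, order):
    """`order`-point Gauss-Legendre nodes and weights on each cell
    [breakpoints[i], breakpoints[i+1]] of a 1D mesh."""
    xg, wg = np.polynomial.legendre.leggauss(order)   # reference rule on [-1, 1]
    a, b = breakpoints[:-1], breakpoints[1:]
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    nodes = (mid[:, None] + half[:, None] * xg).ravel()
    weights = (half[:, None] * wg).ravel()
    return nodes, weights

# e.g. a 3-cell mesh of [-1, 1] with an order-2 rule per cell: 6 points
nodes, weights = composite_gauss(np.array([-1.0, -0.3, 0.4, 1.0]), 2)
print(weights.sum())  # close to 2.0, the length of the domain
```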
#### 6.3.3. Weak Poisson in 2D
Based on the experiments in dimension one, we conclude that the integration points should be resampled approximately every \(10\) epochs and the learning rate should be set to \(10^{-2}\) to obtain optimal performance with both integration methods. We keep these hyperparameters in the rest of our experiments.
\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{\(\eta\)} & \multirow{2}{*}{\(N_{\Omega}\left(P,O\right)\)} & \multicolumn{6}{c}{\(\nu\)} \\ \cline{3-8} & & & \(1\) & \(10\) & \(100\) & \(500\) & \(1000\) & \(5000\) \\ \hline
\multirow{4}{*}{MC} & \(50\) & \(4.82\times 10^{-2}\) & \(1.69\times 10^{-1}\) & \(1.41\times 10^{-0}\) & \(5.08\times 10^{-0}\) & \(8.96\times 10^{-0}\) & \(1.82\times 10^{+3}\) \\
& \(100\) & \(4.29\times 10^{-1}\) & \(6.39\times 10^{-1}\) & \(1.68\times 10^{-0}\) & \(1.40\times 10^{-0}\) & \(2.11\times 10^{+1}\) & \(7.26\times 10^{+3}\) \\ \cline{2-8}
& \(24\) (\(2,\,2\)) & \(2.87\times 10^{-2}\) & \(2.05\times 10^{-1}\) & \(8.80\times 10^{-2}\) & \(1.14\times 10^{-0}\) & \(3.54\times 10^{+2}\) & \(4.43\times 10^{+5}\) \\
& \(43\) (\(2,\,5\)) & \(1.04\times 10^{-2}\) & \(1.00\times 10^{-2}\) & \(4.90\times 10^{-2}\) & \(1.81\times 10^{-0}\) & \(2.04\times 10^{+2}\) & \(5.28\times 10^{+5}\) \\
& AQ & \(88\) (\(2,\,10\)) & \(8.51\times 10^{-3}\) & \(8.24\times 10^{-3}\) & \(6.89\times 10^{-2}\) & \(2.05\times 10^{-1}\) & \(1.10\times 10^{-0}\) & \(3.11\times 10^{+3}\) \\
\(10^{-2}\) & \(61\) (\(3,\,2\)) & \(8.12\times 10^{-3}\) & \(1.37\times 10^{-2}\) & \(1.59\times 10^{-2}\) & \(5.13\times 10^{-2}\) & \(4.83\times 10^{-0}\) & \(5.49\times 10^{+3}\) \\
& \(90\) (\(3,\,5\)) & \(8.74\times 10^{-3}\) & \(1.03\times 10^{-2}\) & \(1.02\times 10^{-2}\) & \(8.71\times 10^{-3}\) & \(7.79\times 10^{-3}\) & \(3.47\times 10^{+2}\) \\
& \(91\) (\(5,\,2\)) & \(9.67\times 10^{-3}\) & \(1.30\times 10^{-2}\) & \(6.35\times 10^{-3}\) & \(7.78\times 10^{-3}\) & \(1.49\times 10^{-2}\) & \(9.35\times 10^{+1}\) \\ \hline
\multirow{4}{*}{MC} & \(50\) & \(1.91\times 10^{-1}\) & \(1.70\times 10^{-1}\) & \(9.88\times 10^{-1}\) & \(4.93\times 10^{-0}\) & \(4.97\times 10^{-0}\) & \(1.00\times 10^{+1}\) \\
& MC & \(100\) & \(2.73\times 10^{-3}\) & \(2.32\times 10^{-1}\) & \(1.37\times 10^{-0}\) & \(8.35\times 10^{-1}\) & \(6.46\times 10^{-1}\) & \(6.55\times 10^{+1}\) \\ \cline{2-8}
& \(26\) (\(2,\,2\)) & \(5.67\times 10^{-2}\) & \(3.50\times 10^{-2}\) & \(5.15\times 10^{-2}\) & \(1.26\times 10^{-0}\) & \(3.02\times 10^{-0}\) & \(2.32\times 10^{+3}\) \\ \cline{2-8}
& \(46\) (\(2,\,5\)) & \(8.66\times 10^{-3}\) & \(7.24\times 10^{-3}\) & \(1.98\times 10^{-2}\) & \(4.13\times 10^{-3}\) & \(1.70\times 10^{-1}\) & \(1.78\times 10^{+3}\) \\ \cline{2-8}
& \(93\) (\(2,\,10\)) & \(5.87\times 10^{-3}\) & \(1.04\times 10^{-2}\) & \(5.71\times 10^{-3}\) & \(5.57\times 10^{-3}\) & \(5.28\times 10^{+2}\) \\
\(10^{-3}\) & \(66\) (\(3,\,2\)) & \(2.30\times 10^{-2}\) & \(7.99\times 10^{-3}\) & \(1.42\times 10^{-2}\) & \(1.19\times 10^{-2}\) & \(3.26\times 10^{-2}\) & \(3.90\times 10^{-0}\) \\
& \(96\) (\(3,\,5\)) & \(1.97\times 10^{-2}\) & \(1.79\times 10^{-0}\) & \(3.78\times 10^{-2}\) & \(5.46\times 10^{-3}\) & \(5.53\times 10^{-3}\) & \(6.42\times 10^{-3}\) \\
& \(90\) (\(5,\,2\)) & \(1.78\times 10^{-2}\) & \(6.80\times 10^{-3}\) & \(6.61\times 10^{-3}\) & \(1.16\times 10^{-2}\) & \(9.72\times 10^{-3}\) & \(1.83\times 10^{-0}\) \\ \hline \hline \end{tabular} \end{table} Table 4.
Comparison of the relative \(L^{2}\) norm for the weak Poisson problem with the \(\operatorname{ReLU}_{\varepsilon}\) activation, depending on learning rate, resampling frequency and integration method. Figure 5. Learning curve (a) and pointwise error (b) for the strong Poisson problem with the \(\operatorname{ReLU}_{\varepsilon}\) activation. The learning rate is \(\eta=10^{-2}\) and the points are resampled every \(10\) epochs. \begin{table} \begin{tabular}{c c c c c c c c} \hline \hline \multirow{2}{*}{\(\eta\)} & \multirow{2}{*}{\(N_{\Omega}\left(P,O\right)\)} & \multicolumn{6}{c}{\(\nu\)} \\ \cline{3-8} & & \multicolumn{2}{c}{1} & \multicolumn{2}{c}{10} & \multicolumn{2}{c}{100} & \multicolumn{2}{c}{500} & \multicolumn{2}{c}{1000} & \multicolumn{2}{c}{5000} \\ \hline \multirow{4}{*}{MC} & \(50\) & \(7.67\times 10^{-2}\) & \(4.99\times 10^{-1}\) & \(4.64\times 10^{-1}\) & \(2.86\times 10^{-0}\) & \(8.05\times 10^{-0}\) & \(3.45\times 10^{+2}\) \\ & \(100\) & \(1.75\times 10^{-1}\) & \(4.92\times 10^{-1}\) & \(9.31\times 10^{-1}\) & \(1.55\times 10^{-0}\) & \(1.64\times 10^{-0}\) & \(4.78\times 10^{-0}\) \\ \cline{3-8} & \(40\,(3,2)\) & \(3.87\times 10^{-2}\) & \(1.49\times 10^{-2}\) & \(1.14\times 10^{-1}\) & \(1.34\times 10^{-0}\) & \(1.19\times 10^{+1}\) & \(4.21\times 10^{+2}\) \\ & \(46\,(3,5)\) & \(8.40\times 10^{-1}\) & \(3.06\times 10^{-2}\) & \(3.47\times 10^{-2}\) & \(2.19\times 10^{-2}\) & \(1.77\times 10^{-0}\) & \(6.64\times 10^{+1}\) \\ & AQ & \(110\,(3,10)\) & \(7.89\times 10^{-3}\) & \(4.56\times 10^{-3}\) & \(8.17\times 10^{-3}\) & \(5.09\times 10^{-2}\) & \(1.45\times 10^{-0}\) & \(3.92\times 10^{+2}\) \\ \(10^{-2}\) & \(84\,(5,2)\) & \(7.41\times 10^{-3}\) & \(1.36\times 10^{-2}\) & \(3.41\times 10^{-3}\) & \(4.18\times 10^{-2}\) & \(4.27\times 10^{-1}\) & \(4.34\times 10^{+2}\) \\ & \(108\,(5,5)\) & \(3.73\times 10^{-3}\) & \(4.77\times 10^{-3}\) & \(5.53\times 10^{-3}\) & \(9.31\times 10^{-3}\) & \(5.17\times 10^{-1}\) & \(3.05\times 10^{+2}\) \\ & \(98\,(7,2)\) & \(3.75\times 10^{-3}\) & \(9.94\times 10^{-3}\) & \(3.71\times 10^{-3}\) & \(5.95\times 10^{-2}\) & \(1.67\times 10^{-1}\) & \(1.66\times 10^{+2}\) \\ \hline \multirow{4}{*}{MC} & \(50\) & \(2.52\times 10^{-1}\) & \(2.54\times 10^{-1}\) & \(9.53\times 10^{-1}\) & \(2.76\times 10^{-0}\) & \(3.42\times 10^{-0}\) & \(1.47\times 10^{+1}\) \\ & MC & \(100\) & \(3.16\times 10^{-1}\) & \(3.77\times 10^{-1}\) & \(1.34\times 10^{-0}\) & \(2.56\times 10^{-0}\) & \(1.77\times 10^{-0}\) & \(1.87\times 10^{-0}\) \\ \cline{3-8} & \(21\,(3,2)\) & \(1.59\times 10^{-1}\) & \(1.59\times 10^{-0}\) & \(1.21\times 10^{-1}\) & \(5.16\times 10^{-1}\) & \(5.55\times 10^{-0}\) & \(6.10\times 10^{+1}\) \\ & \(26\,(3,5)\) & \(5.44\times 10^{-1}\) & \(1.66\times 10^{-0}\) & \(3.47\times 10^{-1}\) & \(3.43\times 10^{-2}\) & \(1.54\times 10^{-1}\) & \(1.86\times 10^{-0}\) \\ AQ & \(57\,(3,10)\) & \(6.52\times 10^{-1}\) & \(6.20\times 10^{-1}\) & \(5.57\times 10^{-2}\) & \(4.92\times 10^{-2}\) & \(5.12\times 10^{-2}\) & \(3.86\times 10^{+1}\) \\ & \(50\,(5,2)\) & \(5.41\times 10^{-2}\) & \(3.90\times 10^{-2}\) & \(7.87\times 10^{-2}\) & \(5.77\times 10^{-2}\) & \(2.47\times 10^{-2}\) & \(5.02\times 10^{+1}\) \\ & \(71\,(5,5)\) & \(2.32\times 10^{-2}\) & \(2.31\times 10^{-2}\) & \(4.37\times 10^{-2}\) & \(6.62\times 10^{-2}\) & \(5.95\times 10^{-2}\) & \(2.18\times 10^{+1}\) \\ & \(41\,(7,2)\) & \(2.35\times 10^{-1}\) & \(1.81\times 10^{-1}\) & \(2.24\times 10^{-1}\) & \(1.08\times 10^{-1}\) & \(2.78\times 10^{-1}\) & \(1.50\times 
10^{-0}\) \\ \hline \hline \end{tabular} \end{table}
Table 5. Comparison of the relative \(L^{2}\) norm for the weak Poisson problem with the \(\tanh\) activation, depending on learning rate, resampling frequency and integration method.
Figure 6. Learning curve (a) and pointwise error (b) for the weak Poisson problem with the \(\tanh\) activation. The learning rate is \(\eta=10^{-2}\) and the points are resampled every \(10\) epochs.
Figure 7. Learning curve (a) and pointwise error (b) for the weak Poisson problem with the \(\operatorname{ReLU}_{\varepsilon}\) activation. The learning rate is set to \(10^{-2}\) and the points are resampled every \(10\) epochs.
We find that AQ can reach similar or lower errors with less than half the number of points MC needs. With the \(\mathrm{ReLU}_{\varepsilon}\) activation, AQ with fewer than \(1000\) points and MC with \(10000\) points reach comparable performances. In the case of the \(\tanh\) activation, AQ with around \(1660\) points wins over MC with \(10000\) points. However, we notice that increasing the order of the quadrature or the number of pieces does not significantly reduce the relative \(L^{2}\) norm.
Table 6. Relative \(L^{2}\) norm for the weak Poisson problems in 2D with the \(\mathrm{ReLU}_{\varepsilon}\) (a) and \(\tanh\) (b) activations for various integration hyperparameters.
Fig. 8 and Fig. 9 display the pointwise error between the ground truth and the solutions obtained with MC and AQ. In the case of \(\tanh\), we remark that MC struggles to model the transition between the high and low regions. Similarly for \(\mathrm{ReLU}_{\varepsilon}\), MC does not properly handle the region around the origin, where the solution shows larger variations. In both cases, the pointwise error for the model trained with AQ is much more homogeneous across the whole domain.
Figure 8. Comparison of the pointwise error for the weak Poisson problem with the \(\tanh\) activation. The learning rate is \(\eta=10^{-2}\) and the points are resampled every \(10\) epochs. MC is trained with \(5000\) points and AQ with \(5\) pieces and order \(3\) (\(2964\) points).
Figure 9. Comparison of the pointwise error for the weak Poisson problem with the \(\operatorname{ReLU}_{\varepsilon}\) activation. The learning rate is set to \(10^{-2}\) and the points are resampled every \(10\) epochs. MC is trained with \(5000\) points and AQ with \(2\) pieces and order \(5\) (\(2681\) points).
### Robustness to initialisation
In this second round of experiments, we study the robustness of our integration method to the initialisation of the network. We solve the weak Poisson problems in dimensions one and two. The learning rate is set to \(10^{-2}\) and we resample the integration points every \(10\) epochs. We report the minimum, maximum, average and standard deviation of the final \(L^{2}\) norm over \(10\) random initialisations, as well as the average training time, in Tab. 7(a) and Tab. 7(b) for the \(\mathrm{ReLU}_{\varepsilon}\) and \(\tanh\) activations respectively. We use the same seeds for MC and AQ in both experiments. The number of integration points for MC is chosen so that it is larger than the maximum number of points for the same experiments run with AQ. The exact choices of quadrature order, number of pieces and integration points are reported in the corresponding tables. We find that AQ has a consistently lower average \(L^{2}\) norm than MC. By comparing the standard deviations of the error, we infer that the solutions obtained by AQ are less dependent on the network initialisation than those obtained by MC. Besides, we observe that the distributions of the errors obtained with AQ and MC do not overlap: the maximum error with AQ is lower than the minimum error with MC.
More importantly, we find that AQ enables a reduction of the number of integration points while keeping similar or shorter training times. We conclude that our proposed integration method is more robust to initialisation than MC and that it obtains higher accuracies than MC with fewer integration points.
Table 7. Distribution of the relative \(L^{2}\) norm for the weak Poisson problem with the \(\mathrm{ReLU}_{\varepsilon}\) (a) and \(\tanh\) (b) activations on \(10\) random initialisations. The learning rate is set to \(10^{-2}\) and the points are resampled every \(10\) epochs.
### Reduction of the number of integration points
In this set of experiments, we want to show that it is possible to further reduce the number of integration points generated by AQ by merging small regions with their neighbours, without sacrificing the accuracy of our method. We keep the same hyperparameters as in the previous experiments and focus on the two-dimensional problems. Every time the mesh is updated, we compute the median of the cell sizes. We then identify the cells that are smaller than a given fraction of this median and merge them one by one with their neighbours. We always merge a cell with the largest of its neighbours. In dimension two, after all the cells are merged in this way, the mesh needs an extra post-processing step in order to account for non-convex cells. Indeed, the union of two convex polygons may be non-convex. We split non-convex aggregated cells with the ear clipping algorithm [22], and apply our splitting strategy to all resulting cells to obtain triangles and convex quadrangles. We experiment with several merging thresholds and report our findings in Tab. 8. In dimension one, we find that merging regions up to \(50\%\) of the median region size does not significantly affect the error while allowing for a sizeable reduction of the number of integration points. In the two-dimensional case, raising the merging threshold above \(25\%\) harms the performance of the \(\mathrm{ReLU}_{\varepsilon}\) network, but not that of the \(\tanh\) network, as the number of integration points does not decrease. It appears that increasing the merging threshold does not always reduce the number of integration points. Indeed, the aggregated regions may become less and less convex, so they need to be split into several convex regions. We illustrate the kind of meshes that a network generates and how they are merged in Fig. 10. These meshes are obtained from a trained \(\tanh\) network solving the weak Poisson problem. We found that the meshes from the previous round of experiments were easier to interpret, so we selected one of the \(10\) corresponding models. The boundary of the circular region can be easily identified. The lines that connect two sides of the square domain come from the first layer, whereas the other lines come from the second layer. We observe that non-convex regions appear when small regions are merged.
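The merging rule above is straightforward in one dimension, where cells are intervals and no convexity post-processing is needed. The following is our own minimal sketch of that 1D case; the function name and the threshold handling are ours.

```python
import numpy as np

def merge_small_cells(breakpoints, threshold):
    """Merge every cell shorter than `threshold` times the median of the
    cell sizes with the larger of its two neighbours (1D mesh)."""
    pts = list(breakpoints)
    cut = threshold * np.median(np.diff(breakpoints))
    j = 0
    while j < len(pts) - 1 and len(pts) > 2:
        if pts[j + 1] - pts[j] >= cut:
            j += 1
            continue
        left = pts[j] - pts[j - 1] if j > 0 else -1.0
        right = pts[j + 2] - pts[j + 1] if j + 2 < len(pts) else -1.0
        # drop the breakpoint shared with the larger neighbour
        del pts[j if left >= right else j + 1]
    return np.asarray(pts)

mesh = np.array([0.0, 0.02, 0.3, 0.5, 0.52, 1.0])
print(merge_small_cells(mesh, 0.5))   # the two tiny cells are absorbed
```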
Figure 10. Final mesh, number of cells and number of integration points at several region merging thresholds for the weak Poisson problem and the \(\tanh\) activation. The \(\tanh\) activation is cut into \(5\) pieces and the quadrature is of order \(2\). The merging thresholds are \(0\%\) (a) and \(25\%\) (b).
\begin{table}
\begin{tabular}{c c c c} \hline \hline
Dim. & & (\(N_{\Omega}\), \(N_{\Gamma}\)) (Threshold) & \(L^{2}\) norm \\ \hline
\multirow{5}{*}{1} & \multirow{2}{*}{MC} & (200, 2) & \(6.39\times 10^{-1}\) \\
& & (100, 2) & \(4.55\times 10^{-1}\) \\ \cline{3-4}
& \multirow{3}{*}{AQ} & (290, 2) (\(0\%\)) & \(8.25\times 10^{-3}\) \\
& & (198, 2) (\(50\%\)) & \(7.82\times 10^{-3}\) \\
& & (144, 2) (\(100\%\)) & \(4.41\times 10^{-2}\) \\ \hline
\multirow{6}{*}{2} & \multirow{2}{*}{MC} & (5000, 500) & \(1.43\times 10^{-1}\) \\
& & (1000, 100) & \(2.35\times 10^{-1}\) \\ \cline{3-4}
& \multirow{4}{*}{AQ} & (4041, 158) (\(0\%\)) & \(6.40\times 10^{-2}\) \\
& & (3562, 148) (\(10\%\)) & \(7.27\times 10^{-2}\) \\
& & (3217, 132) (\(25\%\)) & \(2.78\times 10^{-2}\) \\
& & (2809, 115) (\(50\%\)) & \(1.41\times 10^{-1}\) \\ \hline \hline
\end{tabular}
\begin{tabular}{c c c c} \hline \hline
Dim. & & (\(N_{\Omega}\), \(N_{\Gamma}\)) (Threshold) & \(L^{2}\) norm \\ \hline
\multirow{5}{*}{1} & \multirow{2}{*}{MC} & (200, 2) & \(2.34\times 10^{-1}\) \\
& & (100, 2) & \(4.92\times 10^{-1}\) \\ \cline{3-4}
& \multirow{3}{*}{AQ} & (138, 2) (\(0\%\)) & \(3.08\times 10^{-3}\) \\
& & (94, 2) (\(50\%\)) & \(4.18\times 10^{-3}\) \\
& & (85, 2) (\(100\%\)) & \(2.72\times 10^{-2}\) \\ \hline
\multirow{6}{*}{2} & \multirow{2}{*}{MC} & (10000, 1000) & \(6.61\times 10^{-2}\) \\
& & (5000, 500) & \(8.51\times 10^{-2}\) \\ \cline{3-4}
& \multirow{4}{*}{AQ} & (7798, 259) (\(0\%\)) & \(5.02\times 10^{-2}\) \\
& & (6895, 218) (\(10\%\)) & \(3.96\times 10^{-2}\) \\
& & (8336, 244) (\(25\%\)) & \(5.26\times 10^{-2}\) \\
& & (9088, 204) (\(50\%\)) & \(7.19\times 10^{-2}\) \\ \hline \hline
\end{tabular}
\end{table}
Table 8. Relative \(L^{2}\) norm at several merging thresholds for the weak Poisson problems with the \(\mathrm{ReLU}_{\varepsilon}\) (a) and \(\tanh\) (b) activations.
### More complex domains
We complete our numerical experiments by solving a weak Poisson equation on a more complex two-dimensional domain, defined as the union of two rhombi. We solve the following Poisson equation \[\left\{\begin{array}{rl}-\Delta u=x+y&\text{in }\Omega\\ u=0&\text{on }\Gamma\end{array}\right.. \tag{7}\] We obtain an approximation of the solution with the FEM. We rely on the following weak formulation for the FEM solution: \[\left\{\begin{array}{l}\text{Find }u\in H_{0}^{1}(\Omega)\text{ such that for all }v\in H_{0}^{1}(\Omega),\\ \int_{\Omega}\nabla u\cdot\nabla v\,\mathrm{d}\Omega=\int_{\Omega}(x+y)v\,\mathrm{d}\Omega\end{array}\right.,\] where \(H_{0}^{1}(\Omega)\) is the subset of functions in \(H^{1}(\Omega)\) that vanish on \(\Gamma\). We then obtain a mesh composed of \(3894\) cells from GMSH and solve this FEM problem with the Gridap software [23]. We rely on first-order Lagrange finite elements, and the total number of degrees of freedom is \(1852\). We use the FEM solution as the reference to compute the \(L^{2}\) norm for the solution obtained by a NN. The relative \(L^{2}\) norm is computed with a quadrature of order \(10\) on the cells of the mesh of the FEM solution. We then approximate this equation with NNs using the same penalised weak form that we considered in the previous experiments. We increase the network architecture to \((2,20,20,1)\) and the activation function is \(\mathrm{ReLU}_{\varepsilon}\). We train our network for \(10000\) iterations using the ADAM optimiser with a learning rate of \(10^{-2}\). We cut the \(\mathrm{ReLU}_{\varepsilon}\) activation into \(3\) pieces, use a quadrature of order \(2\) in each cell, and do not merge small regions. The integration points are resampled every \(10\) epochs. On average, AQ involved \(5894\) integration points in the domain and \(292\) on the boundary. We match the computational budget of AQ for MC by sampling \(6000\) in-domain points and \(300\) points on the boundary. To obtain a uniform distribution of integration points for MC, we first perform a triangulation of the domain. Then, for as many points as we need, we draw a triangle at random with probability proportional to its measure, and sample one point uniformly in that triangle. We plot the solutions obtained by FEM, MC and AQ in Fig. 11(a).
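This area-weighted sampling can be sketched as follows: a triangle is drawn with probability proportional to its area and a point is placed uniformly inside it through the standard square-root map, which yields uniform points over the whole polygon. The triangulation is assumed to be given; the function name is ours.

```python
import numpy as np

def sample_in_triangulation(tris, n, rng):
    """Uniform samples in a polygon given as an array of triangles of
    shape (T, 3, 2): pick a triangle with probability proportional to
    its area, then sample uniformly inside it via the square-root map."""
    e1, e2 = tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0]
    areas = 0.5 * np.abs(e1[:, 0] * e2[:, 1] - e1[:, 1] * e2[:, 0])
    idx = rng.choice(len(tris), size=n, p=areas / areas.sum())
    s, t = np.sqrt(rng.random(n)), rng.random(n)
    u, v = s * (1.0 - t), s * t      # barycentric weights, uniform law
    return tris[idx, 0] + u[:, None] * e1[idx] + v[:, None] * e2[idx]

rng = np.random.default_rng(0)
two_tris = np.array([[[0, 0], [1, 0], [0, 1]],
                     [[1, 0], [1, 1], [0, 1]]], dtype=float)
pts = sample_in_triangulation(two_tris, 6000, rng)  # uniform on the unit square
```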
The pointwise difference between MC and FEM, and between AQ and FEM, is shown in Fig. 11(b). The relative \(L^{2}\) norm is \(7.99\times 10^{-2}\) for MC and \(4.68\times 10^{-2}\) for AQ. We mention that the training times for this last experiment are \(700\) seconds with AQ and \(625\) seconds with MC. We believe that this \(12\%\) increase in training time is well worth the \(70\%\) decrease in \(L^{2}\) norm. It is interesting to note that the pointwise error is concentrated in the same regions of the domain for both methods. However, when using AQ, the magnitude of the local maxima of the pointwise error is much lower.
### Summary
Our numerical experiments confirmed the effectiveness of our proposed integration method in a wide range of scenarios. AQ makes it possible to reach lower generalisation errors than MC while using fewer integration points. We have shown that this new integration method is more robust to the initial parameters of the network. Moreover, the convergence of the network is less noisy and the generalisation error decays more quickly, as AQ can sustain higher learning rates without introducing extra noise into the learning phase. We are convinced that our proposed integration method can help NNs become a more reliable and predictable tool for approximating PDEs, so that they can be employed to solve real-world problems. The most significant differences between MC and AQ were observed for the weak Poisson problem. One reason could be that the loss of the strong Poisson problem can be understood in a pointwise sense. In this case, MC minimises the pointwise distance between \(\Delta u\) and \(\Delta f\) at the location of the integration points. On the contrary, the loss of the weak Poisson problem is to be understood and minimised in a global sense. Our experiments also suggest that the benefits of our adaptive quadrature are most striking at the location of strong gradients.
### Discussion
Our approach suffers from the curse of dimensionality and is intended to be used for low-dimensional PDEs as a nonlinear alternative to the FEM. We noticed that the number of integration points in dimension one was of the order of a few tens to a few hundreds, whereas the usual number of integration points used for MC in the literature is rather a few hundreds to a few thousands. In dimension two, our method needs a few thousand integration points, whereas the models in the literature rely on a few tens of thousands of points. Even though our approach is still cheaper than MC in the two-dimensional case, the reduction factor is much smaller than in the one-dimensional scenario. We found that merging small linear regions can help reduce the number of regions and thus of integration points. This phenomenon should become more pronounced as the dimension increases, but we are only ever concerned with problems arising from physics, which involve space and time only. We would like to extend our method to the three-dimensional case in the future. We have observed that the time spent on extracting the mesh is offset by the reduction in the number of integration points, as the overall training very often takes less time with AQ than with MC. Our idea to regularise the activation function was motivated by the relationship between the FEM and \(\mathrm{ReLU}\) networks, as well as by the need for a smooth activation so that the energy functional itself is smooth.
We believe that it is more economical to use a smoothed CPWL activation function than to directly use a smooth activation such as \(\tanh\) because, intuitively, fewer pieces are necessary to reach a given tolerance in the former case. Whereas most of the current implementations of PINN and its variants can only handle tensor-product domains or balls, our proposed linearisation algorithm can naturally handle any polygonal domain. This is an important step towards the use of NN solvers for practical problems. One direction for future work is tackling the variational setting in the general case, when it is not possible to recast the problem as an energy minimisation. We point out that our domain decomposition strategy and adaptive quadrature can be extended to higher-order approximations (e.g. splines) of the activation function, which would result in curved boundaries for the regions.
Figure 11. Approximation (a) and pointwise error (b) for the solution of Eq. (7).
## 7. Conclusion
In this work, we introduce a procedure to define adaptive quadratures for smooth NNs suitable for low-dimensional PDE discretisation. It relies on a decomposition of the domain into regions where the network is almost linear. We also show the importance of smoothness in the training process and propose a regularisation of the standard ReLU activation function. We carry out the numerical analysis of the proposed framework to obtain upper bounds of the integration error. Numerical experimentation shows that our integration method helps make the convergence less noisy and quicker compared to Monte Carlo integration. We observe that the number of integration points needed to compute the loss function can be significantly reduced with our method and that our integration strategy is more robust to the initialisation of the network. We illustrated the benefit of our adaptive integration on the Poisson equation, but it can be extended to any PDE involving a symmetric and coercive bilinear form. While NN solvers are applied to increasingly intricate problems, they still lack reliability and theoretical guarantees before being used on real-world problems. Besides, most PINN frameworks can only handle tensor-product domains, while the FEM can be applied to much more complex domains. The adaptive quadratures we propose, the error bounds we prove, and the ability of our method to seamlessly handle polygonal domains represent a few steps towards bridging these gaps. For the sake of reproducibility, our implementation of the proposed adaptive quadrature is openly available at [24].
## Acknowledgements
This research was partially funded by the Australian Government through the Australian Research Council (project number DP220103160).
## Appendix A Proofs
### Proof of Lemma 1
Proof.: Let \(f\in\mathcal{A}\) and \(I\in P(f)\) be a compatible interval for the function \(f\). If \(f\) is linear in \(I\), then for all \(x<y\in I\), \(T[f,x]\) coincides with \(f\) on \([x,y]\), and so do \(T[f,y]\) and \(T[f,x,y]\). Besides, \(f^{\prime\prime}=0\), so inequalities Eq. (3) and Eq. (4) hold trivially. We thus suppose that \(f\) is either strictly convex or strictly concave in \(I\), which means that \(f^{\prime\prime}\) is zero at the two ends of \(I\) and nowhere else in \(I\). For \(\delta\geq 0\), we define the set \(E_{\delta}=\{(x,y)\in I^{2},y-x\geq\delta\}\). We also write \(E=E_{0}\). Let now \(\delta>0\) be fixed.
For all \((x,y)\in E_{\delta}\), the quantity \(\|f^{\prime\prime}\|_{L^{p/(2p+1)}(]x,y[)}\) is not zero since \(y>x\) and the integrand only vanishes on a set of zero measure. This enables us to consider the function \[r:E_{\delta}\ni(x,y)\mapsto\frac{\|f-T[f,x,y]\|_{L^{p}(]x,y[)}}{\|f^{\prime\prime}\|_{L^{p/(2p+1)}(]x,y[)}}.\] We want to show that \(r\) can be defined on \(E\) as a whole, and that \(r\) is bounded on \(E\). _(i) Continuity and differentiability of \(c\) on \(E_{\delta}\)._ By definition of \(I\), for all \(x<y\in I\), the tangents to \(f\) at \(x\) and \(y\) intersect at an abscissa that we write \(c(x,y)\), with \(x<c(x,y)<y\). More precisely, for finite \(x<y\in I\), the function \(c\) has the following expression \[c(x,y)=-\frac{[f(y)-yf^{\prime}(y)]-[f(x)-xf^{\prime}(x)]}{f^{\prime}(y)-f^{\prime}(x)},\] in which the denominator is not zero because \(f^{\prime}\) is injective in \(I\). We extend \(c\) to possible infinite values of \(x\) or \(y\) by setting \(c(x,+\infty)=\lim_{y\to+\infty}c(x,y)\) and \(c(-\infty,y)=\lim_{x\to-\infty}c(x,y)\). Since \(f\) and \(f^{\prime}\) are continuous on \(I\), and since \(f^{\prime}\) is injective on \(I\), we conclude that \(c\) is continuous on \(E_{\delta}\). By using the differentiation rule for quotients, similar arguments show that \(c\) is differentiable on \(E_{\delta}\). _(ii) Continuity of \(r\) on \(E_{\delta}\)._ Since \(f\) and \(f^{\prime}\) are continuous on \(\mathbb{R}\), the map \(x\mapsto T[f,x]\) is continuous. Besides, \(c\) is continuous, so the map \((x,y)\to T[f,x,y]\) is also continuous. Finally, any map \((x,y)\to\int_{x}^{y}\varphi(t,x,y)\,\text{d}t\) is continuous whenever \(\varphi\) is continuous. Here, \((x,y)\mapsto f-T[f,x,y]\) and \(f^{\prime\prime}\) are continuous, so we conclude that the numerator and the denominator of \(r\) are continuous. Moreover, since \(y-x\geq\delta>0\), the denominator of \(r\) is not zero, which makes \(r\) continuous on \(E_{\delta}\). The fact that \(f\) belongs to \(\mathcal{A}\) ensures that the numerator of \(r\) is bounded. Besides, the denominator is a strictly increasing function of \(y-x\), which is bounded from below by \(\delta\,\|f^{\prime\prime}\|_{L^{\infty}(]x,y[)}\), and from above because \(f^{\prime\prime}\) is in \(L^{p/(2p+1)}(\mathbb{R})\). This proves that \(r(x,y)\) is bounded on \(E_{\delta}\). Thus the only case where \(r\) could be unbounded is when \(x\) and \(y\) get arbitrarily close (\(\delta\) goes to zero). _(iii) Continuity and differentiability of \(c\) on \(E\)._ We now show that \(c\) can be continuously extended to \(E\) and that this extension is differentiable on \(E\). To do so, we prove that for every \(x\in I\), \(\lim_{y\to x}c(x,y)=x\). One can prove that for all \(y\in I\), \(\lim_{x\to y}c(x,y)=y\) using a similar argument. Let \(\varepsilon>0\) be small enough that \(x+\varepsilon\in I\). We start by rearranging the terms in the expression of \(c(x,x+\varepsilon)\) and obtain the following \[c(x,x+\varepsilon)=x+\frac{f^{\prime}(x+\varepsilon)-\varepsilon^{-1}(f(x+\varepsilon)-f(x))}{f^{\prime}(x+\varepsilon)-f^{\prime}(x)}\varepsilon.\] Now, let \(k\geq 2\) be the smallest integer such that the \(k\)-th derivative of \(f\) at \(x\) is not zero. By definition of a compatible interval, \(k\) will be equal to \(2\) for all interior points of \(I\). Since \(f\) is analytical in the neighbourhood of the zeros of \(f^{\prime\prime}\), there also exists such a \(k>2\) for the ends of \(I\).
We can always take \(\varepsilon\) as small as needed so that \(f\) is analytical around \(x\) in a neighbourhood of radius \(\varepsilon\). We obtain a Taylor expansion of the fraction above by expanding its numerator and denominator to order \(k-1\) around \(x\), expressed at \(x+\varepsilon\), and arrive at \[c(x,x+\varepsilon)=x+\frac{k-1}{k}\varepsilon+O(\varepsilon^{2}).\] This shows that the function \(\hat{c}\) defined by \(\hat{c}(x,y)=c(x,y)\) if \(x<y\) and \(\hat{c}(x,x)=x\) is continuous and differentiable on \(E\). _(iv) Continuity of \(r\) on \(E\)._ Let \(x\in I\). We show that \(\varepsilon\mapsto r(x,x+\varepsilon)\) has a finite limit when \(\varepsilon\) goes to zero by finding an equivalent of the numerator and the denominator of \(r\). We write the numerator as \((I_{1}(\varepsilon)^{p}+I_{2}(\varepsilon)^{p})^{1/p}\), where \(I_{1}\) and \(I_{2}\) are the integrals on \([x,c(x,x+\varepsilon)]\) and \([c(x,x+\varepsilon),x+\varepsilon]\) respectively. The denominator of \(r\) is written \(I_{3}(\varepsilon)\). Here again, let \(k\geq 2\) be the smallest integer such that the \(k\)-th derivative of \(f\) at \(x\) is not zero. We use the mean-value form of the remainder in the \(k\)-th-order Taylor expansion of \(f\) to obtain that for all \(t\in\left]x,c(x,x+\varepsilon)\right[\), there exists \(\eta_{t}\in[x,t]\) such that \(f(t)-T[f,x](t)=\frac{1}{k!}f^{(k)}(\eta_{t})(t-x)^{k}\). Then, owing to the mean-value theorem for integrals since \(f^{(k)}\) is continuous, there exists \(\eta\in\left]x,c(x,x+\varepsilon)\right[\) such that \[I_{1}(\varepsilon)^{p}=\int_{x}^{c(x,x+\varepsilon)}\frac{1}{k!}|f^{(k)}(\eta _{t})(t-x)^{k}|^{p}\,\mathrm{d}t=\frac{1}{k!^{p}(kp+1)}|f^{(k)}(\eta)|^{p}(c(x,x+\varepsilon)-x)^{kp+1}.\] We can now use the previous Taylor expansion of \(c\) around \((x,x)\) and also expand \(f^{(k)}(\eta)\) around \(x\) since \(\eta\) goes to \(x\) as \(\varepsilon\) approaches zero. We finally obtain the following expansion for \(I_{1}(\varepsilon)^{p}\) \[I_{1}(\varepsilon)^{p}=\frac{1}{k!^{p}(kp+1)}\left(\frac{k-1}{k}\right)^{kp+1 }|f^{(k)}(x)|^{p}\varepsilon^{kp+1}+o(\varepsilon^{kp+1}).\] We obtain a \(k\)-th order Taylor expansion of \(f-T[f,x+\varepsilon]\) as follows: there exist \(\eta_{1},\eta_{2}\in\left]x,x+\varepsilon\right[\), such that for all \(t\in\left]c(x,x+\varepsilon),x+\varepsilon\right[\), there exists \(\eta_{t}\) between \(x\) and \(t\) such that \[f(t)-T[f,x+\varepsilon](t)=\frac{1}{k!}\left[f^{(k)}(\eta_{t})(t-x)^{k}-kf^{( k)}(\eta_{2})\varepsilon^{k-1}(t-x)+kf^{(k)}(\eta_{2})\varepsilon^{k}-f^{(k)}( \eta_{1})\varepsilon^{k}\right]+o(\varepsilon^{k}).\] We observe that \(\eta_{1}\), \(\eta_{2}\) and \(\eta_{t}\) go to \(x\) as \(\varepsilon\) approaches zero. This means that the Taylor expansion of \(f^{(k)}(\eta_{t})\) is \(f^{(k)}(x)+O(\varepsilon)\), and similarly at \(\eta_{1}\) and \(\eta_{2}\). Besides, using the expansion of \(c(x,x+\varepsilon)\), we show that \(x+\frac{k-1}{k}\varepsilon+o(\varepsilon)\leq t\leq x+\varepsilon\), which means that \(t-x\) is of the order of \(\varepsilon\). 
After keeping the terms of order \(\varepsilon^{k}\), factoring by \(f^{(k)}(x)\varepsilon^{k}\) and introducing the change of variable \(u=\frac{t-x}{\varepsilon}\), we obtain, for the difference \(d_{2}(t)=f(t)-T[f,x+\varepsilon](t)\) appearing in \(I_{2}\), \[d_{2}(t)=\frac{f^{(k)}(x)}{k!}\left[u^{k}-ku+k-1\right]\varepsilon^{k}+o(\varepsilon^{k}).\] The bounds of the integral become \(\frac{k-1}{k}+o(1)\) and \(1\), and we decompose the integration domain into \(\left[\frac{k-1}{k},1\right]\) and \(\left[\frac{k-1}{k}+o(1),\frac{k-1}{k}\right]\). The second integral is negligible compared to the first because its integrand is bounded and its integration domain has a length of \(o(1)\). Putting everything together, we obtain the following expansion for \(I_{2}(\varepsilon)^{p}\) \[I_{2}(\varepsilon)^{p}=\frac{1}{k!^{p}}\left(\int_{\frac{k-1}{k}}^{1}\left|u^{k}-1-k(u-1)\right|^{p}\,\mathrm{d}u\right)|f^{(k)}(x)|^{p}\varepsilon^{kp+1}+o(\varepsilon^{kp+1}).\] Using a similar argument, the Taylor expansion of the integrand of the denominator of \(r\) is \(\frac{1}{(k-2)!}f^{(k)}(x)(t-x)^{k-2}+o((t-x)^{k-2})\) and, taking the \(L^{p/(2p+1)}\) norm of this relationship, we find that the expansion of \(I_{3}(\varepsilon)^{p}\) is \[I_{3}(\varepsilon)^{p}=\frac{1}{(k-2)!^{p}}\left(\frac{2p+1}{kp+1}\right)^{2p+1}|f^{(k)}(x)|^{p}\varepsilon^{kp+1}+o(\varepsilon^{kp+1}).\] Altogether, in the expression of \(r(x,x+\varepsilon)^{p}\), we can factor the numerator and the denominator by \(|f^{(k)}(x)|^{p}\varepsilon^{kp+1}\) and, taking the limit as \(\varepsilon\) goes to zero, we obtain \[\lim_{\varepsilon\to 0}r(x,x+\varepsilon)^{p}=\frac{1}{(k(k-1))^{p}}\left(\frac{kp+1}{2p+1}\right)^{2p+1}\left(\frac{1}{kp+1}\left(\frac{k-1}{k}\right)^{kp+1}+\int_{\frac{k-1}{k}}^{1}\left|u^{k}-1-k(u-1)\right|^{p}\,\mathrm{d}u\right).\] This shows that the function \(\hat{r}\), defined by \(\hat{r}(x,y)=r(x,y)\) if \(x<y\) and \(\hat{r}(x,x)\) as the quantity above, is continuous on \(E\) as a whole. Besides, \(\hat{r}\) has finite limits on \(\partial E\), so we conclude that it is bounded in \(E\). _(v) Proof of the other inequalities._ The same proof can be adapted for the bound on the derivatives, and for the bound in \(L^{\infty}\) norms. The arguments are the same everywhere, except for the boundedness around points of the type \((x,x)\), where other equivalents are found. It is still possible to define the corresponding function \(r\) on \(E\) as a whole and to show that it is bounded in \(E\).
### Proof of Proposition 1
Proof.: This proof is inspired by the heuristics given in [18]. We keep the notations of the definition in Subsect. 4.2. Let \(I\) be a compatible interval of \(f\). If \(f\) is linear on \(I\), then \(f\) coincides with its tangent at either end of \(I\), so it is enough to define the restriction of \(\pi_{n,p}[f]\) to \(I\) as this tangent. We thus decompose the \(p\)-th power of the total \(L^{p}\) norm over the \(\kappa(f)\) compatible intervals of \(f\) where \(f\) is not linear. We decide to place \(n^{\prime}=\lfloor n/\kappa(f)\rfloor\) points in each interval. Let \(I=[a,b]\) be a compatible interval for \(f\) where \(f\) is not linear. We write \(a<\xi_{1}<\ldots<\xi_{n^{\prime}}<b\) for the free points in \(I\). For convenience we introduce \(\xi_{0}=a\) and \(\xi_{n^{\prime}+1}=b\). We split the integral over \(I\) into the intervals \([\xi_{i},\xi_{i+1}]\) and, using Eq. (3) from Lem.
1, we readily have \[\|f-\pi_{n,p}[f]\|_{L^{p}(I)}^{p}=\sum_{i=0}^{n^{\prime}}\|f-T[f,\xi_{i},\xi_{i+1}]\|_{L^{p}([\xi_{i},\xi_{i+1}])}^{p}\leq A_{p}(f)^{p}\sum_{i=0}^{n^{\prime}}\left\|f^{\prime\prime}\right\|_{L^{p/(2p+1)}([\xi_{i},\xi_{i+1}])}^{2p+1}.\] We now define the \((\xi_{i})_{1\leq i\leq n^{\prime}}\) such that the quantities \(\left\|f^{\prime\prime}\right\|_{L^{p/(2p+1)}([\xi_{i},\xi_{i+1}])}^{p/(2p+1)}\) are all equal to the same constant, which we write \(C_{I}\). By the linearity of the integral, we find that \((n^{\prime}+1)C_{I}=\left\|f^{\prime\prime}\right\|_{L^{p/(2p+1)}(I)}^{p/(2p+1)}\) and \(\xi_{i}\) is such that \[\int_{\xi_{0}}^{\xi_{i}}\left|f^{\prime\prime}(t)\right|^{p/(2p+1)}\,\mathrm{d}t=iC_{I}=\frac{i}{n^{\prime}+1}\int_{I}\left|f^{\prime\prime}(t)\right|^{p/(2p+1)}\,\mathrm{d}t.\] The sum above evaluates to \((n^{\prime}+1)C_{I}^{2p+1}=(n^{\prime}+1)^{-2p}\left\|f^{\prime\prime}\right\|_{L^{p/(2p+1)}(I)}^{p}\). We apply the same method in all the compatible intervals for \(f\), which we write \(I_{k}\) for \(1\leq k\leq\kappa(f)\). Since \(2/5\leq p/(2p+1)\leq 1/2\) and \(f^{\prime\prime}\in L^{q}\) for all \(q\geq 2/5\), we bound the \(L^{p/(2p+1)}\) norm of \(f^{\prime\prime}\) on each \(I_{k}\) by that on \(\mathbb{R}\) as a whole. We finally obtain \[\|f-\pi_{n,p}[f]\|_{L^{p}(\mathbb{R})}^{p}=\sum_{k=1}^{\kappa(f)}\|f-\pi_{n,p}[f]\|_{L^{p}(I_{k})}^{p}\leq\sum_{k=1}^{\kappa(f)}A_{p}(f)^{p}(n^{\prime}+1)C_{I_{k}}^{2p+1}\leq\kappa(f)\frac{A_{p}(f)^{p}\left\|f^{\prime\prime}\right\|_{L^{p/(2p+1)}(\mathbb{R})}^{p}}{(n^{\prime}+1)^{2p}}\leq\frac{A_{p}(f)^{p}\kappa(f)^{2p+1}\left\|f^{\prime\prime}\right\|_{L^{p/(2p+1)}(\mathbb{R})}^{p}}{n^{2p}},\] where we obtained the last inequality by multiplying the numerator and denominator by \(\kappa(f)^{2p}\) and used the fact that for all \(m,n>0\), \(m\left\lfloor n/m\right\rfloor\geq n-m\). We apply the same method to \(f^{\prime}\) using Eq. (4) from Lem. 1. In each \(I_{k}\), we sample the \((\xi_{i})_{1\leq i\leq n^{\prime}}\) such that \(\int_{\xi_{0}}^{\xi_{i}}\left|f^{\prime\prime}(t)\right|^{p/(p+1)}\,\mathrm{d}t=iD_{I_{k}}\), where \((n^{\prime}+1)D_{I_{k}}=\left\|f^{\prime\prime}\right\|_{L^{p/(p+1)}(I_{k})}^{p/(p+1)}\). Then it holds \[\|f^{\prime}-\pi_{n,p}[f]^{\prime}\|_{L^{p}(\mathbb{R})}^{p}\leq\sum_{k=1}^{\kappa(f)}B_{p}(f)^{p}(n^{\prime}+1)D_{I_{k}}^{p+1}\leq\frac{B_{p}(f)^{p}\kappa(f)^{p+1}\left\|f^{\prime\prime}\right\|_{L^{p/(p+1)}(\mathbb{R})}^{p}}{n^{p}}.\] The same bounds are obtained for the \(L^{\infty}\) norms, by considering the maximum \(L^{\infty}\) norm over the \(n^{\prime}+1\) subsegments and sampling the \((\xi_{i})\) according to \(\int_{\xi_{0}}^{\xi_{i}}\left|f^{\prime\prime}\right|^{1/2}\) for the first bound and \(\int_{\xi_{0}}^{\xi_{i}}\left|f^{\prime\prime}\right|\) for the second bound.
### Proof of Lemma 2
Proof.: We know that the cells in \(\tau_{\mathbb{R}^{d}}(\boldsymbol{\Theta})\) are all convex since they are defined as the interior of intersections of half-planes. The final mesh \(\tau_{\Omega}(\boldsymbol{\Theta})\) is the intersection of \(\tau_{\mathbb{R}^{d}}(\boldsymbol{\Theta})\) with \(\Omega\). Let \(K\in\tau_{\Omega}(\boldsymbol{\Theta})\). There exists \(K^{\prime}\in\tau_{\mathbb{R}^{d}}(\boldsymbol{\Theta})\) such that \(K=K^{\prime}\cap\Omega\). If \(K^{\prime}\subseteq\Omega\), then \(K=K^{\prime}\) is convex. By the contraposition principle, if \(K\) is not convex, then \(K^{\prime}\nsubseteq\Omega\).
This shows that if \(K\) is a non-convex cell in \(\tau_{\Omega}(\boldsymbol{\Theta})\), then \(K\) intersects with both \(\Omega\) and \(\mathbb{R}^{d}\backslash\Omega\), which means that \(K\) intersects with the boundary of \(\Omega\). In the case where \(\Omega\) is convex, the intersection of two convex sets being convex implies that \(K=K^{\prime}\cap\Omega\) is always convex. ### Proof of Proposition 2 Proof.: We show the two bounds Eq. (5) and Eq. (6) on a single-layer network, and extend them to arbitrary NNs by induction. We also prove that \(\left\|\nabla u\right\|_{L^{\infty}(\Omega)}\leq C_{u}\left\|f^{\prime}\right\|_ {L^{\infty}(\mathbb{R})}^{\alpha}\) and \(\left\|\nabla v\right\|_{L^{\infty}(\Omega)}\leq C_{v}\left\|g^{\prime}\right\| _{L^{\infty}(\mathbb{R})}^{\alpha}\) for some constants \(C_{u}\) and \(C_{v}\) that only depend on \(\boldsymbol{\Theta}\), and some integer \(\alpha\). To simplify notations, we write \(\left\|\cdot\right\|_{\Omega}\) instead of \(\left\|\cdot\right\|_{L^{\infty}(\Omega)}\) and similarly \(\left\|\cdot\right\|_{\mathbb{R}}\) instead of \(\left\|\cdot\right\|_{L^{\infty}(\mathbb{R})}\). For an affine map \(\boldsymbol{\Theta}\) associated with a matrix \(\boldsymbol{W}\), we write \(\left\|\boldsymbol{\Theta}\right\|_{\infty}=\max_{i,j}\left|\boldsymbol{W}_{i,j}\right|\). In this proof, we write \(C_{1}\), \(C_{2}\), \(C_{3}\), \(C_{u}\), \(C_{v}\) and \(\alpha\) the constants in the hypothesis of the induction and write the new constants after the induction with a prime symbol. _(i) Initialisation._ We take an affine map \(\boldsymbol{\Theta}\) from \(\mathbb{R}^{d}\) to \(\mathbb{R}^{m}\) associated with a weight matrix \(\boldsymbol{W}\in\mathbb{R}^{m\times d}\) and we consider the functions \(u=f\circ\boldsymbol{\Theta}\) and \(v=g\circ\boldsymbol{\Theta}\). The partial derivatives of \(u\) are given by \(\partial_{j}u_{i}=\boldsymbol{W}_{i,j}f^{\prime}\circ\boldsymbol{\Theta}_{i}\). We readily obtain that \(\left\|u_{i}-v_{i}\right\|_{\Omega}\leq\left\|f-g\right\|_{\mathbb{R}}\) and \(\left\|\partial_{j}u_{i}-\partial_{j}v_{i}\right\|_{\Omega}\leq\left|\boldsymbol {W}_{i,j}\right|\left\|f^{\prime}-g^{\prime}\right\|_{\mathbb{R}}\). We also notice that \(\partial_{j}u_{i}\) is bounded by \(\left|\boldsymbol{W}_{i,j}\right|\left\|f^{\prime}\right\|_{\mathbb{R}}\). Similarly, \(\partial_{j}v_{i}\) is bounded by an analogous constant. We conclude by taking the maximum on \(1\leq i\leq m\) and \(1\leq j\leq d\). We can take \(C_{1}=1\), \(C_{2}\geq 0\), \(C_{3}=C_{u}=C_{v}=\left\|\boldsymbol{\Theta}\right\|_{\infty}\) and \(\alpha=1\). _(ii) Induction with affine map._ Let \(u\) and \(v\) be two functions from \(\mathbb{R}^{d}\) to \(\mathbb{R}^{n}\) such that the two bounds hold and \(\nabla u\) and \(\nabla v\) are bounded in \(\Omega\). Let \(\boldsymbol{\Theta}\) be an affine map from \(\mathbb{R}^{n}\) to \(\mathbb{R}^{m}\). We want to show that the two bounds still hold for \(\boldsymbol{\Theta}\circ u-\boldsymbol{\Theta}\circ v\). 
We use the triangle inequality and the induction hypothesis to obtain that \[\left\|(\boldsymbol{\Theta}\circ u)_{i}-(\boldsymbol{\Theta}\circ v)_{i}\right\|_{\Omega}\leq\sum_{k=1}^{n}\left|\boldsymbol{W}_{i,k}\right|\left\|u_{k}-v_{k}\right\|_{\Omega}\leq nC_{1}\left\|\boldsymbol{\Theta}\right\|_{\infty}\left\|f-g\right\|_{\mathbb{R}}.\] The partial derivatives of \(\boldsymbol{\Theta}\circ u\) are given by \(\partial_{j}(\boldsymbol{\Theta}\circ u)_{i}=\sum_{k=1}^{n}\boldsymbol{W}_{i,k}\partial_{j}u_{k}\). Using the triangle inequality and the induction hypothesis, we arrive at \[\left\|\partial_{j}(\boldsymbol{\Theta}\circ u)_{i}-\partial_{j}(\boldsymbol{\Theta}\circ v)_{i}\right\|_{\Omega}\leq\sum_{k=1}^{n}\left|\boldsymbol{W}_{i,k}\right|\left\|\partial_{j}u_{k}-\partial_{j}v_{k}\right\|_{\Omega}\leq n\left\|\boldsymbol{\Theta}\right\|_{\infty}(C_{2}\left\|f-g\right\|_{\mathbb{R}}+C_{3}\left\|f^{\prime}-g^{\prime}\right\|_{\mathbb{R}}).\] We verify that \(\left\|\partial_{j}(\boldsymbol{\Theta}\circ u)_{i}\right\|_{\Omega}\leq C_{u}\sum_{k=1}^{n}\left|\boldsymbol{W}_{i,k}\right|\left\|f^{\prime}\right\|_{\mathbb{R}}^{\alpha}\) and the same argument shows that \(\partial_{j}(\boldsymbol{\Theta}\circ v)_{i}\) is bounded by a similar constant. We conclude by taking the maximum over \(1\leq i\leq m\) and \(1\leq j\leq d\). We can take \(C_{1}^{\prime}=nC_{1}\left\|\boldsymbol{\Theta}\right\|_{\infty}\), \(C_{2}^{\prime}=nC_{2}\left\|\boldsymbol{\Theta}\right\|_{\infty}\), \(C_{3}^{\prime}=nC_{3}\left\|\boldsymbol{\Theta}\right\|_{\infty}\), \(C_{u}^{\prime}=nC_{u}\left\|\boldsymbol{\Theta}\right\|_{\infty}\), \(C_{v}^{\prime}=nC_{v}\left\|\boldsymbol{\Theta}\right\|_{\infty}\) and \(\alpha^{\prime}=\alpha\). _(iii) Induction with activation._ Let \(u\) and \(v\) be two functions from \(\mathbb{R}^{d}\) to \(\mathbb{R}^{n}\) such that the two bounds hold and \(\nabla u\) and \(\nabla v\) are bounded in \(\Omega\). We want to show that the two bounds still hold for \(f\circ u-g\circ v\). Using the triangle inequality, the fact that \(f\) is Lipschitz and the induction hypothesis, we obtain \[\left\|(f\circ u)_{i}-(g\circ v)_{i}\right\|_{\Omega}\leq\left\|(f\circ u)_{i}-(f\circ v)_{i}\right\|_{\Omega}+\left\|(f\circ v)_{i}-(g\circ v)_{i}\right\|_{\Omega}\leq\left(C_{1}\operatorname{Lip}(f)+1\right)\left\|f-g\right\|_{\mathbb{R}}.\] The partial derivatives of \(f\circ u\) are given by \(\partial_{j}(f\circ u)_{i}=(f^{\prime}\circ u_{i})\partial_{j}u_{i}\). The triangle inequality leads to \[\left\|\partial_{j}(f\circ u)_{i}-\partial_{j}(g\circ v)_{i}\right\|_{\Omega}\leq\left\|(f^{\prime}\circ u_{i})\partial_{j}u_{i}-(f^{\prime}\circ u_{i})\partial_{j}v_{i}\right\|_{\Omega}+\left\|(f^{\prime}\circ u_{i})\partial_{j}v_{i}-(f^{\prime}\circ v_{i})\partial_{j}v_{i}\right\|_{\Omega}+\left\|(f^{\prime}\circ v_{i})\partial_{j}v_{i}-(g^{\prime}\circ v_{i})\partial_{j}v_{i}\right\|_{\Omega}.\] We bound the first term by factorising out \(f^{\prime}\circ u_{i}\) and applying the induction hypothesis. We use the fact that \(f^{\prime}\) is Lipschitz and the induction hypothesis to bound the second term. The third term is readily bounded by \(\left\|\nabla v\right\|_{\Omega}\left\|f^{\prime}-g^{\prime}\right\|_{\mathbb{R}}\).
Altogether, we obtain the following bound \[\left\|\partial_{j}(f\circ u)_{i}-\partial_{j}(g\circ v)_{i}\right\|_{\Omega}\leq\left(C_{2}\left\|f^{\prime}\right\|_{\mathbb{R}}+C_{1}C_{v}\operatorname{Lip}(f^{\prime})\left\|g^{\prime}\right\|_{\mathbb{R}}^{\alpha}\right)\left\|f-g\right\|_{\mathbb{R}}+\left(C_{3}\left\|f^{\prime}\right\|_{\mathbb{R}}+C_{v}\left\|g^{\prime}\right\|_{\mathbb{R}}^{\alpha}\right)\left\|f^{\prime}-g^{\prime}\right\|_{\mathbb{R}}.\] We observe that \(\left\|\partial_{j}(f\circ u)_{i}\right\|_{\Omega}\leq C_{u}\left\|f^{\prime}\right\|_{\mathbb{R}}^{\alpha+1}\) and that \(\partial_{j}(g\circ v)_{i}\) is bounded by a similar constant. We conclude by taking the maximum over \(1\leq i\leq m\) and \(1\leq j\leq d\). We can take \(C_{1}^{\prime}=C_{1}\operatorname{Lip}(f)+1\), \(C_{2}^{\prime}=C_{2}\left\|f^{\prime}\right\|_{\mathbb{R}}+C_{1}C_{v}\operatorname{Lip}(f^{\prime})\left\|g^{\prime}\right\|_{\mathbb{R}}^{\alpha}\), \(C_{3}^{\prime}=C_{3}\left\|f^{\prime}\right\|_{\mathbb{R}}+C_{v}\left\|g^{\prime}\right\|_{\mathbb{R}}^{\alpha}\), \(C_{u}^{\prime}=C_{u}\), \(C_{v}^{\prime}=C_{v}\) and \(\alpha^{\prime}=\alpha+1\). ## Appendix B Algorithms ### Mesh extraction In this section, we provide more detail on the way to construct a mesh adapted to a neural network, based on a CPWL approximation \(\pi[\rho]\) of its activation function from \(\mathbb{R}\) to \(\mathbb{R}\), which we apply element-wise on the coordinates of \(\pi[u_{k}]_{|R}\). Let \((\xi_{j})_{1\leq j\leq K}\) denote the breakpoints of \(\pi[\rho]\). We also introduce the local coefficients \((\alpha_{j},\beta_{j})\) of \(\pi[\rho]\) such that the restriction of \(\pi[\rho]\) to \([\xi_{j},\xi_{j+1}]\) has the expression \(x\mapsto\alpha_{j}x+\beta_{j}\). To identify the regions where \(\pi[\rho]\circ\pi[u_{k}]_{|R}\) is linear, we need to consider the equations \(\boldsymbol{W}_{R,i}\boldsymbol{x}+\boldsymbol{b}_{R,i}=\xi_{j}\) for all \(1\leq i\leq m\) and \(1\leq j\leq K\). These hyperplanes intersect in \(R\) and define subregions adapted to \(\pi[\rho]\circ\pi[u_{k}]_{|R}\). This is the step that corresponds to \(\mathtt{cutRegion}\) in Alg. 1. We now present specific algorithms to define these subregions in dimensions zero, one and two. #### b.1.1. Dimensions zero and one In dimension zero, a region is reduced to a single point, so nothing needs to be done. In the one-dimensional case, \(R\) is a segment that we write \([p_{-},p_{+}]\). We first compute the range of each linear map \(h_{i}:x\mapsto w_{i}x+b_{i}\) on \(R\) by computing their values at \(p_{-}\) and \(p_{+}\). We then find which breakpoints \(\xi_{j}\) belong to that range via a binary search, and we compute the corresponding preimages \(p_{i,j}\) of \(\xi_{j}\) under \(h_{i}\) with normalised coordinates as follows \[p_{i,j}=\frac{1}{2}(p_{-}+p_{+})+\frac{t_{i,j}}{2}(p_{+}-p_{-}),\qquad t_{i,j}=\frac{2\xi_{j}-(h_{i}(p_{+})+h_{i}(p_{-}))}{h_{i}(p_{+})-h_{i}(p_{-})}\in[-1,+1]\,.\] We found this formulation more numerically stable than expressing \(p_{i,j}\) in terms of \(w_{i}\), \(b_{i}\) and \(\xi_{j}\). We sort the normalised coordinates \(t_{i,j}\) and obtain the corresponding sorted abscissae \((p_{l})\).
For each segment \([p_{l},p_{l+1}]\), we evaluate the linear maps \(h_{i}\) at \(\frac{1}{2}(p_{l}+p_{l+1})\) and retrieve the coefficients \((\alpha_{l,i},\beta_{l,i})\) of \(\pi[\rho]\) at this location to update the local expression \((\boldsymbol{W}_{R},\boldsymbol{b}_{R})\) into \((\mathrm{diag}(\boldsymbol{\alpha}_{l})\boldsymbol{W}_{R},\mathrm{diag}(\boldsymbol{\alpha}_{l})\boldsymbol{b}_{R}+\boldsymbol{\beta}_{l})\). #### b.1.2. Dimension two The two-dimensional case is much more involved, as the hyperplanes are lines that can intersect with the boundary of \(R\) and among one another. We consider four steps. _(i) Intersection of the hyperplanes with \(\partial R\)._ Since \(R\) is a convex polygon, each hyperplane \(h_{i}:\boldsymbol{x}\mapsto\boldsymbol{w}_{i}\boldsymbol{x}+b_{i}\) can intersect with \(\partial R\) at two points at most. If the number of intersections is zero or one, then \(R\) lies entirely on one side of the hyperplane, thus we only need to handle the case when there are two intersection points. One notable corner case occurs when the hyperplane coincides with an edge of the polygon. In this degenerate case, the region is not cut. The clipping of a line by a convex polygon can be done in logarithmic time, using a method akin to binary search. We refer the reader to [25] for the full algorithm. We write \((P_{i,j}^{-},P_{i,j}^{+})\) for the clipping of \(h_{i}\) by \(R\), and \((P_{l})\) for the collection of these vertices. _(ii) Intersection of the hyperplanes among themselves._ Next, we check the intersection of pairwise combinations of the lines obtained above. It is important to note that for a fixed row \(i\), the lines \((P_{i,j}^{-},P_{i,j}^{+})\) are parallel. Indeed, they are solutions to \(\boldsymbol{w}_{i}\boldsymbol{x}+b_{i}=\xi\) for two different values of \(\xi\). This means that we can skip the tests corresponding to lines arising from the same row. Let \(i_{1}\neq i_{2}\) be two different row indices, \(j_{1}\) and \(j_{2}\) be two breakpoint indices, and suppose that \((P_{i_{1},j_{1}}^{-},P_{i_{1},j_{1}}^{+})\) and \((P_{i_{2},j_{2}}^{-},P_{i_{2},j_{2}}^{+})\) are two non-parallel lines. We compute their intersection \(Q\) via the following parametrisation and solve for \(s\) and \(t\): \[Q=\overline{P}_{i_{1},j_{1}}+s\Delta P_{i_{1},j_{1}}=\overline{P}_{i_{2},j_{2}}+t\Delta P_{i_{2},j_{2}},\qquad\left[\Delta P_{i_{1},j_{1}}\quad-\Delta P_{i_{2},j_{2}}\right]\begin{bmatrix}s\\ t\end{bmatrix}=\overline{P}_{i_{2},j_{2}}-\overline{P}_{i_{1},j_{1}},\] where \(\overline{P}_{i,j}=\frac{1}{2}(P_{i,j}^{-}+P_{i,j}^{+})\) is the middle of the segment and \(\Delta P_{i,j}=\frac{1}{2}(P_{i,j}^{+}-P_{i,j}^{-})\) is half the direction vector. We need to check that \(s\) and \(t\) are between \(-1\) and \(+1\). _(iii) Representation of these intersections as an adjacency graph._ We initialise a connectivity graph with the edges of the region \(R\). During step _(i)_, for each line \((P_{i,j}^{-},P_{i,j}^{+})\) that cuts the polygon, we add the points \(P_{i,j}^{-}\) and \(P_{i,j}^{+}\) to the edges they belong to, and add an arc in the graph that connects these two points. During step _(ii)_, we only need to add new vertices to the graph. _(iv) Extraction of the smallest cycles in this graph._ Finding the partition of \(R\) where the composition by \(\pi[\rho]\) is CPWL amounts to finding the fundamental cycle basis of the undirected graph defined above. We refer to [26] for a definition of the fundamental cycle basis of a planar graph and for an algorithm to build it.
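As a concrete illustration of steps _(i)_ and _(ii)_, here is a minimal Python sketch. It is our own re-implementation, not the paper's code: the helper names are hypothetical, the clipping walks the edges naively in linear rather than logarithmic time, and the degenerate cases discussed above are ignored.

```python
import numpy as np

def clip_level_line(w, b, xi, vertices):
    """Step (i): intersect the line {x : w.x + b = xi} with the boundary
    of a convex polygon given as an (n, 2) array of ordered vertices."""
    points = []
    n = len(vertices)
    for k in range(n):
        p, q = vertices[k], vertices[(k + 1) % n]
        fp, fq = w @ p + b - xi, w @ q + b - xi
        if fp * fq <= 0 and fp != fq:        # the edge crosses the level line
            points.append(p + fp / (fp - fq) * (q - p))
    return points[:2]                        # (P-, P+), or fewer intersections

def segment_intersection(Pm1, Pp1, Pm2, Pp2, tol=1e-12):
    """Step (ii): solve [dP1  -dP2] [s, t]^T = mid2 - mid1 and keep the
    solution only if both normalised coordinates lie in [-1, +1]."""
    mid1, d1 = 0.5 * (Pm1 + Pp1), 0.5 * (Pp1 - Pm1)
    mid2, d2 = 0.5 * (Pm2 + Pp2), 0.5 * (Pp2 - Pm2)
    A = np.column_stack([d1, -d2])
    if abs(np.linalg.det(A)) < tol:          # parallel lines never intersect
        return None
    s, t = np.linalg.solve(A, mid2 - mid1)
    return mid1 + s * d1 if -1 <= s <= 1 and -1 <= t <= 1 else None
```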
#### b.1.3. Notes on the algorithm In Alg. 1, \(\mathtt{weight}(\ell)\) and \(\mathtt{bias}(\ell)\) denote the parameters of the network at layer \(\ell\). The instruction \(\mathtt{coefficients}\) is just a lookup operation to retrieve the local coefficients of \(\pi[\rho]\). Here its outputs are vectors because the local map on a region is vector-valued. Finally, if \(\boldsymbol{\alpha}\) is a vector, \(\mathtt{diag}(\boldsymbol{\alpha})\) is the diagonal matrix whose diagonal coefficients are \(\boldsymbol{\alpha}\). ### Decomposition of convex polygons In the general two-dimensional case, the cells of the mesh adapted to \(u\) are arbitrary convex polygons. Numerical quadrature rules are known for triangles and convex quadrangles, so we decide to split the cells into a collection of these two reference shapes. We encode hard rules to decompose convex polygons with up to ten vertices into triangles and convex quadrangles. For example, an octagon is split into three quadrangles by considering the three cycles \((1,2,3,4)\), \((4,5,6,7)\), \((7,8,1,4)\). Larger polygons are recursively split in half until they have fewer than ten vertices.
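A minimal Python sketch of this decomposition step is given below. It is our own illustration, not the paper's hard-coded rules: small polygons are cut into a fan of quadrangles (plus one closing triangle when needed) around the first vertex, and large polygons are halved recursively, so the cells it produces for the octagon differ from the cycles listed above while following the same strategy.

```python
def decompose_convex_polygon(poly):
    """Split a convex polygon (cyclic list of vertex labels) into
    triangles and convex quadrangles."""
    n = len(poly)
    if n <= 4:                                  # already a reference shape
        return [poly]
    if n > 10:                                  # halve, duplicating the cut diagonal
        m = n // 2
        return (decompose_convex_polygon(poly[: m + 1])
                + decompose_convex_polygon(poly[m:] + poly[:1]))
    cells, j = [], 1
    while n - j > 3:                            # peel quadrangles off a fan
        cells.append([poly[0], poly[j], poly[j + 1], poly[j + 2]])
        j += 2
    cells.append([poly[0]] + poly[j:])          # closing triangle or quadrangle
    return cells

# An octagon labelled 1..8 yields three quadrangles:
print(decompose_convex_polygon(list(range(1, 9))))
# [[1, 2, 3, 4], [1, 4, 5, 6], [1, 6, 7, 8]]
```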
2310.09612
Deep Neural Networks Can Learn Generalizable Same-Different Visual Relations
Although deep neural networks can achieve human-level performance on many object recognition benchmarks, prior work suggests that these same models fail to learn simple abstract relations, such as determining whether two objects are the same or different. Much of this prior work focuses on training convolutional neural networks to classify images of two same or two different abstract shapes, testing generalization on within-distribution stimuli. In this article, we comprehensively study whether deep neural networks can acquire and generalize same-different relations both within and out-of-distribution using a variety of architectures, forms of pretraining, and fine-tuning datasets. We find that certain pretrained transformers can learn a same-different relation that generalizes with near perfect accuracy to out-of-distribution stimuli. Furthermore, we find that fine-tuning on abstract shapes that lack texture or color provides the strongest out-of-distribution generalization. Our results suggest that, with the right approach, deep neural networks can learn generalizable same-different visual relations.
Alexa R. Tartaglini, Sheridan Feucht, Michael A. Lepori, Wai Keen Vong, Charles Lovering, Brenden M. Lake, Ellie Pavlick
2023-10-14T16:28:57Z
http://arxiv.org/abs/2310.09612v1
# Deep Neural Networks Can Learn Generalizable Same-Different Visual Relations ###### Abstract Although deep neural networks can achieve human-level performance on many object recognition benchmarks, prior work suggests that these same models fail to learn simple abstract relations, such as determining whether two objects are the _same_ or _different_. Much of this prior work focuses on training convolutional neural networks to classify images of two same or two different abstract shapes, testing generalization on within-distribution stimuli. In this article, we comprehensively study whether deep neural networks can acquire and generalize same-different relations both within and out-of-distribution using a variety of architectures, forms of pretraining, and fine-tuning datasets. We find that certain pretrained transformers can learn a same-different relation that generalizes with near perfect accuracy to out-of-distribution stimuli. Furthermore, we find that fine-tuning on abstract shapes that lack texture or color provides the strongest out-of-distribution generalization. Our results suggest that, with the right approach, deep neural networks can learn generalizable same-different visual relations. ## 1 Introduction Humans and a wide variety of non-human animals can easily recognize whether two objects are the same as each other or whether they are different (see Figure 1; Martinho III & Kacelnik, 2016; Christie, 2021; Gentner et al., 2021; Hespos et al., 2021). The abstract concept of equality is simple--even 3-month-old infants (Anderson et al., 2018) and honeybees (Giurfa, 2021) can learn to distinguish between displays of two same or two different objects. Some researchers have even argued that it serves amongst a number of other basic logical operations as a foundation for higher-order cognition and reasoning (Gentner & Goldin-Meadow, 2003; Gentner & Hoyos, 2017). However, in contrast to humans and animals, recent work has argued that deep neural networks struggle to learn this simple relation (Elis et al., 2015; Gulcehre & Bengio, 2016; Stabinger et al., 2016; Kim et al., 2018; Webb et al., 2020; Puebla & Bowers, 2022). This difficulty is surprising given that deep neural networks achieve human or superhuman performance on a wide range of seemingly more complex visual tasks, such as image classification (Krizhevsky et al., 2012; He et al., 2016), segmentation (Long et al., 2015), and generation (Ramesh et al., 2022). Figure 1: **Same or different?** For humans and a number of animal species, it is trivial to recognize that the image on the left contains two of the same objects, while the image on the right contains two different objects. Surprisingly, prior research has suggested that deep neural networks struggle to learn to discriminate between these images. Past attempts to evaluate same-different relations in neural networks have generally used the following methodology. Models are trained to classify images containing either two of the same or two different abstract objects, such as those in Figure 1. A model is considered successful if it is then able to generalize the same-different relation to unseen shapes after training. Convolutional neural networks (CNNs) trained from scratch fail to learn a generalizable relation, and tend to memorize training examples (Kim et al., 2018; Webb et al., 2020). However, deep neural networks have been shown to successfully generalize the same-different relation in certain contexts. 
This generalization is either limited to in-domain test stimuli (Funke et al., 2021; Puebla and Bowers, 2022) or requires architectural modifications that build in an inductive bias towards relational tasks at the expense of other visual tasks (Kim et al., 2018; Webb et al., 2020; 2023a,b; Kerg et al., 2022; Geiger et al., 2023; Altabaa et al., 2023). Given these limited successes, an open question remains: without architectural modifications that restrict model expressivity in general, can standard neural networks learn an abstract same-different relation that generalizes to both in- and out-of-distribution stimuli? Addressing this question requires going beyond past work in a number of ways. First, most previous studies test for in-distribution generalization--that is, they use test stimuli that are visually similar to the training stimuli. We believe that out-of-distribution generalization provides much stronger evidence that a model has learned a genuine abstract relation without relying on spurious features. Second, the existing literature uses training stimuli that demonstrate the same-different relation with either closed curves (as in Figure 1) or simple geometric shapes. It is unclear whether training on these types of objects is the most helpful for learning the relation versus more naturalistic objects that more closely resemble data seen during pretraining. Finally, most prior work focuses on convolutional architectures, but Vision Transformers (ViTs) (Dosovitskiy et al., 2020) adapted from the language domain (Vaswani et al., 2017) have recently emerged as a competitive alternative to CNNs on visual tasks. Self-attention, a key feature of ViTs, may provide an advantage when learning abstract visual relations--indeed, the ability to attend to and relate any part of a stimulus to any other part may be crucial for relational abstraction. In this article, we address these limitations and comprehensively investigate how neural networks learn and generalize the same-different relation from image data. Our main findings are as follows: * Fine-tuning pretrained ResNet and ViT models on the same-different relation enables both architectures to generalize the relation to unseen objects in the same distribution as the fine-tuning set. In particular, CLIP pretraining results in nearly \(100\%\) in-distribution test accuracy for ViT models, and close to that for ResNet models. (Section 3.1) * Under certain conditions, CLIP-pretrained ViTs can reliably generalize the same-different relation to out-of-distribution stimuli with nearly \(100\%\) accuracy. This suggests that these models may acquire a genuinely abstract concept of equality. (Section 3.2) * Different fine-tuning datasets lead to qualitatively different patterns of generalization--fine-tuning on more visually abstract objects (which do not contain color or texture) results in stronger out-of-distribution generalization, whereas fine-tuning on more naturalistic objects fails to generalize. (Section 3.2) * ViTs generally prefer to determine equality between objects by comparing their color or texture, only learning to compare shape when the fine-tuning dataset lacks color and texture information. However, we find that CLIP pretraining helps to mitigate this preference for color and texture. (Section 4) ## 2 Methods We operationalize the same-different task consistently with prior work e.g. Fleuret et al. (2011). 
Models are asked to perform a binary classification task on images containing either two of the same objects or two different objects (see the second and third rows of Figure 2). Models are either trained from scratch or fine-tuned on a version of this task with a particular type of stimuli (see Section 2.1 below). After training or fine-tuning, model weights are frozen, and validation and test accuracy scores are computed on sets of same-versus-different stimuli containing unfamiliar objects. These can either be the same type of objects that they were trained or fine-tuned on (in-distribution generalization) or different types of objects (out-of-distribution generalization). Thus, in order to attain high validation and test accuracy scores, the model must successfully generalize the learned same-different relation to novel objects. This type of generalization is more challenging than the standard image classification setting because of the abstract nature of what defines the classes--models must learn to attend to the relationship between two objects rather than learn to attend to any particular visual features of those objects in the training data. ### 2.1 Training and Evaluation Datasets We construct four same-versus-different datasets using four different types of objects (see Figure 2) ranging from abstract shapes to naturalistic images that are more familiar to pretrained models. We use the following objects to create these four datasets: 1. **Squiggles (SQU).** Randomly generated closed shapes following Fleuret et al. (2011).1 Most studies in the machine learning literature on the same-different relation use this dataset (Kim et al., 2018; Funke et al., 2021; Puebla and Bowers, 2022; Messina et al., 2022). 2. **Alphanumeric (ALPH).** Sampled handwritten characters from the Omniglot dataset (Lake et al., 2015). 3. **Shapes (SHA).** Textured and colored shapes from Tartaglini et al. (2022). Objects that match in shape, texture, and color are considered the same, whereas objects that differ along all three dimensions are considered different. 4. **Naturalistic (NAT).** Photographs of real objects on white backgrounds from Brady et al. (2008). These stimuli are the most similar to the data that the pretrained models see before fine-tuning on the same-different task. Footnote 1: The original method from Fleuret et al. (2011) produces closed contours with lines that are only one pixel thick. For our chosen image and object size, these shapes become very difficult to see. We correct this by using a dilation algorithm to darken and thicken the lines to a width of three pixels. Each stimulus is an image that contains two objects that are either the same or different. We select a total of 1,600 unique objects for each dataset. These objects are split into disjoint sets of 1,200, 300, and 100 to form the training, validation, and test sets respectively. Unless otherwise specified, the training, validation, and test sets each contain 6,400 stimuli: 3,200 same and 3,200 different. To construct a given dataset, we first generate all possible pairs of same or different objects--we consider two objects to be the same if they are the same on a pixel level.2 Figure 2: **Example stimuli from all four datasets. Each column represents one of the four same-versus-different datasets as indicated by the label beneath the stimuli.
The top row shows an example object that is used to form the stimuli that comprise each dataset, while the second and third rows show an example “same” vs. “different” stimulus, respectively.** Next, we randomly select a subset of the possible object pairs to create the stimuli such that each unique object is in at least one pair. Each object is resized to 64x64 pixels, and then a pair of these objects is placed over a 224x224 pixel white background in randomly selected, non-overlapping positions. We consider two objects in a specific placement as one unique stimulus--in other words, a given pair of objects may appear in multiple images but in different positions (but with all placements of the same two objects being confined to either the training, validation, or test set). All object pairs appear the same number of times to ensure that each unique object is equally represented. ### 2.2 Models and Training Details We evaluate one convolutional architecture, **ResNet-50** (He et al., 2016), and one Transformer architecture, **ViT-B/16** (Dosovitskiy et al., 2020). We also evaluate three pretraining procedures: (1) **randomly initialized**, in which all model parameters are randomly initialized (Kaiming normal for ResNet-50 and truncated normal for ViT-B/16) and models are trained from scratch, (2) **ImageNet,** in which models are pretrained in a supervised fashion on a large corpus of images (ImageNet-1k for ResNet-50 and ImageNet-21k for ViT-B/16; Deng et al., 2009) with category labels such as "barn owl" or "airplane," and (3) **CLIP** (Radford et al., 2021), in which models learn an image-text contrastive objective where the cosine similarity between an image embedding and its matching natural language caption embedding is maximized. Unlike ImageNet labels, CLIP captions contain additional linguistic information beyond category information (e.g. "a photo of a barn owl in flight"). To adapt these models for the same-different task, we add a randomly initialized linear classifier on top of the visual backbone of the original architecture. Each model is trained from scratch or fine-tuned for 70 epochs with a batch size of 128, updating all parameters. We use a binary cross-entropy loss. For each architecture and pretraining combination, we perform hyperparameter tuning via grid search over the initial learning rate (1e-4, 1e-5, 1e-6, 1e-7, 1e-8), learning rate scheduler (exponential, ReduceLROnPlateau), and optimizer (SGD, Adam, AdamW). We select the best performing training configuration from the grid search according to in-distribution validation accuracy, and then train a model with those hyperparameters five times with different random seeds. We report the median test results across those five seeds. ## 3 Generalization to Unseen Objects ### 3.1 In-Distribution Generalization We first measure the performance of each model on test data containing the same types of objects used to train or fine-tune the model; e.g. models fine-tuned on pairs of handwritten characters are then tested on handwritten characters that were not seen during training. We refer to this as the in-distribution performance of the model. The starred (\(*\)) result in Figure 3 shows the in-distribution median test accuracy of randomly-initialized ResNet-50 models trained on the Squiggles dataset, which contains the same type of closed contours used by much of the prior work on the same-different relation (Fleuret et al., 2011; Kim et al., 2018; Funke et al., 2021; Puebla and Bowers, 2022; Messina et al., 2022).
Confirming the primary findings from prior work, these models do not attain above chance level test accuracy. The same pattern holds for randomly initialized ViT-B/16 models. However, as the rest of Figure 3 shows, pretrained models exhibit substantially improved in-distribution accuracy compared to randomly initialized models across all four datasets. In particular, models pretrained with CLIP demonstrate the largest improvements, attaining nearly \(100\%\) test accuracy irrespective of fine-tuning dataset. Even without any fine-tuning, CLIP features appear to be highly useful for the same-different task--linear probes trained to do the same-different task using CLIP ViT-B/16 embeddings of stimuli without any fine-tuning achieve between \(80\%\) and \(100\%\) median in-distribution test accuracy depending on the dataset (Table 10, Appendix A.5). Differences in performance can also be observed between architectures, with ViT-B/16 models consistently outperforming ResNet-50 after pretraining.3 Another main finding is that the two visually abstract, shape-based datasets (Squiggles and Alphanumeric) appear to pose more of a challenge to models than the Shapes and Naturalistic datasets--models attain noticeably higher in-distribution accuracy on the latter two across architectures and pretraining methods (although the effect is small for CLIP-pretrained models). This difference may be due to the color and texture information that is available in these datasets, which provides additional dimensions over which objects can be compared. We explore the possibility that some models find it easier to evaluate equality using color and texture information in addition to or instead of shape information in Section 4. Figure 3: **In-distribution test accuracy by architecture and pretraining method. Bars show median accuracy over 5 runs (individual points also shown), with the bar color denoting pretraining type and the x-axis denoting the dataset used for fine-tuning. See Section 2.1 for dataset descriptions, Section 2.2 for model details, and Figure 2 for visual examples. The starred (\(*\)) result is a replication of findings from prior work showing that CNNs trained from scratch on stimuli like the images in Figure 1 attain chance-level test accuracy. The double-starred (\(**\)) result mirrors Funke et al. (2021) and Puebla & Bowers (2022), who show that ImageNet-pretrained CNNs attain substantially higher in-distribution test accuracy relative to the same architectures trained from scratch.** ### Out-of-Distribution Generalization The previous section showed that pretrained models can generalize to unseen, in-distribution objects. However, if a model learns a truly abstract notion of same-different, it should generalize the same-different relation to any two objects regardless of their particular visual features. Thus, model performance on stimuli that are substantially different from training stimuli is a stronger measure of abstraction. We therefore measure test accuracy for each model across all four datasets, yielding one in-distribution score and three out-of-distribution (OOD) scores per model. Table 1 shows median test accuracy over five seeds for CLIP-pretrained models; full generalization tables for all pretraining styles and architectures can be found in Appendix A.2. Overall, CLIP ViT-B/16 models fine-tuned on the Squiggles task exhibit the strongest OOD generalization, achieving \(>\)95% median test accuracy on the three out-of-distribution datasets. It is worth noting that both this model and CLIP ResNet-50 fine-tuned on the Alphanumeric task (the model with the second best OOD generalization performance) exhibit some degree of sensitivity to the random seed used during fine-tuning: most random seeds result in nearly \(100\%\) OOD generalization for ViT or \(>\)80% for ResNet across all datasets, while some seeds result in substantially lower performance (1/5 seeds for ViT and 2/5 for ResNet). No other model configurations exhibit this bimodal behavior. See Appendix A.8 for details. As in the previous section, models fine-tuned on objects with visually abstract shape features only (Squiggles and Alphanumeric) behave differently than those fine-tuned on datasets containing objects with shape, color, and texture features (Shapes and Naturalistic). The Squiggles and Alphanumeric models generally attain high OOD test accuracy. On the other hand, models fine-tuned on
The Squiggles and Alphanumeric models generally attain high OOD test accuracy. On the other hand, models fine-tuned on Figure 3: **In-distribution test accuracy by architecture and pretraining method. Bars show median accuracy over 5 runs (individual points also shown), with the bar color denoting pretraining type and the x-axis denoting the dataset used for fine-tuning. See Section 2.1 for dataset descriptions, Section 2.2 for model details, and Figure 2 for visual examples. The starred (\(*\)) result is a replication of findings from prior work showing that CNNs trained from scratch on stimuli like the images in Figure 1 attain chance-level test accuracy. The double-starred (\(**\)) result mirrors Funke et al. (2021) and Puebla & Bowers (2022), who show that ImageNet-pretrained CNNs attain substantially higher in-distribution test accuracy relative to the same architectures trained from scratch.** the Shapes or Naturalistic datasets generalize well to each other but struggle to generalize to the Squiggles and Alphanumeric tasks. Note that some of this effect can be attributed to miscalibrated bias, but not the entire effect--see Appendix A.3 for details. Another way to understand the generalization pattern in Table 1 is that the more "challenging" a dataset is to generalize the same-different relation to, the more effective it is as a fine-tuning dataset for inducing out-of-distribution generalization. For example, CLIP ViT-B/16 models fine-tuned on datasets other than Squiggles attain a median test accuracy of only \(51.8\%\) on the Squiggles task on average, whereas CLIP ViT-B/16 fine-tuned on Squiggles attains an average OOD test accuracy of \(97.8\%\). On the other hand, the Shapes dataset is easy for models fine-tuned on other datasets to generalize to (\(99.5\%\) accuracy on average), but CLIP ViT fine-tuned on that "easier" dataset attains an average OOD test accuracy of only \(68.5\%\). This pattern of Squiggles being more "difficult" to generalize to persists across architectures and pretraining methods (Appendix A.2). We further study why fine-tuning on different datasets results in different generalization behaviors by computing the average cosine similarity between objects in a given dataset using pretrained CLIP embeddings (Table 2). This value provides information about the visual variation in each dataset through the lens of a specific model: a higher number means that stimuli in that dataset are generally embedded more closely together by that model. Before fine-tuning, pretrained CLIP models embed Squiggles stimuli more closely together than stimuli from other datasets, potentially explaining the difficulty of that dataset as a test of OOD generalization. We also note what seems to be a correlation between "closeness" of stimuli in a model's embedding space and ability of models fine-tuned on that dataset to generalize OOD. Given that random noise is embedded even more closely together than Squiggles stimuli, we fine-tune models on the same-different task comparing patches of random noise and measure their OOD generalization in Appendix A.7. We find that models fine-tuned on noise exhibit weaker OOD generalization than models fine-tuned on Squiggles, indicating that "closeness" of stimuli is not a perfect correlate of OOD generalization and is only part of the story. 
**CLIP ResNet-50**

| Train \(\downarrow\) / Test \(\rightarrow\) | SQU | ALPH | SHA | NAT | Avg. |
| --- | --- | --- | --- | --- | --- |
| SQU | **97.7** | 82.9 | 86.9 | 82.0 | 83.9 |
| ALPH | 82.1 | **97.4** | 92.8 | 91.8 | 88.9 |
| SHA | 56.0 | 78.1 | **98.1** | 96.1 | 76.7 |
| NAT | 50.1 | 59.3 | 93.4 | **97.3** | 67.6 |
| Avg. | 62.7 | 73.4 | 91.1 | 90.0 | |

**CLIP ViT-B/16**

| Train \(\downarrow\) / Test \(\rightarrow\) | SQU | ALPH | SHA | NAT | Avg. |
| --- | --- | --- | --- | --- | --- |
| SQU | **99.6** | 97.7 | 99.1 | 96.7 | 97.8 |
| ALPH | 55.3 | **99.4** | 99.6 | 91.2 | 82.0 |
| SHA | 50.0 | 55.4 | **100** | 100 | 68.5 |
| NAT | 50.0 | 68.0 | 99.8 | **100** | 72.6 |
| Avg. | 51.8 | 73.7 | 99.5 | 95.9 | |

Table 1: **Out-of-distribution (OOD) test accuracy for CLIP models fine-tuned on each dataset.** Rows indicate the dataset that models are fine-tuned on, while columns indicate the test dataset. Each cell is the median performance over five random seeds. The rightmost column labeled “Avg.” is the row-wise average of accuracy scores across OOD test sets (i.e. off-diagonal values), which indicates how well a model fine-tuned on a given dataset is able to generalize to other datasets. The bottom row labeled “Avg.” is the column-wise average across off-diagonal values, indicating how difficult it is for models fine-tuned on other datasets to generalize to the given dataset. Note that the **bolded** diagonals are the red bars in Figure 3. OOD generalization results for all model architectures and pretraining styles can be found in Appendix A.2; Appendix A.3 shows median AUC-ROC scores.

| Dataset \(\downarrow\) | **ResNet-50** | **ViT-B/16** |
| --- | --- | --- |
| noise | 0.992 | 0.993 |
| SQU | 0.929 | 0.940 |
| ALPH | 0.881 | 0.889 |
| SHA | 0.855 | 0.861 |
| NAT | 0.788 | 0.805 |

Table 2: **Average pairwise cosine similarity between CLIP embeddings of training stimuli within each dataset.** Because \(n=6,400\) for each dataset, averages are computed over approximately 200 pairs. We extract CLIP embeddings _before_ fine-tuning on the same-different task. For similarities afterwards, see Appendix A.6. ## 4 Examination of Inductive Biases ### Grayscale and Masked Objects What features are most important for learning a genuinely abstract same-different relation, and do vision models have inductive biases towards these useful features? Previous work has claimed that CNN models trained on ImageNet are often biased towards texture over shape (Geirhos et al., 2019; Hermann et al., 2020), which seems consistent with results from Kim et al. (2018) showing low performance for CNNs trained from scratch on abstract shapes. Based on the results in the previous section, shape information appears to be the most important for effective OOD generalization, which motivates us to investigate whether models have an inductive bias towards it for the visual same-different task.
One way of examining this is by training models on one of three variants of the Shapes dataset: objects are either kept the same (Figure 4a, "Colored"), grayscaled to preserve texture but remove color (Figure 4a, "Grayscale"), or completely masked over in gray to remove both texture and color (Figure 4a, "Masked"). If a model is biased towards color, we would expect performance to drop on the Grayscale and Masked datasets, and if it is biased towards texture, we would expect performance to suffer on the Masked dataset. Only a model biased towards shape would generalize effectively to all three settings. Figure 5 shows the results of this experiment for randomly initialized and CLIP-pretrained models (ImageNet results in Appendix B.1). Figure 5A shows the test performance of three versions of randomly initialized ViT-B/16, trained on either Color, Grayscale, or Masked versions of the Shapes dataset (Figure 4a) and then tested on novel objects from each of those distributions. From this subplot, we can see that ViT-B/16 trained from scratch can only achieve high in-distribution accuracy (shown as hatched bars) for Color Shapes (\(92.9\%\)); the hatched gray and dark gray bars representing in-distribution accuracy for Grayscale and Masked Shapes are much lower (\(78.8\%\) and \(53.5\%\) respectively). Although ViTs trained from scratch on the Color Shapes dataset attain high in-distribution accuracy, performance drops to \(66.2\%\) and \(62.6\%\) when generalizing out-of-distribution to Grayscale and Masked Shapes, indicated by the two lower gray bars beside the hatched green bar. This gap suggests that ViT-B/16 may only learn to compare object _color_ when it is trained from scratch on the Color Shapes dataset, leading to greater errors when tested on datasets that do not contain color. Figure 5B shows that CLIP pretraining weakens this inductive bias towards color, allowing for high in-distribution accuracy and near-perfect out-of-distribution generalization when trained on any of the three modified Shapes datasets. When ResNet-50 is trained from scratch, we do not see evidence of an inductive bias towards color or texture, since bars of all colors in Figure 5C are at roughly equal heights. However, this bias reappears after pretraining (albeit to a lesser extent than for randomly-initialized ViT), with a \(7.3\%\) gap between in-distribution and Masked OOD accuracy for CLIP ResNet-50 fine-tuned on Color Shapes (Figure 5D). Figure 4: **Examples of stimuli used to test inductive biases. Figure (a) shows examples of objects from the three versions of the Shapes dataset used to produce results in Figure 5. Figures (b) and (c) are examples of stimuli with conflicting signals used in Section 4.2, where either color is the same while texture and shape are different, or color and texture are the same while shape is different.** ### Dissociating Color, Texture, and Shape Results from Figure 5 suggest that some models learn to rely on certain features more than others to differentiate between objects in an image. We run an additional experiment to verify these claims by creating new stimuli for which color, texture, and shape send conflicting signals: for example, two objects in an image might be the same color but have different textures and shapes (Figure 4b). We then evaluate each model from Figure 5 on every possible combination of conflicting signals according to experiment details in Appendix B.2 to identify which features each model might be more biased towards. 
Median results are reported across five random seeds for the same set of hyperparameters. First, our results confirm that randomly-initialized ViT-B/16 is heavily biased towards color when trained on the Color Shapes dataset. When trained completely from scratch on Color Shapes, ViT-B/16 will predict "same" for any stimuli in which the colors of the two objects are the same, even if color is the only similarity between the two objects (classifying \(85.6\%\) of stimuli like Figure 4b as "same"). In contrast, CLIP ViT-B/16 fine-tuned on the same Color Shapes data classifies only \(12.3\%\) of stimuli like Figure 4b as "same." However, our results also show that even CLIP ViT-B/16, which appears to be unbiased, exhibits different patterns of behavior depending on the dataset it is fine-tuned on. When tested on stimuli where color and texture are the same and shape is different (Figure 4c), CLIP ViT-B/16 fine-tuned on Color Shapes classifies \(88.8\%\) of these stimuli as "same." The same model fine-tuned on Masked Shapes classifies \(98.2\%\) of these images as "different." All results can be found in Appendix B.2. A similar phenomenon can be observed in pretrained ResNet-50 models. For example, CLIP ResNet-50 fine-tuned on Masked Shapes exhibits a strong bias towards shape, classifying \(\geq 89.8\%\) of objects with matching shapes as "same" while classifying \(\geq 93.3\%\) of objects with mismatching shapes as "different." Thus, even though pretraining can mitigate the strength of ViT inductive biases, all pretrained architectures still exhibit specific, understandable biases that are determined by the features available in their training data. Moreover, these distinct patterns of generalization provide a glimpse into how these networks are computing same-versus-different judgments. Figure 5: **Test accuracy for ViT-B/16 fine-tuned on one version of the Shapes dataset – Color, Grayscale, or Masked – and tested on all three datasets.** See Figure 4a for example objects from each of the three datasets. Bars are grouped by fine-tuning dataset as indicated along the x-axis. Hatched bars indicate in-distribution accuracy. Median accuracy over five seeds is reported with individual runs plotted as points over each bar. See Appendix B.1 for ImageNet and ResNet results. ## 5 Related Work **Prior work on the same-different relation.** Learning the same-different relation when the two objects being compared occupy the same image appears to be the most challenging setting for deep neural networks. Most closely related to our work is Puebla & Bowers (2022). Their setup is very similar to ours in a few respects: they define the same-different task identically to us, fine-tuning ImageNet pretrained ResNet-50 models on the task using stimuli from Fleuret et al. (2011) (which are nearly identical to our Squiggles stimuli), and testing OOD generalization on nine evaluation sets. They find that their models fail to generalize out-of-distribution and draw the conclusion that current CNNs are unable to learn the relation. Replicating their setup, we find that the difference in our results is due to the differing architectures and pretraining methods we investigated. Our ImageNet ResNet-50 model fine-tuned on Squiggles also struggles to generalize to the evaluation sets used in Puebla & Bowers (2022), but our CLIP ViT-B/16 model fine-tuned on Squiggles generalizes perfectly or nearly perfectly to seven out of nine of these sets. See Appendix A.4 for details. Additionally, Funke et al.
(2021) report that ImageNet pretrained ResNet-50 models fine-tuned on stimuli from Fleuret et al. (2011) can generalize the relation to in-distribution test stimuli in this setting, but OOD generalization is not tested. The double-starred (\(**\)) bar in Figure 3 replicates their results. Messina et al. (2022) show that a recurrent, hybrid CNN+ViT can attain high in-distribution test accuracy when objects occupy the same image, while Webb et al. (2023b) demonstrate success using slot attention to segment the objects. Otherwise, successful generalization of the same-different relation with deep neural networks has been limited to a setting where objects are segmented into two separate inputs by humans and separately passed into a neural network (Kim et al., 2018; Webb et al., 2020; Kerg et al., 2022; Altabaa et al., 2023; Geiger et al., 2023). **Abstract relation learning.** More generally, our work relates to a larger body of work concerned with the abilities of deep neural networks to learn abstract relations. The ability to abstract from sparse sensory data is theorized to be fundamental to human intelligence (Tenenbaum et al., 2011; Ho, 2019) and is strengthened by the acquisition of language (Gentner & Hoyos, 2017). In contrast, standard deep neural networks often struggle to learn relational reasoning from sensory data alone, even when the training corpus is very large (Mitchell, 2021; Davidson et al., 2023). This is often pointed to as a key discrepancy between human and machine visual systems. ## 6 Discussion and Conclusion Previous work has argued that deep neural networks struggle to learn the same-different relation between two objects in the same image (Kim et al., 2018; Puebla & Bowers, 2022), but the scope and nature of these difficulties are not fully understood. In this article, we explored a range of architectures, pretraining styles, and fine-tuning datasets in order to thoroughly investigate the ability of neural networks to learn and generalize the same-different relation. Some of our model configurations are able to generalize the relation across all of our out-of-distribution evaluation datasets; the best model is CLIP-pretrained ViT fine-tuned on the Squiggles same-different task. Across five random seeds, this model yields a median test accuracy of nearly \(100\%\) on every evaluation dataset we use. The existence of such a model suggests that deep neural networks can learn generalizable representations of the same-different relation, at least for the tests we examined. There are a number of possible reasons why CLIP-pretrained Vision Transformers exhibit the strongest out-of-distribution generalization. CLIP pretraining may be helpful because of the diversity of the dataset, which Fang et al. (2022) argue is key in the robust generalization of CLIP models in other settings. Another hypothesis is that linguistic supervision from captions containing phrases like "same," "different," or "two of" (which ImageNet models would have no exposure to) helps models to separate same and different objects in their visual embedding spaces, an idea supported by the results of our linear probe experiments (Appendix A.5). ViTs may perform the best on the same-different task because of their larger receptive field size; CNNs can only compare distant image patches in deeper layers, whereas ViTs can compare any image patch to any other as early as the first self-attention layer.
Thus, ViTs may be able to integrate complex shape information and compare individual objects to each other more efficiently than CNNs. What mechanisms enable certain models to generalize the same-different relation? It is possible that models learn an internal circuit (e.g. Nanda et al. (2023)) in which they segment two objects and compute their equality in embedding space, implicitly implementing the components of relational architectures from earlier works that explicitly separate objects. This would amount to true abstraction and would theoretically enable models to generalize to any distribution of same-different stimuli. Indeed, CLIP ViT-B/16 also generalizes with near perfect accuracy to multiple evaluation sets used in other work (see Appendix A.4). Since our results strongly suggest that neural networks _can_ learn generalizable same-different relations, the next step for future work is to investigate the internal workings of successful models. We believe that further progress in understanding abstraction ought to come from circuit-level investigations into the same-different relation, as well as other abstract visual relations. Such investigations may reveal additional relational reasoning capabilities that have long been considered out of reach for standard deep neural networks.
2303.06828
Two-step Band-split Neural Network Approach for Full-band Residual Echo Suppression
This paper describes a Two-step Band-split Neural Network (TBNN) approach for full-band acoustic echo cancellation. Specifically, after linear filtering, we split the full-band signal into wide-band (16KHz) and high-band (16-48KHz) for residual echo removal with lower modeling difficulty. The wide-band signal is processed by an updated gated convolutional recurrent network (GCRN) with U$^2$ encoder while the high-band signal is processed by a high-band post-filter net with lower complexity. Our approach submitted to ICASSP 2023 AEC Challenge has achieved an overall mean opinion score (MOS) of 4.344 and a word accuracy (WAcc) ratio of 0.795, leading to the 2$^{nd}$ (tied) in the ranking of the non-personalized track.
Zihan Zhang, Shimin Zhang, Mingshuai Liu, Yanhong Leng, Zhe Han, Li Chen, Lei Xie
2023-03-13T03:07:01Z
http://arxiv.org/abs/2303.06828v1
# Two-Step Band-Split Neural Network Approach for Full-Band Residual Echo Suppression ###### Abstract This paper describes a Two-step Band-split Neural Network (TBNN) approach for full-band acoustic echo cancellation. Specifically, after linear filtering, we split the full-band signal into wide-band (16KHz) and high-band (16-48KHz) for residual echo removal with lower modeling difficulty. The wide-band signal is processed by an updated gated convolutional recurrent network (GCRN) with U\({}^{2}\) encoder while the high-band signal is processed by a high-band post-filter net with lower complexity. Our approach submitted to ICASSP 2023 AEC Challenge has achieved an overall mean opinion score (MOS) of 4.344 and a word accuracy (WAcc) ratio of 0.795, leading to the 2\({}^{nd}\) (tied) in the ranking of the non-personalized track. Zihan Zhang\({}^{1}\), Shimin Zhang\({}^{1}\), Mingshuai Liu\({}^{1}\), Yanhong Leng\({}^{2}\), Zhe Han\({}^{2}\), Li Chen\({}^{2}\), Lei Xie\({}^{1,*}\)\({}^{1}\)Audio, Speech and Language Processing Group (ASLP@NPU), School of Computer Science, Northwestern Polytechnical University, Xi'an, China \({}^{2}\)ByteDance, China Acoustic echo cancellation, noise suppression, band-split, two-step network. ## 1 Introduction The 4th acoustic echo cancellation (AEC) challenge [1] is a flagship event of the ICASSP 2023 signal processing grand challenge, with the aim of benchmarking AEC techniques in real-time full-band (48KHz) speech communication. In this challenge, our team has submitted a _hybrid_ approach that combines a linear filter with a neural post-filter to the non-personalized AEC track (i.e., without using target speaker embedding). Since noise-like components usually dominate the higher bands of speech and structured harmonics are mainly found in wide-band signals, we propose a two-step band-split neural network (TBNN) approach that handles full-band residual echo removal on the wide-band (16KHz) and the high-band (16-48KHz) separately, as an extension of our previous work [2] to better model full-band signals with low complexity. Specifically, the wide-band post-filter (WBPF) is based on the gated convolutional recurrent network (GCRN) [2] but with an upgraded U\({}^{2}\) encoder [3] for better latent feature extraction. We also redesign the data simulation method and the loss function to accommodate full-band signals. According to the results, our system has ranked 2nd (tied) in the non-personalized track with an overall mean opinion score (MOS) of 4.344 and a word accuracy (WAcc) ratio of 0.795. ## 2 Proposed Method ### Problem formulation For a typical full-duplex communication system consisting of a microphone and a loudspeaker, the signal model can be expressed as \[d(n)=s(n)+r(n)+v(n)+z(n) \tag{1}\] \[z(n)=h(n)*\mathcal{F}\{x(n)\} \tag{2}\] where the microphone signal \(d(n)\) is composed of the near-end speech signal \(s(n)\), which may include early reflections, late reverberation \(r(n)\), additive noise \(v(n)\), and the echo signal \(z(n)\), where \(n\) is the sampling index. The echo signal \(z(n)\) is obtained by convolving the echo path \(h(n)\) with the nonlinearly distorted reference signal \(x(n)\), as shown in Eq. (2), where \(\mathcal{F}\) refers to the nonlinear distortions and \(*\) refers to convolution. An AEC system aims to cancel \(z(n)\) from \(d(n)\) given \(x(n)\). For complex real-life scenarios, the noise \(v(n)\) and reverberation \(r(n)\) also need to be suppressed.
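To make Eqs. (1)-(2) concrete, here is a small Python sketch that synthesizes a microphone signal from made-up components. The hard-clipping nonlinearity, the decaying random echo path, and all signal levels are illustrative assumptions on our part (the paper does not prescribe a specific \(\mathcal{F}\)), and late reverberation \(r(n)\) is omitted:

```python
import numpy as np

def simulate_microphone(s, x, h, v, clip_level=0.6):
    """d(n) = s(n) + v(n) + z(n) with z(n) = h(n) * F{x(n)} (Eqs. 1-2)."""
    x_nl = np.clip(x, -clip_level, clip_level)   # F{x}: loudspeaker distortion
    z = np.convolve(x_nl, h)[: len(s)]           # echo through the room path
    return s + v + z

rng = np.random.default_rng(0)
fs = 48000                                       # full-band sampling rate
s = rng.standard_normal(fs)                      # stand-in for near-end speech
x = rng.standard_normal(fs)                      # far-end reference signal
h = rng.standard_normal(2048) * np.exp(-np.arange(2048) / 300.0)  # echo path
v = 0.01 * rng.standard_normal(fs)               # additive noise
d = simulate_microphone(s, x, h, v)              # microphone observation
```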
We adopt a time delay estimation (TDE) module based on sub-band cross-correlation to align the reference signal. Our linear filter uses a sub-band adaptive filtering method based on the NLMS algorithm to estimate the linear echo signal \(y\) and the error signal \(e\). ### TBNN post-filter The TBNN post-filter takes \(d\), \(e\), and \(y\) after a short-time Fourier transform (STFT) as input. The complex-valued input spectra are compressed by a factor of 0.5, stacked by real and imaginary parts, and divided into the wide-band signal \(I_{\text{WB}}\) and the high-band signal \(I_{\text{HB}}\). Considering the complexity of direct full-band modeling and the fact that the high-band carries less structural information, we use a two-step band-split approach to model full-band signals. The wide-band post-filter (WBPF) first uses spectral mapping to estimate the wide-band output \(\hat{S}_{\text{WB}}\). Then the high-band post-filter (HBPF) uses \(I_{\text{HB}}\) and the prior information \(\hat{S}_{\text{WB}}\) to predict the complex-valued mask \(M_{\text{HB}}\), which is multiplied by \(I_{\text{HB}}\) to obtain the high-band output \(\hat{S}_{\text{HB}}\). The backbone of the WBPF module is based on our previous gated convolutional recurrent network (GCRN) [2]. Differently, however, the encoder consists of 5 U\({}^{2}\)-Encoder layers as shown in Fig. 1(b). U\({}^{2}\)-Net [3] is a two-level nested U-structure designed to capture more contextual information from different scales (through Residual U-blocks) without significantly increasing the computational cost. Specifically, each U\({}^{2}\)-Encoder layer is composed of a gated convolution (GConv) layer, a batch-norm (BN) layer, a PReLU and a UNet-block. The number of input and output channels remains the same. The FTLSTM layer serves as a bottleneck for temporal modeling, as defined in [4]. The decoder consists of 5 decoder blocks as shown in Fig. 1(c), where TrConv refers to the transposed 2-D convolution (Transpose-Conv2d) layer, and \(\otimes\) refers to multiplication. The skip connection (implemented with a 1x1 convolution) is applied between the encoder and decoder. We also add a voice activity detection (VAD) module to avoid over-suppression of near-end speech [2]. The HBPF module uses three 2D-Conv modules to extract high-band features. Each 2D-Conv module consists of a 2D convolution (Conv2d) layer, an ELU layer, a BN layer and a dropout layer with a 0.25 dropout rate [5]. The wide-band output serves as prior information and is concatenated with the high-band features after feature dimension alignment. Then we use a GRU layer and a Conv2d layer to predict the real/imaginary mask, which is applied to the high-band input to obtain the high-band output.
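As a rough illustration of the front-end just described, the following Python sketch compresses a full-band spectrum by a power of 0.5 and splits it into wide-band and high-band bins. The frame size, the 8 kHz split frequency (the Nyquist frequency of a 16 kHz wide-band signal), and the use of `scipy` are our own assumptions, not the paper's exact configuration:

```python
import numpy as np
from scipy.signal import stft

def compress_and_split(sig, fs=48000, n_fft=960, split_hz=8000, alpha=0.5):
    """Return stacked real/imag wide-band and high-band network inputs."""
    f, _, X = stft(sig, fs=fs, nperseg=n_fft)
    Xc = np.abs(X) ** alpha * np.exp(1j * np.angle(X))   # 0.5 compression
    wb, hb = Xc[f <= split_hz], Xc[f > split_hz]         # band split
    return np.stack([wb.real, wb.imag]), np.stack([hb.real, hb.imag])

# Applied to d, e and y, the three wide-band (resp. high-band) tensors would
# be concatenated along the channel axis to form I_WB (resp. I_HB).
sig = np.random.default_rng(0).standard_normal(48000)   # placeholder signal
i_wb, i_hb = compress_and_split(sig)
```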
2307.04510
An analysis of least squares regression and neural networks approximation for the pricing of swing options
Least Squares regression was first introduced for the pricing of American-style options, but it has since been expanded to include swing options pricing. The swing options price may be viewed as a solution to a Backward Dynamic Programming Principle, which involves a conditional expectation known as the continuation value. The approximation of the continuation value using least squares regression involves two levels of approximation. First, the continuation value is replaced by an orthogonal projection over a subspace spanned by a finite set of $m$ squared-integrable functions (regression functions) yielding a first approximation $V^m$ of the swing value function. In this paper, we prove that, with well-chosen regression functions, $V^m$ converges to the swing actual price $V$ as $m \to + \infty$. A similar result is proved when the regression functions are replaced by neural networks. For both methods (least squares or neural networks), we analyze the second level of approximation involving practical computation of the swing price using Monte Carlo simulations and yielding an approximation $V^{m, N}$ (where $N$ denotes the Monte Carlo sample size). Especially, we prove that $V^{m, N} \to V^m$ as $N \to + \infty$ for both methods and using Hilbert basis in the least squares regression. Besides, a convergence rate of order $\mathcal{O}\big(\frac{1}{\sqrt{N}} \big)$ is proved in the least squares case. Several convergence results in this paper are based on the continuity of the swing value function with respect to cumulative consumption, which is also proved in the paper and has not been yet explored in the literature before for the best of our knowledge.
Christian Yeo
2023-07-10T12:12:42Z
http://arxiv.org/abs/2307.04510v1
An analysis of least squares regression and neural networks approximation for the pricing of swing options ###### Abstract Least Squares regression was first introduced for the pricing of American-style options, but it has since been expanded to include swing options pricing. The swing options price may be viewed as a solution to a Backward Dynamic Programming Principle, which involves a conditional expectation known as the continuation value. The approximation of the continuation value using least squares regression involves two levels of approximation. First, the continuation value is replaced by an orthogonal projection over a subspace spanned by a finite set of \(m\) squared-integrable functions (regression functions) yielding a first approximation \(V^{m}\) of the swing value function. In this paper, we prove that, with well-chosen regression functions, \(V^{m}\) converges to the swing actual price \(V\) as \(m\to+\infty\). A similar result is proved when the regression functions are replaced by neural networks. For both methods (least squares or neural networks), we analyze the second level of approximation involving practical computation of the swing price using Monte Carlo simulations and yielding an approximation \(V^{m,N}\) (where \(N\) denotes the Monte Carlo sample size). Especially, we prove that \(V^{m,N}\to V^{m}\) as \(N\to+\infty\) for both methods, using a Hilbert basis in the least squares regression. Besides, a convergence rate of order \(\mathcal{O}\big{(}\frac{1}{\sqrt{N}}\big{)}\) is proved in the least squares case. Several convergence results in this paper are based on the continuity of the swing value function with respect to cumulative consumption, which is also proved in the paper and, to the best of our knowledge, has not yet been explored in the literature. _Keywords - Swing options, stochastic control, least squares regression, convergence analysis, neural networks approximation, dynamic programming equation._ ## Introduction Swing contracts [14, 15] are commonly used in commodity derivatives trading to manage commodity supply. These contracts allow the holder to purchase amounts of energy on specific dates (called "exercise dates"), subject to constraints. The pricing [11, 1, 2, 13, 15] of such a contract is a challenging problem that involves finding a vector that represents the amounts of energy purchased through the contract, while maximizing the value gained. This problem is doubly constrained (exercise date constraints and volume constraints) and its pricing has been addressed using two groups of methods in the literature. One group concerns methods that are based on the Backward Dynamic Programming Principle (BDPP) [1, 2], which determines the swing price backwardly from the expiry of the contract until the pricing date. In the BDPP-based approach, at each exercise date, the swing value is determined as the maximum of the current cash flows plus the continuation value, which is the (conditional) expected value of future cash flows. To compute the continuation value, nested simulations may be used, but this can be time-consuming. Alternatively, an orthogonal projection over a vector space spanned by a finite set of squared-integrable functions may be used, based on the idea of the "least squares regression" method introduced by Longstaff and Schwartz [11].
This method was initially introduced for the pricing of American-style options [14, 15, 20] and has since been used for some stochastic control problems [1, 17] and especially in the context of swing contract pricing [1]. Despite being widely used by practitioners, in the context of swing pricing, this method has received little study in terms of convergence. The paper [1] analyzes the convergence of general regression methods in the context of stochastic control problems. While swing contracts pricing is, by nature, a stochastic control problem, such contracts involve specificities whose analysis goes beyond the scope covered in the paper [1]. Note that this paper focuses on the pricing of swing contracts within the firm constraints framework, where the contract holder cannot violate volume constraints. In this framework, the set of admissible controls at each exercise date depends on the cumulative consumption up to that date. Additionally, in the BDPP-based approaches, the optimal control at one exercise date depends on the estimated value of the swing contract at the next exercise date, which in turn is defined as a supremum. Thus, the error propagation through the BDPP meets a uniform convergence issue. Taking into account the latter fact, to meet the framework studied in [1], cumulative consumption may need to be included as a state variable along with the Markov process driving the underlying asset price. However, this can be challenging to implement as it requires knowing the joint distribution of the underlying asset price and the cumulative consumption. This difficulty is perceptible in [1] where, in the context of storage pricing (contracts whose pricing is close to that of swing contracts), the authors have used uniform sampling for cumulative consumption as a proxy. Furthermore, in [1] strong assumptions have been made, such as the boundedness of regression functions, which do not hold in practice. Therefore, in this paper, we aim to analyze the convergence of least squares regression for the specific problem of swing options pricing. Besides, we do not restrict ourselves to the least squares method and analyze an alternative method which consists in approximating the continuation value, not by an orthogonal projection, but using neural networks. Both methods for approximating the swing contract price are analyzed in a common framework. To achieve this, we proceed as in previous works [1, 1] by proving some convergence results in two main steps. We first replace the continuation value by either an orthogonal projection over a well-chosen basis of regression functions or by a neural network. We demonstrate that the resulting swing value function, as an approximation of the actual one, converges towards the actual one as the number of functions in the regression basis or the number of units per hidden layer (in the neural network) increases. Furthermore, in practice, a Monte Carlo simulation has to be performed. This is needed to compute the orthogonal projection coordinates in the least squares method, which generally have no closed form, while the simulated paths also serve as input for training the neural network. This leads to a second level of approximation, a Monte Carlo approximation. In this paper, we prove that, under some assumptions, this second approximation converges to the first one for both studied methods. Moreover, in the least squares method, a rate of order \(\mathcal{O}(N^{-1/2})\) (\(N\) being the size of the Monte Carlo sample) of the latter convergence is proved.
Several results in this paper depend on the continuity of the swing value function with respect to the cumulative consumption, which is a crucial result that, to the best of our knowledge, has not yet been proved. We establish this continuity result using Berge's maximum theorem, which is commonly used to analyze the regularity of optimal control and optimal value functions in parametric optimization contexts. Additionally, proving the continuity of the value function with respect to the cumulative consumption also serves as another proof of the existence of an optimal control, which was previously demonstrated differently in [1]. ### Organization of the paper Section 1. provides general background on swing contracts. We thoroughly discuss their pricing and show one of the main results concerning the continuity of the swing value function. Section 2. We describe how to approximate the swing value function using either least squares regression or neural networks and fix notations and assumptions that will be used in the sequel. Section 3. We state the main convergence results of this paper as well as some other technical results concerning some concentration inequalities. ### Notations We endow the space \(\mathbb{R}^{d}\) with the Euclidean norm denoted by \(|\cdot|\) and the space of \(\mathbb{R}^{d}\)-valued and squared-integrable random variables \(\mathbb{L}_{\mathbb{R}^{d}}^{2}\big{(}\mathbb{P}\big{)}\) with the canonical norm \(||\cdot||_{2}\). \(\langle\cdot,\cdot\rangle\) will denote the Euclidean inner product of \(\mathbb{R}^{d}\). We denote by \(|\cdot|_{\sup}\) the sup-norm on functional spaces. \(\mathbb{M}_{d,q}\big{(}\mathbb{R}\big{)}\) will represent the space of matrices with \(d\) rows, \(q\) columns and real coefficients. When there is no ambiguity, we will consider \(|\cdot|\) as the Frobenius norm; the space \(\mathbb{M}_{d,q}\big{(}\mathbb{R}\big{)}\) will be equipped with that norm. For \(m\geq 2\), we denote by \(\mathbb{G}L_{m}\big{(}\mathbb{R}\big{)}\) the subset of \(\mathbb{M}_{m,m}\big{(}\mathbb{R}\big{)}\) made of non-singular matrices. For a metric space \((E,d)\) and a subset \(A\subset E\), we define the distance between \(x\in E\) and the set \(A\) by, \[d(x,A)=\inf_{y\in A}\,d(x,y).\] We denote by \(d_{H}(A,B)\) the Hausdorff metric between two closed, bounded and non-empty sets \(A\) and \(B\) (equipped with a metric \(d\)) which is defined by \[d_{H}(A,B)=\max\Big{(}\sup_{a\in A}\,d(a,B),\,\,\sup_{b\in B}\,d(b,A)\Big{)}.\] Let \(E\) be a real pre-Hilbert space equipped with an inner product \(\langle\cdot,\cdot\rangle\) and consider \(x_{1},\ldots,x_{n}\) some vectors of \(E\). The Gram matrix associated to \(x_{1},\ldots,x_{n}\) is the symmetric non-negative matrix whose entries are \(\big{(}\langle x_{i},x_{j}\rangle\big{)}_{1\leq i,j\leq n}\). The determinant of the latter matrix, the Gram determinant, will be denoted by \(G(x_{1},\ldots,x_{n}):=\det\big{(}\langle x_{i},x_{j}\rangle\big{)}_{1\leq i,j\leq n}\). ## 1 Swing contract In the first section, we establish the theoretical foundation for swing contracts and their pricing using the "Backward Dynamic Programming Principle". Additionally, we prove some theoretical properties concerning the set of optimal controls that is involved in the latter principle. ### Description A swing option allows its holder to buy amounts of energy \(q_{k}\) at times \(t_{k}\), \(k=0,...,n-1\) (called exercise dates) until the contract maturity \(t_{n}=T\).
At each exercise date \(t_{k}\), the purchase price (or strike price) is denoted \(K_{k}\) and can be constant (i.e. \(K_{k}=K,k=0,...,n-1\)) or indexed on a formula. In the indexed strike setting, the strike price is calculated as an average of observed commodity prices over a certain period. In this paper, we only consider the fixed strike price case. However, the indexed strike price case can be treated likewise. In addition, a swing option gives its holder flexibility regarding the amount of energy he is allowed to purchase, through some (firm) constraints: * **Local constraints**: at each exercise time \(t_{k}\), the holder of the swing contract has to buy at least \(q_{\min}\) and at most \(q_{\max}\), i.e., \[q_{\min}\leq q_{k}\leq q_{\max}\quad,\,0\leq k\leq n-1.\] (1.1) * **Global constraints**: at maturity, the cumulative purchased volume must not be lower than \(Q_{\min}\) and not greater than \(Q_{\max}\), i.e., \[Q_{n}=\sum_{k=0}^{n-1}q_{k}\in[Q_{\min},Q_{\max}]\quad,\;\text{with}\;\;Q_{0}=0\;\text{and}\;0\leq Q_{\min}\leq Q_{\max}<+\infty. \tag{1.2}\] At each exercise date \(t_{k}\), the achievable cumulative consumption lies within the following interval, \[\mathcal{T}_{k}:=\big{[}Q^{down}(t_{k}),Q^{up}(t_{k})\big{]}, \tag{1.3}\] where \[\left\{\begin{array}{l}Q^{down}(t_{0})=0,\\ Q^{down}(t_{k})=\max\big{(}0,Q_{\min}-(n-k)\cdot q_{\max}\big{)},\;\;\;k\in\{1,\ldots,n-1\},\\ Q^{down}(t_{n})=Q_{\min},\end{array}\right.\] \[\left\{\begin{array}{l}Q^{up}(t_{0})=0,\\ Q^{up}(t_{k})=\min\big{(}k\cdot q_{\max},Q_{\max}\big{)},\;\;\;k\in\{1,\ldots,n-1\},\\ Q^{up}(t_{n})=Q_{\max}.\end{array}\right.\] Note that in this paper we only consider **firm constraints**, which means that the holder of the contract cannot violate the constraints. However, there exist in the literature alternative settings where the holder can violate the global constraints (not the local ones) but has to pay, at maturity, a penalty which is proportional to the default (see [1, 1]). The pricing of a swing contract is closely related to the resolution of a backward equation given by the Backward Dynamic Programming Principle. ### Backward Dynamic Programming Principle (BDPP) Let \((\Omega,\mathcal{F},\{\mathcal{F}_{t}\},\mathbb{P})\) be a filtered probability space. We assume that there exists a \(d\)-dimensional (discrete) Markov process \(\big{(}X_{t_{k}}\big{)}_{0\leq k\leq n}\) and a measurable function \(g_{k}:\mathbb{R}^{d}\rightarrow\mathbb{R}\) such that the spot price \((S_{t_{k}})_{0\leq k\leq n}\) is given by \(S_{t_{k}}=g_{k}\big{(}X_{t_{k}}\big{)}\). Throughout this paper, the function \(g_{k}\) will be assumed to have at most linear growth. The decision process \((q_{k})_{0\leq k\leq n-1}\) is defined on the same probability space and is supposed to be \(\mathcal{F}_{t_{k}}^{X}\)-adapted, where \(\mathcal{F}_{t_{k}}^{X}\) is the natural (completed) filtration of \(\big{(}X_{t_{k}}\big{)}_{0\leq k\leq n}\). In the swing context, at each time \(t_{k}\), by purchasing a volume \(q_{k}\), the holder of the contract makes an algebraic profit \[\psi\big{(}q_{k},X_{t_{k}}\big{)}:=q_{k}\cdot\big{(}g_{k}\big{(}X_{t_{k}}\big{)}-K\big{)}.
\tag{1.4}\] Then for every non-negative \(\mathcal{F}_{t_{k-1}}^{X}\)-measurable random variable \(Q_{k}\) (representing the cumulative purchased volume up to \(t_{k-1}\)), the price of the swing option at time \(t_{k}\) is \[V_{k}\big{(}X_{t_{k}},Q_{k}\big{)}=\operatorname*{ess\,sup}_{(q_{\ell})_{k\leq\ell\leq n-1}\in\mathcal{A}_{k}^{Q_{k}}}\;\mathbb{E}\Big{(}\sum_{\ell=k}^{n-1}\psi\big{(}q_{\ell},X_{t_{\ell}}\big{)}\,\Big{|}\,X_{t_{k}}\Big{)}, \tag{1.5}\] where \(\mathcal{A}_{k}^{Q_{k}}\) denotes the set of admissible decision processes from time \(t_{k}\) onwards given the cumulative volume \(Q_{k}\), namely \[\mathcal{A}_{k}^{Q_{k}}:=\Big{\{}(q_{\ell})_{k\leq\ell\leq n-1}\,:\,q_{\ell}\;\text{is}\;\mathcal{F}_{t_{\ell}}^{X}\text{-measurable},\;q_{\ell}\in[q_{\min},q_{\max}],\;Q_{k}+\sum_{\ell=k}^{n-1}q_{\ell}\in[Q_{\min},Q_{\max}]\Big{\}}. \tag{1.6}\] The latter problem is a stochastic control problem. It can be shown (see [1]) that for all \(k=0,\ldots,n-1\) and for all \(Q_{k}\in\mathcal{T}_{k}\), the swing contract price is given by the following backward equation, also known as the dynamic programming equation: \[\left\{\begin{array}{l}V_{k}(x,Q_{k})=\sup_{q\in Adm(t_{k},Q_{k})}\psi(q,x)+\mathbb{E}\big{(}V_{k+1}(X_{t_{k+1}},Q_{k}+q)|X_{t_{k}}=x\big{)},\\ V_{n-1}(x,Q_{n-1})=\sup_{q\in Adm(t_{n-1},Q_{n-1})}\psi(q,x),\end{array}\right. \tag{1.7}\] where \(Adm(t_{k},Q_{k})\) is the set of admissible controls at time \(t_{k}\), with \(Q_{k}\) denoting the cumulative consumption up to time \(t_{k-1}\). Note that, if our objective is the value function, that is \(V_{k}(x,Q_{k})\) for any \(x\in\mathbb{R}^{d}\) defined in (1.7), then the set \(Adm(t_{k},Q_{k})\) reduces to the following interval, \[\mathcal{I}_{k+1}\big{(}Q_{k}\big{)}:=\Big{[}\max\big{(}q_{\min},Q^{down}(t_{k+1})-Q_{k}\big{)},\min\big{(}q_{\max},Q^{up}(t_{k+1})-Q_{k}\big{)}\Big{]}. \tag{1.8}\] But if our objective is the random variable \(V_{k}\big{(}X_{t_{k}},Q_{k}\big{)}\), then, for technical convenience, the preceding set \(Adm(t_{k},Q_{k})\) is the set of all \(\mathcal{F}_{t_{k}}^{X}\)-adapted processes lying within the interval \(\mathcal{I}_{k+1}\big{(}Q_{k}\big{)}\) defined in (1.8). A straightforward consequence of the latter is that the optimal control at a given date must not be anticipatory. It is worth noting the "bang-bang" feature of swing contracts proved in [1].
That is, if volume constraints \(q_{\min},q_{\max},Q_{\min},Q_{\max}\) are whole numbers (this corresponds to the actual setting of traded swing contracts) and \(Q_{\max}-Q_{\min}\) is a multiple of \(q_{\max}-q_{\min}\), then the supremum in the BDPP (1.7) is attained at one of the boundaries of the interval \(\mathcal{I}_{k+1}\big{(}Q_{k}\big{)}\) defined in (1.8). In this "discrete" setting, at each exercise date \(t_{k}\), the set of achievable cumulative consumptions \(\mathcal{T}_{k}\) defined in (1.3) reads, \[\mathcal{T}_{k}=\mathbb{N}\cap\big{[}Q^{down}(t_{k}),Q^{up}(t_{k})\big{]}, \tag{1.9}\] where \(Q^{down}(t_{k})\) and \(Q^{up}(t_{k})\) are defined in (1.3). In this discrete setting, the BDPP (1.7) remains the same. The main difference lies in the fact that, in the discrete setting, the supremum involved in the BDPP is in fact a maximum over the two possible values enabled by the bang-bang feature. From a practical standpoint, this feature allows one to drastically reduce the computation time. Note that this paper aims to study some regression-based methods designed to approximate the conditional expectation involved in the BDPP (1.7). We study two methods which are based on least squares regression and neural network approximation. In the least squares regression, we will go beyond the discrete setting and show that convergence results can be established in general. To achieve this, we need a crucial result which states that the swing value function defined in equation (1.7) is continuous with respect to cumulative consumption. The latter may be established by relying on Berge's maximum theorem (see Proposition A.7 in Appendix A.2). We may justify the use of this theorem through the following proposition, which characterizes the set of admissible volumes as a correspondence (we refer the reader to Appendix A.2 for details on correspondences) mapping an attainable cumulative consumption to a set of admissible controls. **Proposition 1.1**.: _Denote by \(\mathcal{P}\big{(}[q_{\min},q_{\max}]\big{)}\) the power set of \([q_{\min},q_{\max}]\). Then for all \(k=0,...,n-1\) the correspondence_ \[\Gamma_{k}\colon\Big{(}\mathcal{T}_{k},\;|\cdot|\Big{)} \to\Big{(}\mathcal{P}\big{(}[q_{\min},q_{\max}]\big{)},\;d_{H}\Big{)}\] \[Q \mapsto Adm(t_{k},Q)\] _is continuous and compact-valued._ Proof.: Let \(k=0,...,n-1\). We need to prove that the correspondence \(\Gamma_{k}\) is both lower and upper hemicontinuous. The needed material about correspondences is given in Appendix A.2. We rely on the sequential characterization of hemicontinuity in Appendix A.2. Let us start with the upper hemicontinuity. Since the set \([q_{\min},q_{\max}]\) is compact, the converse of Proposition A.6 in Appendix A.2 holds true. Let \(Q\in\mathcal{T}_{k}\) and consider a sequence \((Q_{n})_{n\in\mathbb{N}}\in\mathcal{T}_{k}^{\mathbb{N}}\) which converges to \(Q\). Let \((y_{n})_{n\in\mathbb{N}}\) be a real-valued sequence such that for all \(n\in\mathbb{N}\), \(y_{n}\) lies in the correspondence \(\Gamma_{k}(Q_{n})\). Then using the definition of the set of admissible controls we know that \(q_{\min}\leq y_{n}\leq q_{\max}\), so that \((y_{n})_{n}\) is a bounded real sequence. Thanks to the Bolzano-Weierstrass theorem, there exists a subsequence \((y_{\phi(n)})_{n\in\mathbb{N}}\) which is convergent.
Let \(y=\lim\limits_{n\to+\infty}y_{\phi(n)}\), then for all \(n\in\mathbb{N}\), \[y_{\phi(n)}\in Adm(t_{k},Q_{\phi(n)})\Longleftrightarrow\max\big{(}q_{\min},Q^{down}(t_{k+1})-Q_{\phi(n)}\big{)}\leq y_{\phi(n)}\leq\min\big{(}q_{\max},Q^{up}(t_{k+1})-Q_{\phi(n)}\big{)}.\] Letting \(n\to+\infty\) in the preceding inequalities yields \(y\in\Gamma_{k}(Q)\). This shows that \(\Gamma_{k}\) is upper hemicontinuous at an arbitrary \(Q\); thus the correspondence \(\Gamma_{k}\) is upper hemicontinuous. For the lower hemicontinuity part, let \(Q\in\mathcal{T}_{k}\), \((Q_{n})_{n\in\mathbb{N}}\in\mathcal{T}_{k}^{\mathbb{N}}\) be a sequence which converges to \(Q\) and \(y\in\Gamma_{k}(Q)\). Note that if \(y=\max(q_{\min},Q^{down}(t_{k+1})-Q)\) (or \(y=\min(q_{\max},Q^{up}(t_{k+1})-Q)\)) then it suffices to consider \(y_{n}=\max(q_{\min},Q^{down}(t_{k+1})-Q_{n})\) (or \(y_{n}=\min(q_{\max},Q^{up}(t_{k+1})-Q_{n})\)) so that \(y_{n}\in\Gamma_{k}(Q_{n})\) for all \(n\in\mathbb{N}\) and \(\lim\limits_{n\to+\infty}y_{n}=y\). It remains to treat the case \(y\in\mathring{\Gamma}_{k}(Q)\) (where \(\mathring{A}\) denotes the interior of the set \(A\)). Thanks to the Peak Point Lemma 1 one may extract a monotone subsequence \((Q_{\phi(n)})_{n}\). Two cases may be distinguished. Footnote 1: see Theorem 3.4.7 in [https://www.geneseo.edu/~aguilar/public/assets/courses/324/real-analysis-cesar-aguilar.pdf](https://www.geneseo.edu/~aguilar/public/assets/courses/324/real-analysis-cesar-aguilar.pdf) or in [https://proofwiki.org/wiki/Peak_Point_Lemma](https://proofwiki.org/wiki/Peak_Point_Lemma) * \((Q_{\phi(n)})_{n}\) is a non-decreasing sequence. In this case, for all \(n\in\mathbb{N}\), \(Q_{\phi(n)}\leq Q\). Since \(y\in\mathring{\Gamma}_{k}(Q)\) and \(Q\mapsto\min(q_{\max},Q^{up}(t_{k+1})-Q)\) is a non-increasing function, it follows that \(y<\min(q_{\max},Q^{up}(t_{k+1})-Q)\leq\min(q_{\max},Q^{up}(t_{k+1})-Q_{\phi(n)})\) for all \(n\in\mathbb{N}\). Moreover, since \(y>\lim\limits_{n\to+\infty}\max(q_{\min},Q^{down}(t_{k+1})-Q_{\phi(n)})\downarrow\max(q_{\min},Q^{down}(t_{k+1})-Q)\), one may deduce that there exists \(n_{0}\in\mathbb{N}\) such that for all \(n\geq n_{0}\), \(y\geq\max(q_{\min},Q^{down}(t_{k+1})-Q_{\phi(n)})\). Therefore it suffices to set \(y_{n}=y\) for all \(n\geq n_{0}\) so that \((y_{n})_{n\geq n_{0}}\) is a sequence such that \(\lim\limits_{n\to+\infty}y_{n}=y\) and \(y_{n}\in\Gamma_{k}(Q_{\phi(n)})\) for all \(n\geq n_{0}\). * \((Q_{\phi(n)})_{n}\) is a non-increasing sequence. Here for all \(n\in\mathbb{N}\), we have \(Q_{\phi(n)}\geq Q\) so that \(y\geq\max(q_{\min},Q^{down}(t_{k+1})-Q_{\phi(n)})\). Following the proof in the preceding case, one may deduce that there exists \(n_{0}\in\mathbb{N}\) such that for all \(n\geq n_{0}\), \(y\leq\min(q_{\max},Q^{up}(t_{k+1})-Q_{\phi(n)})\). Thus it suffices to set a sequence \((y_{n})_{n\geq n_{0}}\) identically equal to \(y\). This shows that the correspondence \(\Gamma_{k}\) is lower hemicontinuous at an arbitrary \(Q\). Thus \(\Gamma_{k}\) is both lower and upper hemicontinuous, hence continuous. Moreover, since for all \(Q\in\mathcal{T}_{k}\), \(\Gamma_{k}(Q)\) is a closed and bounded interval in \(\mathbb{R}\), it is compact. This completes the proof. In the following proposition, we show the main result of this section concerning the continuity of the value function defined in (1.7) with respect to the cumulative consumption.
Let us define the correspondence \(C_{k}^{*}\) by, \[C_{k}^{*}:Q\in\mathcal{T}_{k}\mapsto\operatorname*{arg\,max}_{q\in Adm(t_{k},Q)}\psi(q,x)+\mathbb{E}\big{(}V_{k+1}(X_{t_{k+1}},Q+q)|X_{t_{k}}=x\big{)}. \tag{1.10}\] Note that the correspondence \(C_{k}^{*}\) is the set of solutions of the BDPP (1.7). Then we have the following proposition. **Proposition 1.2**.: _If for all \(k=1,...,n-1\), \(X_{t_{k}}\in\mathbb{L}^{1}_{\mathbb{R}^{d}}(\mathbb{P})\), then for all \(k=0,...,n-1\) and all \(x\in\mathbb{R}^{d}\),_ * _The swing value function_ \(Q\in\mathcal{T}_{k}\mapsto V_{k}(x,Q)\) _is continuous._ * _The correspondence_ \(C^{*}_{k}\) _(defined in (_1.10_)) is non-empty, compact-valued and upper hemicontinuous._ Proof.: Let \(x\in\mathbb{R}^{d}\). For technical convenience, we introduce for all \(0\leq k\leq n-1\) an extended value function \(\mathcal{V}_{k}(x,\cdot)\) defined on the whole real line \[\mathcal{V}_{k}(x,Q):=\left\{\begin{array}{ll}V_{k}(x,Q)&\quad\text{if}\;\;Q\in\mathcal{T}_{k}=\big{[}Q^{down}(t_{k}),Q^{up}(t_{k})\big{]},\\ V_{k}(x,Q^{down}(t_{k}))&\quad\text{if}\;\;Q<Q^{down}(t_{k}),\\ V_{k}(x,Q^{up}(t_{k}))&\quad\text{if}\;\;Q>Q^{up}(t_{k}).\end{array}\right.\] Note that \(V_{k}(x,\cdot)\) is the restriction of \(\mathcal{V}_{k}(x,\cdot)\) to \(\mathcal{T}_{k}\). Propagating continuity over the dynamic programming equation is challenging due to the presence of the variable of interest \(Q\) in both the objective function and the domain over which the supremum is taken. To circumvent this issue, we rely on Berge's maximum theorem. More precisely, we use a backward induction on \(k\) along with Berge's maximum theorem to propagate continuity through the BDPP. For any \(Q\in\mathcal{T}_{n-1}\), we have \(\mathcal{V}_{n-1}(x,Q)=\sup_{q\in Adm(t_{n-1},Q)}\psi(q,x)\) and \(\psi(\cdot,x)\) is continuous since it is linear in \(q\) (see (1.4)). Thus applying Lemma A.1 yields the continuity of \(\mathcal{V}_{n-1}(x,\cdot)\) on \(\mathcal{T}_{n-1}\). Moreover, as \(\mathcal{V}_{n-1}(x,\cdot)\) is constant outside \(\mathcal{T}_{n-1}\), it is continuous on \((-\infty,Q^{down}(t_{n-1}))\) and \(\big{(}Q^{up}(t_{n-1}),+\infty)\). The continuity at \(Q^{down}(t_{n-1})\) and \(Q^{up}(t_{n-1})\) is straightforward given the construction of \(\mathcal{V}_{n-1}\). Thus \(\mathcal{V}_{n-1}(x,\cdot)\) is continuous on \(\mathbb{R}\). Besides, for all \(Q\in\mathbb{R}\) \[\big{|}\mathcal{V}_{n-1}(X_{t_{n-1}},Q)\big{|}\leq\sup_{Q\in\mathcal{T}_{n-1}}\big{|}V_{n-1}(X_{t_{n-1}},Q)\big{|}\leq q_{\max}\cdot\big{(}|S_{t_{n-1}}|+K\big{)}\in\mathbb{L}^{1}_{\mathbb{R}}(\mathbb{P}).\] We now make the following assumption as an induction assumption: \(\mathcal{V}_{k+1}(x,\cdot)\) is continuous on \(\mathbb{R}\) and there exists a real integrable random variable \(G_{k+1}\) (independent of \(Q\)) such that, almost surely, \(\big{|}\mathcal{V}_{k+1}(X_{t_{k+1}},Q)\big{|}\leq G_{k+1}\). This implies that \((q,Q)\in[q_{\min},q_{\max}]\times\mathbb{R}\mapsto\psi(q,x)+\mathbb{E}\big{(}\mathcal{V}_{k+1}(X_{t_{k+1}},Q+q)|X_{t_{k}}=x\big{)}\) is continuous owing to the theorem of continuity under the integral sign. Thus owing to Proposition A.7 one may apply Berge's maximum theorem and we get that \(\mathcal{V}_{k}(x,\cdot)\) is continuous on \(\mathbb{R}\). In particular \(V_{k}(x,\cdot)\) is continuous on \(\mathcal{T}_{k}\) and the correspondence \(C^{*}_{k}\) is non-empty, compact-valued and upper hemicontinuous. This completes the proof.
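Propositions 1.1 and 1.2 make the backward recursion (1.7) well posed state by state. Purely as an illustration before turning to approximations, the sketch below outlines one step of the recursion in the discrete bang-bang setting of (1.9), where the maximum is attained at an endpoint of \(\mathcal{I}_{k+1}(Q)\); the function names are ours, and `cont_value` is a placeholder for an estimator of the continuation value \(\mathbb{E}(V_{k+1}(X_{t_{k+1}},\cdot)|X_{t_{k}}=x)\), not an object defined in the paper.

```python
def attainable_bounds(k, n, q_max, Q_min, Q_max):
    """Endpoints Q^down(t_k) and Q^up(t_k) of the attainable set T_k, eq. (1.3)."""
    if k == 0:
        return 0.0, 0.0
    if k == n:
        return Q_min, Q_max
    return max(0.0, Q_min - (n - k) * q_max), min(k * q_max, Q_max)

def admissible_interval(k, Q, n, q_min, q_max, Q_min, Q_max):
    """Admissible purchase interval I_{k+1}(Q) of eq. (1.8)."""
    lo, up = attainable_bounds(k + 1, n, q_max, Q_min, Q_max)
    return max(q_min, lo - Q), min(q_max, up - Q)

def bdpp_step(k, x, Q, payoff, cont_value, n, q_min, q_max, Q_min, Q_max):
    """One step of the dynamic programming equation (1.7) in the bang-bang
    case: the supremum is a maximum over the endpoints of I_{k+1}(Q).
    At k = n-1 take cont_value = lambda k, x, Q: 0.0 (no future cash flows)."""
    a, b = admissible_interval(k, Q, n, q_min, q_max, Q_min, Q_max)
    return max(payoff(q, x) + cont_value(k, x, Q + q) for q in {a, b})
```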
As a result of the preceding proposition, one may substitute the sup in equation (1.7) with a max. It is worth noting that this provides another proof for the existence of an optimal consumption in addition to the one presented in [1]. Furthermore, our proof, compared to that in [1], does not assume integer volumes. Having addressed the general problem in equation (1.7), we can now focus on solving it, which requires computing the continuation value. ## 2 Approximation of continuation value This section is focused on resolving the dynamic programming equation (1.7). The primary challenge in solving this backward equation is to compute the continuation value, which involves a conditional expectation. A straightforward approach may be to compute this conditional expectation using nested simulations, but this can be time-consuming. Instead, the continuation value may be approximated using either least squares regression (as in [1]) or neural networks. Notice that it follows from the Markov assumption and the definition of conditional expectation that there exists a measurable function \(\Phi_{k+1}^{Q}\) such that \[\mathbb{E}\big{(}V_{k+1}(X_{t_{k+1}},Q)|X_{t_{k}}\big{)}=\Phi_{k+1}^{Q}(X_{t_{k}}), \tag{2.1}\] where \(\Phi_{k+1}^{Q}\) solves the following minimization problem, \[\inf_{\Phi\in\mathcal{L}^{2}}\Big{|}\Big{|}\mathbb{E}\big{(}V_{k+1}(X_{t_{k+1}},Q)|X_{t_{k}}\big{)}-\Phi\big{(}X_{t_{k}}\big{)}\Big{|}\Big{|}_{2}, \tag{2.2}\] where \(\mathcal{L}^{2}\) denotes the set of all measurable functions that are squared-integrable. Due to the vastness of \(\mathcal{L}^{2}\), the optimization problem (2.2) is quite challenging, if not impossible, to solve in practice. It is therefore common to introduce a parameterized form \(\Phi_{k+1}(\cdot;\theta)\) as a solution to problem (2.2). That is, we need to find the appropriate value of \(\theta\) in a certain parameter space \(\Theta\) such that it solves the following optimization problem: \[\inf_{\theta\in\Theta}\Big{|}\Big{|}\mathbb{E}\big{(}V_{k+1}(X_{t_{k+1}},Q)|X_{t_{k}}\big{)}-\Phi_{k+1}\big{(}X_{t_{k}};\theta\big{)}\Big{|}\Big{|}_{2}. \tag{2.3}\] Solving the latter problem seems to require computing the continuation value, which is precisely the target quantity. But since the conditional expectation is an orthogonal projection, it follows from Pythagoras' theorem, \[\Big{|}\Big{|}V_{k+1}(X_{t_{k+1}},Q)-\Phi_{k+1}(X_{t_{k}};\theta)\Big{|}\Big{|}_{2}^{2}\\ =\Big{|}\Big{|}V_{k+1}(X_{t_{k+1}},Q)-\mathbb{E}\big{(}V_{k+1}(X_{t_{k+1}},Q)|X_{t_{k}}\big{)}\Big{|}\Big{|}_{2}^{2}+\Big{|}\Big{|}\mathbb{E}\big{(}V_{k+1}(X_{t_{k+1}},Q)|X_{t_{k}}\big{)}-\Phi_{k+1}\big{(}X_{t_{k}};\theta\big{)}\Big{|}\Big{|}_{2}^{2}. \tag{2.4}\] Thus any \(\theta\) that solves the preceding problem (2.3) also solves the following optimization problem \[\inf_{\theta\in\Theta}\Big{|}\Big{|}V_{k+1}\big{(}X_{t_{k+1}},Q\big{)}-\Phi_{k+1}\big{(}X_{t_{k}};\theta\big{)}\Big{|}\Big{|}_{2}. \tag{2.5}\] Thus, in this paper and when needed, we will consider the two optimization problems interchangeably. In the next section we discuss the way the function \(\Phi_{k+1}(\cdot;\theta)\) is parametrized depending on whether we use least squares regression or neural networks. Moreover, instead of the superscript as in (2.1), we adopt the following notation: \(\Phi_{k+1}^{Q}(\cdot):=\Phi(\cdot;\theta_{k+1}(Q))\) where \(\theta_{k+1}(Q)\in\Theta\) solves the optimization problem (2.3) or equivalently (2.5).
We also drop the subscript, as the function \(\Phi\) will be the same for each exercise date; only the parameters \(\theta_{k+1}(Q)\) may differ. ### Least squares approximation In the least squares regression approach, the continuation value is approximated as an orthogonal projection over a subspace spanned by a finite number of squared-integrable functions (see [1]). More precisely, given \(m\in\mathbb{N}^{*}\) functions \(e^{m}(\cdot)=\big{(}e_{1}(\cdot),...,e_{m}(\cdot)\big{)}\), we replace the continuation value involved in (1.7) by an orthogonal projection over the subspace spanned by \(e^{m}\big{(}X_{t_{k}}\big{)}\). This leads to the approximation \(V_{k}^{m}\) of the actual value function \(V_{k}\) which is defined backwardly as follows, \[\left\{\begin{array}{l}V_{k}^{m}\big{(}X_{t_{k}},Q\big{)}=\underset{q\in Adm(t_{k},Q)}{\operatorname{ess\,sup}}\ \psi\big{(}q,X_{t_{k}}\big{)}+\Phi_{m}\big{(}X_{t_{k}};\theta_{k+1,m}(Q+q)\big{)},\\ V_{n-1}^{m}\big{(}X_{t_{n-1}},Q\big{)}=V_{n-1}\big{(}X_{t_{n-1}},Q\big{)}=\underset{q\in Adm(t_{n-1},Q)}{\operatorname{ess\,sup}}\ \psi\big{(}q,X_{t_{n-1}}\big{)},\end{array}\right. \tag{2.6}\] where \(\Phi_{m}\) is defined as follows, \[\Phi_{m}\big{(}X_{t_{k}};\theta_{k+1,m}(Q)\big{)}=\langle\theta_{k+1,m}(Q),e^{m}(X_{t_{k}})\rangle \tag{2.7}\] with \(\theta_{k+1,m}(Q)\in\Theta_{m}=\mathbb{R}^{m}\) being a vector whose components are the coordinates of the orthogonal projection and which lies within the following set \[\mathcal{S}_{k}^{m}(Q):=\operatorname*{arg\,inf}_{\theta\in\Theta_{m}}\ \Big{|}\Big{|}V_{k+1}^{m}\big{(}X_{t_{k+1}},Q\big{)}-\langle\theta,e^{m}(X_{t_{k}})\rangle\Big{|}\Big{|}_{2}. \tag{2.8}\] Solving the optimization problem (2.8) leads to a classic linear regression. In this paper, we will assume that \(e^{m}(\cdot)\) forms a linearly independent family, so that the set \(\mathcal{S}_{k}^{m}(Q)\) reduces to a singleton and the parameter \(\theta_{k+1,m}(Q)\) is uniquely defined as: \[\theta_{k+1,m}(Q):=\big{(}A_{m}^{k}\big{)}^{-1}\cdot\mathbb{E}\big{(}V_{k+1}^{m}(X_{t_{k+1}},Q)e^{m}(X_{t_{k}})\big{)}. \tag{2.9}\] Note that without the latter assumption, \(\mathcal{S}_{k}^{m}(Q)\) may not be a singleton. However, in this case, instead of the inverse matrix \(\big{(}A_{m}^{k}\big{)}^{-1}\), one may consider the Moore-Penrose inverse or pseudo-inverse matrix \(\big{(}A_{m}^{k}\big{)}^{\dagger}\), yielding the minimal-norm solution. In equation (2.9) we used the following notation \[\mathbb{E}\big{(}V_{k+1}^{m}(X_{t_{k+1}},Q)e^{m}(X_{t_{k}})\big{)}:=\begin{bmatrix}\mathbb{E}\big{(}V_{k+1}^{m}(X_{t_{k+1}},Q)e_{1}(X_{t_{k}})\big{)}\\ \mathbb{E}\big{(}V_{k+1}^{m}(X_{t_{k+1}},Q)e_{2}(X_{t_{k}})\big{)}\\ \vdots\\ \mathbb{E}\big{(}V_{k+1}^{m}(X_{t_{k+1}},Q)e_{m}(X_{t_{k}})\big{)}\end{bmatrix}\in\mathbb{R}^{m},\] where \(A_{m}^{k}:=\big{(}(A_{m}^{k})_{i,j}\big{)}_{1\leq i,j\leq m}\) is a (Gram) matrix with entries \[\langle e_{i}(X_{t_{k}}),e_{j}(X_{t_{k}})\rangle_{\mathbb{L}^{2}(\mathbb{P})}=\mathbb{E}\big{(}e_{i}(X_{t_{k}})e_{j}(X_{t_{k}})\big{)}\ \ \ \ 1\leq i,j\leq m. \tag{2.10}\] In practice, to compute the vector \(\theta_{k+1,m}(Q)\) we need to simulate \(N\) independent paths \(\big{(}X_{t_{0}}^{[p]},...,X_{t_{n-1}}^{[p]}\big{)}_{1\leq p\leq N}\) and use Monte Carlo to evaluate the expectations involved (see equations (2.9) and (2.10)). This leads to a second approximation, which is a Monte Carlo approximation.
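At each date \(t_{k}\) and cumulative consumption \(Q\), this Monte Carlo approximation (made precise in the next paragraph) amounts to an ordinary linear regression on the simulated paths. A minimal NumPy sketch of this empirical projection step follows, assuming a scalar state, a monomial basis, and that the next-date values \(V_{k+1}^{m,N}(X_{t_{k+1}}^{[p]},Q)\) have already been computed; the basis choice and function names are illustrative, not the paper's.

```python
import numpy as np

def regression_coefficients(x_k, v_next, m):
    """Empirical projection coefficients theta_{k,m,N}(Q), cf. (2.12)-(2.13).
    x_k    : (N,) simulated values of a scalar Markov state X_{t_k}
    v_next : (N,) values V^{m,N}_{k+1}(X^{[p]}_{t_{k+1}}, Q) on the same paths
    m      : number of regression functions (monomials 1, x, ..., x^{m-1})"""
    E = np.vander(x_k, m, increasing=True)   # rows e^m(X_{t_k}^{[p]})
    gram = E.T @ E / len(x_k)                # empirical Gram matrix A^k_{m,N}
    rhs = E.T @ v_next / len(x_k)            # empirical cross-moments
    return np.linalg.solve(gram, rhs)        # theta_{k,m,N}(Q)

def continuation(theta, x):
    """Estimated continuation value <theta, e^m(x)> at new states x."""
    return np.vander(np.atleast_1d(x), len(theta), increasing=True) @ theta
```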
For this second approximation, we define the value function \(V_{k}^{m,N}\) from equation (2.6) where we replace the expectations by their empirical counterparts \[\left\{\begin{array}{l}V_{k}^{m,N}\big{(}X_{t_{k}},Q\big{)}=\operatorname*{ess\,sup}_{q\in Adm(t_{k},Q)}\ \psi\big{(}q,X_{t_{k}}\big{)}+\Phi_{m}\big{(}X_{t_{k}};\theta_{k+1,m,N}(Q+q)\big{)},\\ V_{n-1}^{m,N}\big{(}X_{t_{n-1}},Q\big{)}=V_{n-1}\big{(}X_{t_{n-1}},Q\big{)}\end{array}\right. \tag{2.11}\] with \[\theta_{k,m,N}(Q)=\big{(}A_{m,N}^{k}\big{)}^{-1}\frac{1}{N}\sum_{p=1}^{N}V_{k+1}^{m,N}(X_{t_{k+1}}^{[p]},Q)e^{m}(X_{t_{k}}^{[p]}), \tag{2.12}\] using the notation \[\frac{1}{N}\sum_{p=1}^{N}V_{k+1}^{m,N}(X_{t_{k+1}}^{[p]},Q)e^{m}(X_{t_{k}}^{[p]}):=\begin{bmatrix}\frac{1}{N}\sum_{p=1}^{N}V_{k+1}^{m,N}(X_{t_{k+1}}^{[p]},Q)e_{1}(X_{t_{k}}^{[p]})\\ \frac{1}{N}\sum_{p=1}^{N}V_{k+1}^{m,N}(X_{t_{k+1}}^{[p]},Q)e_{2}(X_{t_{k}}^{[p]})\\ \vdots\\ \frac{1}{N}\sum_{p=1}^{N}V_{k+1}^{m,N}(X_{t_{k+1}}^{[p]},Q)e_{m}(X_{t_{k}}^{[p]})\end{bmatrix}\in\mathbb{R}^{m}\] and \(A_{m,N}^{k}:=\big{(}(A_{m,N}^{k})_{i,j}\big{)}_{1\leq i,j\leq m}\) is a \(m\times m\) (Gram) matrix whose components are \[\frac{1}{N}\sum_{p=1}^{N}\;e_{i}\big{(}X_{t_{k}}^{[p]}\big{)}e_{j}\big{(}X_{t_{k}}^{[p]}\big{)}\;\;\;\;\;1\leq i,j\leq m. \tag{2.13}\] This paper investigates a modified version of the least squares method proposed in [2]. In their approach, the value function at each time step is the result of two steps. First, they compute the optimal control, i.e., an admissible control that maximizes the value function (2.11), using Monte Carlo simulations. Then, given the optimal control, they compute the value function by summing up all cash flows from the considered exercise date until the maturity. Recall that we proceed backwardly so that, in practice, it is assumed that at a given exercise date \(t_{k}\), we have already determined the optimal controls from \(t_{k+1}\) to \(t_{n-1}\), so that optimal cash flows at these dates may be computed. However, our method directly replaces the continuation value with a linear combination of functions, and the value function is the maximum, over admissible volumes, of the current cash flow plus the latter combination of functions. The main difference between both approaches lies in the following. The value function computed in [2] corresponds to actual realized cash flows whereas the value function in our case does not. However, as recommended in their original paper [15], after having estimated the optimal control backwardly, a forward valuation has to be performed in order to eliminate biases. By doing so, our method and that proposed in [2] both correspond to actual realized cash flows; thus both approximations coincide. Our convergence analysis of the least squares approximation will require some technical assumptions we state below. #### Main assumptions \(\boldsymbol{\mathcal{H}_{1}^{LS}}\): For all \(k=0,\ldots,n-1\), the sequence \(\left(e_{i}\left(X_{t_{k}}\right)\right)_{i\geq 1}\) is total in \(\mathbb{L}^{2}\big{(}\sigma(X_{t_{k}})\big{)}\). \(\boldsymbol{\mathcal{H}_{2}^{LS}}\): For all \(k=0,\ldots,n-1\), almost surely, \(e_{1}(X_{t_{k}}),\ldots,e_{m}(X_{t_{k}})\) are linearly independent. This assumption ensures the Gram matrix \(A_{m}^{k}\) is non-singular. Moreover, this assumption allows one to guarantee that the matrix \(A_{m,N}^{k}\) is non-singular for \(N\) large enough.
Indeed, by the strong law of large numbers, almost surely \(A_{m,N}^{k}\to A_{m}^{k}\in\mathbb{G}L_{m}(\mathbb{R})\) (as \(N\to+\infty\)), with the latter set being an open set. \(\boldsymbol{\mathcal{H}_{3,r}}\): For all \(k=0,\ldots,n-1\), the random vector \(X_{t_{k}}\) has finite moments at order \(r\). \(\boldsymbol{\mathcal{H}_{3,\infty}}\) will then denote the existence of moments at any order. \(\boldsymbol{\mathcal{H}_{4,r}^{LS}}\): For all \(k=0,\ldots,n-1\) and for all \(j=1,\ldots,m\), the random variable \(e_{j}(X_{t_{k}})\) has finite moments at order \(r\). Likewise, \(\boldsymbol{\mathcal{H}_{4,\infty}^{LS}}\) will then denote the existence of moments at any order. If assumption \(\boldsymbol{\mathcal{H}_{3,\infty}}\) holds, one may replace assumption \(\boldsymbol{\mathcal{H}_{4,r}^{LS}}\) by an assumption of linear or polynomial growth of the functions \(e_{j}(\cdot)\) with respect to the Euclidean norm. Before proceeding, let us note a remark that will be relevant in the subsequent discussion: the continuity property of the true value function \(V_{k}\) with respect to cumulative consumption, as stated in Proposition 1.2, also applies to the approximated value function \(V_{k}^{m}\) involved in the least squares regression. **Remark 2.1**.: _If we assume that \(\boldsymbol{\mathcal{H}_{3,2r}}\) and \(\boldsymbol{\mathcal{H}_{4,2r}^{LS}}\) hold true for some \(r\geq 1\), then one may show, by a straightforward backward induction, that the functions_ \[Q\in\mathcal{T}_{k+1}\mapsto\mathbb{E}\Big{(}\big{|}V_{k+1}^{m}(X_{t_{k+1}},Q)e^{m}(X_{t_{k}})\big{|}^{r}\Big{)}\;\;\;\text{or}\;\;\;V_{k+1}^{m}(X_{t_{k+1}},Q)\] _are continuous. If only assumption \(\boldsymbol{\mathcal{H}_{3,r}}\) holds true, then \(V_{k+1}(X_{t_{k+1}},\cdot)\) is continuous and there exists a random variable \(G_{k+1}\in\mathbb{L}_{\mathbb{R}}^{r}\big{(}\mathbb{P}\big{)}\) (independent of \(Q\)) such that \(V_{k+1}(X_{t_{k+1}},\cdot)\leq G_{k+1}\)._ Instead of using classic functions as regression functions and projecting the swing value function onto the subspace spanned by these regression functions, an alternative approach consists in using neural networks. Motivated by the function approximation capacity of deep neural networks, as quantified by the _Universal Approximation Theorem_ (UAT), our goal is to explore whether a neural network can replace conventional regression functions. In the following section, we introduce a methodology based on neural networks that aims to approximate the continuation value. ### Neural network approximation The goal of a neural network is to approximate a complex function \(\Phi:\mathbb{R}^{d}\rightarrow\mathbb{R}^{\ell}\) by a parametric function \(\Phi(\cdot;\theta)\), where the parameters \(\theta\) (or weights of the neural network) have to be optimized in a way that the "distance" between the two functions \(\Phi\) and \(\Phi(\cdot;\theta)\) is as small as possible. A neural network can approximate a wide class of complex functions (see [15, 16, 17]). A neural network is made of nodes connected to one another, where a column of nodes forms a layer (when there is more than one layer in the neural network architecture, we speak of a deep neural network). The outermost layers (see Figure 1) are the input and output layers, and all those in between are called the hidden layers.
The connection between the input and output layers through hidden layers is made by means of linear functions and activation functions (non-linear functions). From a mathematical point of view, a neural network can be written as \[x\in\mathbb{R}^{d}\mapsto\Phi(x;\theta):=\psi\circ a_{I}^{\theta_{I}}\circ\phi_{q_{I-1}}\circ a_{I-1}^{\theta_{I-1}}\circ\ldots\circ\phi_{q_{1}}\circ a_{1}^{\theta_{1}}(x)\in\mathbb{R}^{\ell}, \tag{2.14}\] where \(\rhd\)\(I\) is the number of hidden layers representing the depth of the neural network. \(\rhd\) Each layer has weights \(\mathcal{W}\) and bias \(b\). For all \(2\leq i\leq I\), \[x\in\mathbb{R}^{q_{i-1}}\mapsto a_{i}^{\theta_{i}}(x)=\mathcal{W}_{i}\cdot x+b_{i}\in\mathbb{R}^{q_{i}}\quad\text{ with }\quad\theta_{i}=(\mathcal{W}_{i},b_{i})\in\mathbb{R}^{q_{i-1}\times q_{i}}\times\mathbb{R}^{q_{i}},\] and \[x\in\mathbb{R}^{d}\mapsto a_{1}^{\theta_{1}}(x)=\mathcal{W}_{1}\cdot x+b_{1}\in\mathbb{R}^{q_{1}}\quad\text{ with }\quad\theta_{1}=(\mathcal{W}_{1},b_{1})\in\mathbb{R}^{d\times q_{1}}\times\mathbb{R}^{q_{1}}.\] Figure 1: Illustration of a (deep) neural network architecture with \(d=3,\ell=1\) \(\rhd\)\(q_{1},\ldots,q_{I}\) are positive integers denoting the number of nodes per hidden layer and representing the width of the neural network. \(\rhd\)\((\phi_{q_{i}})_{1\leq i\leq I-1}\) are non-linear functions called activation functions and are applied component-wise. \(\rhd\)\(\psi\) is the activation function for the output layer. For the sake of simpler notation, we embed all the parameters of the different layers in a unique high-dimensional parameter \(\theta=\big{(}\theta_{1},\ldots,\theta_{I}\big{)}\in\mathbb{R}^{N_{q}}\) with \(N_{q}=\sum_{i=1}^{I}q_{i}\cdot(1+q_{i-1})\) (with \(q_{0}=d\)). In order to study the neural network approximation, we adopt the same notation as in [11]. We denote by \(\mathcal{N}\mathcal{N}_{\infty}\) the set of all neural networks of the form (2.14). Then we consider, for some integer \(m\geq 1\), \(\mathcal{N}\mathcal{N}_{m}\) the set of neural networks of the form (2.14) with at most \(m\) nodes per hidden layer and bounded parameters. More precisely, we consider \[\Theta_{m}=\Big{\{}\theta\in\big{(}\mathbb{R}^{d\times m}\times\mathbb{R}^{m}\big{)}\times\big{(}\mathbb{R}^{m\times m}\times\mathbb{R}^{m}\big{)}^{I-2}\times\big{(}\mathbb{R}^{m\times 1}\times\mathbb{R}\big{)}~{}:~{}~{}|\theta|\leq\gamma_{m}\Big{\}} \tag{2.15}\] which denotes the set of all parameters (bounded by \(\gamma_{m}\)) of a neural network with at most \(m\) nodes per hidden layer, where \((\gamma_{m})_{m\geq 2}\) is an increasing and unbounded (real) sequence. Thus \(\mathcal{N}\mathcal{N}_{m}\) is defined as the set of all neural networks whose parameters lie in \(\Theta_{m}\), \[\mathcal{N}\mathcal{N}_{m}=\big{\{}\Phi(\cdot;\theta):\mathbb{R}^{d}\to\mathbb{R};\theta\in\Theta_{m}\big{\}}. \tag{2.16}\] Note that \(\mathcal{N}\mathcal{N}_{\infty}=\bigcup_{m\in\mathbb{N}}\mathcal{N}\mathcal{N}_{m}\). In this paper, we consider the approximation of the continuation value using a neural network. This leads to an approximated value function \(V_{k}^{m}\) backwardly defined by \[\left\{\begin{array}{l}V_{k}^{m}\big{(}X_{t_{k}},Q\big{)}=\operatorname*{ess\,sup}_{q\in Adm(t_{k},Q)}\psi\big{(}q,X_{t_{k}}\big{)}+\Phi_{m}\big{(}X_{t_{k}};\theta_{k+1,m}(Q+q)\big{)},\\ V_{n-1}^{m}\big{(}X_{t_{n-1}},Q\big{)}=V_{n-1}\big{(}X_{t_{n-1}},Q\big{)},\end{array}\right. \tag{2.17}\] where \(\Phi_{m}(\cdot;\theta)\) denotes a function lying within \(\mathcal{N}\mathcal{N}_{m}\) with \(\theta\in\Theta_{m}\).
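For concreteness, a network of the form (2.14) with \(\ell=1\) can be sketched as below, anticipating Theorem 2.2 which requires bounded, non-constant activations (here tanh); the depth and widths are illustrative choices, and the parameter bound \(|\theta|\leq\gamma_{m}\) defining \(\Theta_{m}\) just above is not enforced in this sketch.

```python
import torch.nn as nn

def make_phi(d, widths, ell=1):
    """A network x -> psi(a_I(phi_{q_{I-1}}(... a_1(x)))) as in (2.14).
    Hidden activations are bounded (tanh); the output activation psi is
    taken to be the identity, an illustrative choice."""
    layers, q_prev = [], d
    for q in widths:                          # hidden layer a_i followed by phi_{q_i}
        layers += [nn.Linear(q_prev, q), nn.Tanh()]
        q_prev = q
    layers.append(nn.Linear(q_prev, ell))     # output layer a_I (psi = identity)
    return nn.Sequential(*layers)

phi = make_phi(d=1, widths=[16, 16])          # at most m = 16 units per hidden layer
```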
Thus \(\theta_{k+1,m}(Q)\) belongs to the following set \[\mathcal{S}_{k}^{m}(Q):=\operatorname*{arg\,inf}_{\theta\in\Theta_{m}}\;\Big{|}\Big{|}V_{k+1}^{m}(X_{t_{k+1}},Q)-\Phi_{m}\big{(}X_{t_{k}};\theta\big{)}\Big{|}\Big{|}_{2}. \tag{2.18}\] To analyze the convergence of the neural network approximation we will rely on the powerful approximation ability of neural networks, as stated by the _Universal Approximation Theorem_. **Theorem 2.2** (Universal Approximation Theorem).: _Assume that the activation functions in (2.14) are not constant and bounded. Let \(\mu\) denote a probability measure on \(\mathbb{R}^{d}\); then for any \(I\geq 2\), \(\mathcal{N}\mathcal{N}_{\infty}\) is dense in \(\mathbb{L}^{2}(\mathbb{R}^{d},\mu)\)._ **Remark 2.3**.: _As stated in [11], Theorem 2.2 can be seen as follows. For any (real) squared-integrable random variable \(Y\) defined on a measurable space and any \(\mathbb{R}^{d}\)-valued random vector \(X\), there exists a sequence \((\theta_{m})_{m\geq 2}\in\prod_{m=2}^{\infty}\Theta_{m}\) such that \(\lim_{m\to+\infty}\big{|}\big{|}\mathbb{E}(Y|X)-\Phi_{m}(X;\theta_{m})\big{|}\big{|}_{2}=0\). Thus, if for all \(m\geq 2\), \(\theta_{m}\) solves_ \[\inf_{\theta\in\Theta_{m}}\;\big{|}\big{|}\Phi_{m}(X;\theta)-Y\big{|}\big{|}_{2},\] _then the sequence \(\big{(}\Phi_{m}(X;\theta_{m})\big{)}_{m\geq 2}\) converges to \(\mathbb{E}(Y|X)\) in \(\mathbb{L}^{2}(\mu)\)._ The universal approximation capacity of neural networks has been widely studied in the literature [14, 15, 16]. Some quantitative error bounds have been proved when the function to approximate is sufficiently smooth. A brief overview is presented in the following remark. **Remark 2.4** (UAT error bounds).: _When the weighted average of the Fourier representation of the function to approximate is bounded, an error bound of order \(\mathcal{O}(m^{-1/2})\) for the convergence in Remark 2.3 has been shown in [1, 1]. It may appear that the dimension of the problem does not degrade the convergence rate, but as discussed by the authors, this may be hidden in the Fourier representation. In [1] it has been proved that, when the activation functions are infinitely continuously differentiable and the function to approximate is \(p\)-times continuously differentiable and Lipschitz, then the sup-norm of the approximation error on every compact set is bounded by a term of order \(\mathcal{O}\big{(}m^{-(p+1)/d}\big{)}\). For a more detailed overview of quantitative error bounds, we refer the reader to [1]._ Note that, as in the least squares method, in practice, we simulate \(N\) independent paths \(\big{(}X_{t_{0}}^{[p]},...,X_{t_{n-1}}^{[p]}\big{)}_{1\leq p\leq N}\) and use a Monte Carlo approximation to compute the swing value function. For that purpose, we backwardly define the value function \(V_{k}^{m,N}\) by, \[\left\{\begin{array}{l}V_{k}^{m,N}\big{(}X_{t_{k}}^{[p]},Q\big{)}=\operatorname{ess\,sup}_{q\in Adm(t_{k},Q)}\ \ \psi\big{(}q,X_{t_{k}}^{[p]}\big{)}+\Phi_{m}\big{(}X_{t_{k}}^{[p]};\theta_{k+1,m,N}(Q+q)\big{)},\\ V_{n-1}^{m,N}\big{(}X_{t_{n-1}}^{[p]},Q\big{)}=V_{n-1}\big{(}X_{t_{n-1}}^{[p]},Q\big{)},\end{array}\right. \tag{2.19}\] where \(\theta_{k+1,m,N}(Q)\) lies within the following set, \[\mathcal{S}_{k}^{m,N}(Q):=\operatorname*{arg\,inf}_{\theta\in\Theta_{m}}\ \frac{1}{N}\sum_{p=1}^{N}\Big{|}V_{k+1}^{m,N}(X_{t_{k+1}}^{[p]},Q)-\Phi_{m}\big{(}X_{t_{k}}^{[p]};\theta\big{)}\Big{|}^{2}.
\tag{2.20}\] Note that the sets \(\mathcal{S}_{k}^{m}(Q)\) or \(\mathcal{S}_{k}^{m,N}(Q)\) (respectively defined in equations (2.18) and (2.20)) generally do not reduce to a singleton. Thus hereafter, the notation \(\theta_{k+1,m}(Q)\) or \(\theta_{k+1,m,N}(Q)\) will denote an element of the corresponding set \(\mathcal{S}_{k}^{m}(Q)\) or \(\mathcal{S}_{k}^{m,N}(Q)\). ## 3 Convergence analysis We conduct a convergence analysis by following an approach similar to that of [1, 1, 1]. Our initial focus is to establish a convergence result as the "architecture" used to approximate the continuation value increases. By architecture, we mean either the regression functions (in the context of least squares approximation) or the number of neural network units per hidden layer. Then, we fix the value of \(m\) (representing the architecture's size) and examine the associated Monte Carlo approximation. Let us start with the first step. ### Convergence with respect to the number of approximation functions We focus on the approximations (2.6) and (2.17) of the BDPP (1.7). In this section, we do not restrict ourselves to the bang-bang setting. That is, for both approximation methods, we consider arbitrary volume constraints (not limited to integers). #### 3.1.1 Least squares approximation We start by analyzing the first approximation in the least squares setting (2.6). We show the convergence of the approximated value function \(V_{k}^{m}\) as \(m\) tends to infinity. To state this property we need the following result. **Proposition 3.1**.: _Let \(m\) be a positive integer. Assume \(\boldsymbol{\mathcal{H}_{2}^{LS}}\) and \(\boldsymbol{\mathcal{H}_{3,2}}\) hold true. Then, for all \(k=0,\ldots,n-2\), the function_ \[Q\mapsto\big{|}\big{|}\Phi_{m}(X_{t_{k}};\tilde{\theta}_{k+1,m}(Q))-\mathbb{E}(V_{k+1}(X_{t_{k+1}},Q)|X_{t_{k}})\big{|}\big{|}_{2}\] _is continuous on \(\mathcal{T}_{k+1}\), where \(\Phi_{m}\) is defined in (2.7) and \(\tilde{\theta}_{k+1,m}(Q)\) solves the "theoretical" optimization problem_ \[\inf_{\theta\in\Theta_{m}}\Big{|}\Big{|}V_{k+1}\big{(}X_{t_{k+1}},Q\big{)}-\Phi_{m}(X_{t_{k}};\theta)\Big{|}\Big{|}_{2}. \tag{3.1}\] Proof.: Keeping in mind relation (2.4), it suffices to prove that the functions \[Q\mapsto\big{|}\big{|}V_{k+1}(X_{t_{k+1}},Q)-\mathbb{E}(V_{k+1}(X_{t_{k+1}},Q)|X_{t_{k}})\big{|}\big{|}_{2}^{2} \tag{3.2}\] and \[Q\mapsto\big{|}\big{|}V_{k+1}(X_{t_{k+1}},Q)-\Phi_{m}(X_{t_{k}};\tilde{\theta}_{k+1,m}(Q))\big{|}\big{|}_{2}^{2} \tag{3.3}\] are continuous. Let us start with the first function. Let \(Q\in\mathcal{T}_{k+1}\) and consider a sequence \(\big{(}Q_{n}\big{)}_{n}\) which converges to \(Q\). We know (as pointed out in Remark 2.1) that assumption \(\boldsymbol{\mathcal{H}_{3,2}}\) entails that \(V_{k+1}(X_{t_{k+1}},\cdot)\) is continuous and there exists \(G_{k+1}\in\mathbb{L}_{\mathbb{R}}^{2}\big{(}\mathbb{P}\big{)}\) (independent of \(Q\)) such that \(V_{k+1}(X_{t_{k+1}},\cdot)\leq G_{k+1}\). Thus the Lebesgue dominated convergence theorem implies that, \[\lim_{n\to+\infty}\ \big{|}\big{|}V_{k+1}(X_{t_{k+1}},Q_{n})-\mathbb{E}(V_{k+1}(X_{t_{k+1}},Q_{n})|X_{t_{k}})\big{|}\big{|}_{2}^{2}=\big{|}\big{|}V_{k+1}(X_{t_{k+1}},Q)-\mathbb{E}(V_{k+1}(X_{t_{k+1}},Q)|X_{t_{k}})\big{|}\big{|}_{2}^{2}\] yielding the continuity of the function defined in (3.2). We now prove the continuity of the second function defined in (3.3).
Using assumption \(\boldsymbol{\mathcal{H}_{2}^{LS}}\), it follows from Proposition A.3 that, \[\Big{|}\Big{|}\Phi_{m}(X_{t_{k}};\tilde{\theta}_{k+1,m}(Q))-V_{k+1}(X_{t_{k+1}},Q)\Big{|}\Big{|}_{2}^{2}=\frac{G\big{(}V_{k+1}(X_{t_{k+1}},Q),e_{1}(X_{t_{k}}),\ldots,e_{m}(X_{t_{k}})\big{)}}{G\big{(}e_{1}(X_{t_{k}}),\ldots,e_{m}(X_{t_{k}})\big{)}}\] where \(G(x_{1},\ldots,x_{n})\) denotes the Gram determinant associated to the canonical \(\mathbb{L}^{2}\big{(}\mathbb{P}\big{)}\) inner product. Since assumption \(\boldsymbol{\mathcal{H}_{3,2}}\) entails the continuity of \(V_{k+1}(X_{t_{k+1}},\cdot)\), then owing to the continuity of the determinant, one may conclude that \(Q\in\mathcal{T}_{k+1}\mapsto\Big{|}\Big{|}\Phi_{m}(X_{t_{k}};\tilde{\theta}_{k+1,m}(Q))-V_{k+1}(X_{t_{k+1}},Q)\Big{|}\Big{|}_{2}^{2}\) is continuous as a composition of continuous functions. This completes the proof. The preceding proposition allows us to show our first convergence result stated in the following proposition. **Proposition 3.2**.: _Under assumptions \(\boldsymbol{\mathcal{H}_{1}^{LS}}\), \(\boldsymbol{\mathcal{H}_{2}^{LS}}\) and \(\boldsymbol{\mathcal{H}_{3,2}}\), we have for all \(0\leq k\leq n-1\),_ \[\lim_{m\to+\infty}\ \sup_{Q\in\mathcal{T}_{k}}\ \Big{|}\Big{|}V_{k}^{m}(X_{t_{k}},Q)-V_{k}(X_{t_{k}},Q)\Big{|}\Big{|}_{2}=0.\] Proof.: We proceed by a backward induction on \(k\). We have, almost surely, \(V_{n-1}^{m}(X_{t_{n-1}},Q)=V_{n-1}(X_{t_{n-1}},Q)\) for any \(Q\in\mathcal{T}_{n-1}\) and therefore the proposition holds true for \(k=n-1\). Let us suppose it holds for \(k+1\). For all \(Q\in\mathcal{T}_{k}\), using the inequality \(|\underset{i\in I}{\sup}\ a_{i}-\underset{i\in I}{\sup}\ b_{i}|\ \leq\ \underset{i\in I}{\sup}\ |a_{i}-b_{i}|\), we get \[\big{|}V_{k}^{m}(X_{t_{k}},Q)-V_{k}(X_{t_{k}},Q)\big{|}^{2}\leq\operatorname*{ess\,sup}_{q\in Adm(t_{k},Q)}\Big{|}\Phi_{m}(X_{t_{k}};\theta_{k+1,m}(Q+q))-\mathbb{E}(V_{k+1}(X_{t_{k+1}},Q+q)|X_{t_{k}})\Big{|}^{2},\] so that, taking expectations, \[\big{|}\big{|}V_{k}^{m}(X_{t_{k}},Q)-V_{k}(X_{t_{k}},Q)\big{|}\big{|}_{2}^{2}\leq\mathbb{E}\left(\operatorname*{ess\,sup}_{q\in Adm(t_{k},Q)}\Big{|}\Phi_{m}(X_{t_{k}};\theta_{k+1,m}(Q+q))-\mathbb{E}(V_{k+1}(X_{t_{k+1}},Q+q)|X_{t_{k}})\Big{|}^{2}\right). \tag{3.4}\] To interchange the essential supremum with the expectation, we rely on the bifurcation property. For all \(q\in Adm(t_{k},Q)\), consider \[A_{k}^{m}(Q,q):=\Big{|}\Phi_{m}(X_{t_{k}};\theta_{k+1,m}(Q+q))-\mathbb{E}(V_{k+1}(X_{t_{k+1}},Q+q)|X_{t_{k}})\Big{|}^{2}.\] Then for all \(q_{1},q_{2}\in Adm(t_{k},Q)\) define the following random variable \[q_{A}^{*}=q_{1}\cdot 1_{\{A_{k}^{m}(Q,q_{1})\geq A_{k}^{m}(Q,q_{2})\}}+q_{2}\cdot 1_{\{A_{k}^{m}(Q,q_{1})<A_{k}^{m}(Q,q_{2})\}}.
\tag{3.5}\] It follows from the definition of \(\Phi_{m}\) in (2.7) and that of the conditional expectation that \(A_{k}^{m}(Q,q)\) is \(\sigma\left(X_{t_{k}}\right)\)-measurable for all \(q\in Adm(t_{k},Q)\). Thus using (3.5) yields \(q_{A}^{*}\in Adm(t_{k},Q)\) and \(A_{k}^{m}(Q,q_{A}^{*})=\max\left(A_{k}^{m}(Q,q_{1}),A_{k}^{m}(Q,q_{2})\right)\). Therefore one may use the bifurcation property in (3.4) and we get, \[\big{|}\big{|}V_{k}^{m}(X_{t_{k}},Q)-V_{k}(X_{t_{k}},Q)\big{|}\big{|}_{2}^{2} \leq\sup_{q\in Adm(t_{k},Q)}\big{|}\big{|}\Phi_{m}(X_{t_{k}};\theta_{k+1,m}(Q+q))-\mathbb{E}(V_{k+1}(X_{t_{k+1}},Q+q)|X_{t_{k}})\big{|}\big{|}_{2}^{2}\] \[\leq 2\sup_{q\in Adm(t_{k},Q)}\big{|}\big{|}\Phi_{m}(X_{t_{k}};\theta_{k+1,m}(Q+q))-\Phi_{m}(X_{t_{k}};\tilde{\theta}_{k+1,m}(Q+q))\big{|}\big{|}_{2}^{2}\] \[+2\sup_{q\in Adm(t_{k},Q)}\big{|}\big{|}\Phi_{m}(X_{t_{k}};\tilde{\theta}_{k+1,m}(Q+q))-\mathbb{E}(V_{k+1}(X_{t_{k+1}},Q+q)|X_{t_{k}})\big{|}\big{|}_{2}^{2} \tag{3.6}\] where in the last inequality we used the Minkowski inequality. \(\tilde{\theta}_{k+1,m}(Q+q)\) solves the "theoretical" optimization problem (3.1). Note that in the latter problem, we introduced the actual (unknown) value function \(V_{k+1}\), unlike in equation (2.8). This is just a theoretical tool, as the preceding optimization problem cannot be solved since we do not know the actual value function \(V_{k+1}\). Thus taking the supremum in (3.6) yields, \[\sup_{Q\in\mathcal{T}_{k}}\big{|}\big{|}V_{k}^{m}(X_{t_{k}},Q)-V_{k}(X_{t_{k}},Q)\big{|}\big{|}_{2}^{2}\] \[\leq 2\sup_{Q\in\mathcal{T}_{k+1}}\big{|}\big{|}\Phi_{m}(X_{t_{k}};\theta_{k+1,m}(Q))-\Phi_{m}(X_{t_{k}};\tilde{\theta}_{k+1,m}(Q))\big{|}\big{|}_{2}^{2}\] \[+2\sup_{Q\in\mathcal{T}_{k+1}}\big{|}\big{|}\Phi_{m}(X_{t_{k}};\tilde{\theta}_{k+1,m}(Q))-\mathbb{E}(V_{k+1}(X_{t_{k+1}},Q)|X_{t_{k}})\big{|}\big{|}_{2}^{2}, \tag{3.7}\] where we used the fact that, for all \(Q\in\mathcal{T}_{k}\) and all \(q\in Adm(t_{k},Q)\), we have \(Q+q\in\mathcal{T}_{k+1}\). Besides, recall that \(\Phi_{m}(X_{t_{k}};\tilde{\theta}_{k+1,m}(Q))\) and \(\Phi_{m}(X_{t_{k}};\theta_{k+1,m}(Q))\) are the orthogonal projections of \(V_{k+1}(X_{t_{k+1}},Q)\) and \(V_{k+1}^{m}(X_{t_{k+1}},Q)\) on the subspace spanned by \(e^{m}(X_{t_{k}})\). Then, knowing that the orthogonal projection is \(1\)-Lipschitz, we have \[\sup_{Q\in\mathcal{T}_{k+1}}\big{|}\big{|}\Phi_{m}(X_{t_{k}};\theta_{k+1,m}(Q))-\Phi_{m}(X_{t_{k}};\tilde{\theta}_{k+1,m}(Q))\big{|}\big{|}_{2}^{2}\leq\sup_{Q\in\mathcal{T}_{k+1}}\big{|}\big{|}V_{k+1}^{m}(X_{t_{k+1}},Q)-V_{k+1}(X_{t_{k+1}},Q)\big{|}\big{|}_{2}^{2}.\] Thanks to the induction assumption, the right-hand side of the last inequality converges to \(0\) as \(m\to+\infty\), so that, \[\sup_{Q\in\mathcal{T}_{k+1}}\big{|}\big{|}\Phi_{m}(X_{t_{k}};\theta_{k+1,m}(Q))-\Phi_{m}(X_{t_{k}};\tilde{\theta}_{k+1,m}(Q))\big{|}\big{|}_{2}^{2}\ \xrightarrow[m\to+\infty]{}0. \tag{3.8}\] It remains to prove that \[\lim_{m\to+\infty}\ \sup_{Q\in\mathcal{T}_{k+1}}\Big{|}\big{|}\Phi_{m}(X_{t_{k}};\tilde{\theta}_{k+1,m}(Q))-\mathbb{E}(V_{k+1}(X_{t_{k+1}},Q)|X_{t_{k}})\big{|}\Big{|}_{2}^{2}=0. \tag{3.9}\] To achieve this, we rely on Dini's lemma, whose assumptions hold true owing to the three following facts.
#### Pointwise convergence It follows from assumption \(\boldsymbol{\mathcal{H}_{1}^{LS}}\) that, for any \(Q\in\mathcal{T}_{k+1}\), \[\lim_{m\to+\infty}\Big{|}\Big{|}\Phi_{m}(X_{t_{k}};\tilde{\theta}_{k+1,m}(Q))- \mathbb{E}(V_{k+1}(X_{t_{k+1}},Q)|X_{t_{k}})\Big{|}\Big{|}_{2}^{2}=0.\] #### Continuity The continuity of \(Q\mapsto\Big{|}\Big{|}\Phi_{m}(X_{t_{k}};\tilde{\theta}_{k+1,m}(Q))-\mathbb{E }(V_{k+1}(X_{t_{k+1}},Q)|X_{t_{k}})\Big{|}\Big{|}_{2}^{2}\) is given by Proposition 3.1 under assumptions \(\boldsymbol{\mathcal{H}_{2}^{LS}}\) and \(\boldsymbol{\mathcal{H}_{3,2}}\). #### Monotony Denote by \(F_{m}^{k}:=\text{span}\left(e_{1}(X_{t_{k}}),\ldots,e_{m}(X_{t_{k}})\right)\). Then it is straightforward that for any \(m\geq 1\), \(F_{m}^{k}\subseteq F_{m+1}^{k}\). So that, \[\Big{|}\Big{|}\Phi_{m}(X_{t_{k}};\tilde{\theta}_{k+1,m}(Q))- \mathbb{E}(V_{k+1}(X_{t_{k+1}},Q)|X_{t_{k}})\Big{|}\Big{|}_{2}^{2} =\inf_{Y\in F_{m}^{k}}\ \Big{|}\Big{|}\mathbb{E}\big{(}V_{k+1}(X_{t_{k+1}},Q)|X_{t_{k}} \big{)}-Y\Big{|}\Big{|}_{2}^{2}\] \[\geq\inf_{Y\in F_{m+1}^{k}}\ \Big{|}\Big{|}\mathbb{E}\big{(}V_{k+1}(X_{t _{k+1}},Q)|X_{t_{k}}\big{)}-Y\Big{|}\Big{|}_{2}^{2}\] \[=\Big{|}\Big{|}\Phi_{m+1}(X_{t_{k}};\tilde{\theta}_{k+1,m+1}(Q))- \mathbb{E}(V_{k+1}(X_{t_{k+1}},Q)|X_{t_{k}})\Big{|}\Big{|}_{2}^{2}.\] Thus the sequence, \[\left(\Big{|}\Big{|}\Phi_{m}(X_{t_{k}};\tilde{\theta}_{k+1,m}(Q))-\mathbb{E}( V_{k+1}(X_{t_{k+1}},Q)|X_{t_{k}})\Big{|}\Big{|}_{2}^{2}\right)_{m\geq 1}\] is non-increasing. From the three preceding properties, one may apply Dini lemma yielding the desired result (3.9). Finally, combining (3.8) and (3.9) in (3.7) yields, \[\lim_{m\to+\infty}\ \sup_{Q\in\mathcal{T}_{k}}\big{|}\big{|}V_{k}^{m}(X_{t_{k}},Q)-V_{k}(X_{t_{k}},Q)\big{|}\big{|}_{2}^{2}=0.\] This completes the proof. #### 3.1.2 Neural network approximation We now consider the approximation of the continuation value when using neural network. We prove a similar result as in Proposition 3.2, when the number of units per hidden layer increases. To achieve this, we need the following assumptions. \(\boldsymbol{\mathcal{H}_{1}^{\mathcal{NN}}}\): For every \(m\geq 2\), there exists \(q\geq 1\) such that for every \(\theta\in\Theta_{m}\), \(\Phi_{m}(\cdot;\theta)\) has \(q\)-polynomial growth uniformly in \(\theta\). \(\boldsymbol{\mathcal{H}_{2}^{\mathcal{NN}}}\): For any \(0\leq k\leq n-1\), a.s. the random functions \(\theta\in\Theta_{m}\mapsto\Phi_{m}\big{(}X_{t_{k}};\theta\big{)}\) are continuous. Owing to the Heine theorem, the compactness of \(\Theta_{m}\) yields the uniform continuity. **Proposition 3.3**.: _Assume \(\boldsymbol{\mathcal{H}_{1}^{\mathcal{NN}}}\), \(\boldsymbol{\mathcal{H}_{2}^{\mathcal{NN}}}\) and \(\boldsymbol{\mathcal{H}_{3,2q}}\) (with \(q\) involved in assumption \(\boldsymbol{\mathcal{H}_{1}^{\mathcal{NN}}}\)) hold true. Then, for all \(0\leq k\leq n-1\),_ \[\lim_{m\to+\infty}\;\sup_{Q\in\mathcal{T}_{k}}\;\Big{|}\Big{|}V_{k}^{m}(X_{t_ {k}},Q)-V_{k}(X_{t_{k}},Q)\Big{|}\Big{|}_{2}=0.\] Proof.: We proceed by a backward induction on \(k\). For \(k=n-1\), we have \(V_{n-1}^{m}(X_{t_{n-1}},Q)=V_{n-1}(X_{t_{n-1}},Q)\) and therefore the proposition holds true. Let us suppose it holds for \(k+1\). 
In the spirit of the beginning of the proof of Proposition 3.2, we have for all \(Q\in\mathcal{T}_{k}\) using the inequality: \(\left|\sup_{i\in I}\,a_{i}-\sup_{i\in I}\,b_{i}\right|\;\leq\;\sup_{i\in I} \,|a_{i}-b_{i}|\) and triangle inequality, \[\big{|}\big{|}V_{k}^{m}(X_{t_{k}},Q)-V_{k}(X_{t_{k}},Q)\big{|} \big{|}_{2}^{2}\leq\mathbb{E}\left(\operatorname*{ess\,sup}_{q\in Adm(t_{k},Q )}\big{|}\Phi_{m}\big{(}X_{t_{k}};\theta_{k+1,m}(Q+q)\big{)}-\mathbb{E}\big{(} V_{k+1}(X_{t_{k+1}},Q+q)|X_{t_{k}}\big{)}\big{|}^{2}\right). \tag{3.10}\] Then we aim to apply the bifurcation property. For all \(q\in Adm(t_{k},Q)\), consider, \[A_{k}^{m}(Q,q)=\Big{|}\Phi_{m}\big{(}X_{t_{k}};\theta_{k+1,m}(Q+q)\big{)}- \mathbb{E}\big{(}V_{k+1}(X_{t_{k+1}},Q+q)|X_{t_{k}}\big{)}\Big{|}^{2}.\] Then for all \(q_{1},q_{2}\in Adm(t_{k},Q)\) define \[q_{A}^{*}=q_{1}\cdot 1_{\{A_{k}^{m}(Q,q_{1})\geq A_{k}^{m}(Q,q_{2})\}}+q_{2} \cdot 1_{\{A_{k}^{m}(Q,q_{1})<A_{k}^{m}(Q,q_{2})\}}.\] Using the definition of the conditional expectation and since activation functions are continuous (assumption \(\boldsymbol{\mathcal{H}_{2}^{\mathcal{NN}}}\)), \(A_{k}^{m}(Q,q)\) is \(\sigma\,(X_{t_{k}})\)-measurable for all \(q\in Adm(t_{k},Q)\). Moreover, \(q_{A}^{*}\in Adm(t_{k},Q)\) and \(A_{k}^{m}(Q,q_{A}^{*})=\max\big{(}A_{k}^{m}(Q,q_{1}),A_{k}^{m}(Q,q_{2})\big{)}\). Thus using the bifurcation property and taking the supremum in (3.10) yields, \[\sup_{Q\in\mathcal{T}_{k}}\big{|}\big{|}V_{k}^{m}(X_{t_{k}},Q)-V_{k}(X_{t_{k}},Q)\big{|}\big{|}_{2}^{2}\leq\sup_{Q\in\mathcal{T}_{k+1}}\big{|}\big{|}\Phi_{m }\big{(}X_{t_{k}};\theta_{k+1,m}(Q)\big{)}-\mathbb{E}\big{(}V_{k+1}(X_{t_{k+ 1}},Q)|X_{t_{k}}\big{)}\big{|}\big{|}_{2}^{2}.\] Using Minkowski inequality and the inequality: \((a+b)^{2}\leq 2(a^{2}+b^{2})\) yields, \[\sup_{Q\in\mathcal{T}_{k}}\big{|}\big{|}V_{k}^{m}(X_{t_{k}},Q)-V_{ k}(X_{t_{k}},Q)\big{|}\big{|}_{2}^{2} \leq 2\sup_{Q\in\mathcal{T}_{k+1}}\big{|}\big{|}\mathbb{E}\big{(}V_{k +1}^{m}(X_{t_{k+1}},Q)|X_{t_{k}}\big{)}-\mathbb{E}\big{(}V_{k+1}(X_{t_{k+1}},Q)| X_{t_{k}}\big{)}\big{|}\big{|}_{2}^{2}\] \[+2\sup_{Q\in\mathcal{T}_{k+1}}\big{|}\big{|}\Phi_{m}\big{(}X_{t_{k} };\theta_{k+1,m}(Q)\big{)}-\mathbb{E}\big{(}V_{k+1}^{m}(X_{t_{k+1}},Q)|X_{t_{k }}\big{)}\big{|}\big{|}_{2}^{2}.\] By the induction assumption, the first term in the right hand side converges to \(0\) as \(m\to+\infty\). Let us consider the second term. Since \(\theta_{k+1,m}(Q)\) solves (2.18), we have \[\sup_{Q\in\mathcal{T}_{k+1}}\bigl{|}\bigl{|}\Phi_{m}\bigl{(}X_{t_{k}};\theta_{k+ 1,m}(Q)\bigr{)}-\mathbb{E}\bigl{(}V_{k+1}^{m}(X_{t_{k+1}},Q)|X_{t_{k}}\bigr{)} \bigr{|}\bigr{|}_{2}^{2}\leq\sup_{Q\in\mathcal{T}_{k+1}}\bigl{|}\bigl{|}\Phi_{m }\bigl{(}X_{t_{k}};\tilde{\theta}_{k+1,m}(Q)\bigr{)}-\mathbb{E}\bigl{(}V_{k+1}^ {m}(X_{t_{k+1}},Q)|X_{t_{k}}\bigr{)}\bigr{|}\bigr{|}_{2}^{2}\] where \(\tilde{\theta}_{k+1,m}(Q)\) solves the "theoretical" optimization problem, \[\inf_{\theta\in\Theta_{m}}\left|\left|V_{k+1}\bigl{(}X_{t_{k+1}},Q\bigr{)}- \Phi_{m}\bigl{(}X_{t_{k}};\theta\bigr{)}\right|\right|_{2}\] with \(\Theta_{m}\) defined in (2.15). 
Then it follows from Minskorki inequality that \[\sup_{Q\in\mathcal{T}_{k+1}}\bigl{|}\bigl{|}\Phi_{m}\bigl{(}X_{t _{k}};\tilde{\theta}_{k+1,m}(Q)\bigr{)}-\mathbb{E}\bigl{(}V_{k+1}^{m}(X_{t_{k +1}},Q)|X_{t_{k}}\bigr{)}\bigr{|}\bigr{|}_{2}^{2} \leq\sup_{Q\in\mathcal{T}_{k+1}}\bigl{|}\bigl{|}\mathbb{E}\bigl{(}V _{k+1}(X_{t_{k+1}},Q)|X_{t_{k}}\bigr{)}-\mathbb{E}\bigl{(}V_{k+1}^{m}(X_{t_{k +1}},Q)|X_{t_{k}}\bigr{)}\bigr{|}\bigr{|}_{2}^{2}\] \[+\sup_{Q\in\mathcal{T}_{k+1}}\bigl{|}\bigl{|}\Phi_{m}\bigl{(}X_{t _{k}};\tilde{\theta}_{k+1,m}(Q)\bigr{)}-\mathbb{E}\bigl{(}V_{k+1}(X_{t_{k+1}}, Q)|X_{t_{k}}\bigr{)}\bigr{|}\bigr{|}_{2}^{2}.\] Once again, by the induction assumption, the first term in the right hand side converges to \(0\) as \(m\to+\infty\). Moreover, thanks to the universal approximation theorem, for all \(Q\in\mathcal{T}_{k+1}\) \[\bigl{|}\bigl{|}\Phi_{m}\bigl{(}X_{t_{k}};\tilde{\theta}_{k+1,m}(Q)\bigr{)}- \mathbb{E}\bigl{(}V_{k+1}(X_{t_{k+1}},Q)|X_{t_{k}}\bigr{)}\bigr{|}\bigr{|}_{2 }^{2}\xrightarrow[m\to+\infty]{}0. \tag{3.11}\] Besides notice that, \[\bigl{|}\bigl{|}\Phi_{m}\bigl{(}X_{t_{k}};\tilde{\theta}_{k+1,m}(Q)\bigr{)}- \mathbb{E}\bigl{(}V_{k+1}(X_{t_{k+1}},Q)|X_{t_{k}}\bigr{)}\bigr{|}\bigr{|}_{2 }^{2}=\inf_{\Phi\in\mathcal{N}_{m}}\bigl{|}\bigl{|}\Phi\bigl{(}X_{t_{k}}\bigr{)} -\mathbb{E}\bigl{(}V_{k+1}(X_{t_{k+1}},Q)|X_{t_{k}}\bigr{)}\bigr{|}\bigr{|}_{2 }^{2} \tag{3.12}\] where \(\mathcal{N}\mathcal{N}_{m}\) is defined in (2.16). But since the sequence \(\bigl{(}\Theta_{m}\bigr{)}_{m}\) is non-decreasing (in the sense that \(\Theta_{m}\subseteq\Theta_{m+1}\)), then \(\bigl{(}\mathcal{N}\mathcal{N}_{m}\bigr{)}_{m}\) is too. So that by the previous equality (3.12), \[\Bigl{(}\bigl{|}\bigl{|}\Phi_{m}\bigl{(}X_{t_{k}};\tilde{\theta}_{k+1,m}(Q) \bigr{)}-\mathbb{E}\bigl{(}V_{k+1}(X_{t_{k+1}},Q)|X_{t_{k}}\bigr{)}\bigr{|} \bigr{|}_{2}^{2}\Bigr{)}_{m\geq 2}\] is a non-increasing sequence. Thus keeping in mind equation (3.12), if the function, \[H_{k}\colon\bigl{(}\mathcal{N}\mathcal{N}_{m},\;|\cdot|_{\sup} \bigr{)}\times\bigl{(}\mathcal{T}_{k+1},\;|\cdot|\bigr{)} \to\mathbb{R}\] \[(\Phi,Q) \longmapsto||L_{k}(\Phi,Q)||_{2}^{2}:=\bigl{|}\bigl{|}\Phi\bigl{(} X_{t_{k}}\bigr{)}-\mathbb{E}\bigl{(}V_{k+1}(X_{t_{k+1}},Q)|X_{t_{k}}\bigr{)} \bigr{|}\bigr{|}_{2}^{2}\] is continuous, then thanks to Theorem A.2 (noticing that for all \(m\geq 2\), \(\mathcal{N}\mathcal{N}_{m}\) is a compact set), the function \[Q\mapsto\bigl{|}\bigl{|}\Phi_{m}\bigl{(}X_{t_{k}};\tilde{\theta}_{k+1,m}(Q) \bigr{)}-\mathbb{E}\bigl{(}V_{k+1}(X_{t_{k+1}},Q)|X_{t_{k}}\bigr{)}\bigr{|} \bigr{|}_{2}^{2}\] will be continuous on the compact set \(\mathcal{T}_{k+1}\). Thus one may use Dini lemma and conclude that the pointwise convergence in (3.11) is in fact uniform. Which will completes the proof. Note that we have already shown that \(Q\mapsto\mathbb{E}\bigl{(}V_{k+1}(X_{t_{k+1}},Q)|X_{t_{k}}\bigr{)}\) is almost surely continuous under assumption \(\boldsymbol{\mathcal{H}_{3,2q}}\). 
Moreover using the classic inequality: \((a+b)^{2}\leq 2(a^{2}+b^{2})\) and then conditional Jensen inequality \[\big{|}L_{k}(\Phi,Q)|^{2} \leq 2\cdot\big{|}\Phi\big{(}X_{t_{k}}\big{)}\big{|}^{2}+2\cdot\mathbb{ E}\big{(}V_{k+1}(X_{t_{k+1}},Q)^{2}|X_{t_{k}}\big{)}\] \[\leq 2\cdot\big{|}\Phi\big{(}X_{t_{k}}\big{)}\big{|}^{2}+2\cdot \mathbb{E}\big{(}G_{k+1}^{2}|X_{t_{k}}\big{)}\in\mathbb{L}_{\mathbb{R}}^{1} \big{(}\mathbb{P}\big{)},\] where the existence of \(G_{k+1}\in\mathbb{L}_{\mathbb{R}}^{2}(\mathbb{P})\) (independent of \(Q\)) follows from Remark 2.1 and is implied by assumption \(\mathbf{\mathcal{H}_{3,2q}}\). Note that the integrability of \(\big{|}\Phi\big{(}X_{t_{k}}\big{)}\big{|}^{2}\) follows from assumptions \(\mathbf{\mathcal{H}_{1}^{\mathcal{NN}}}\) and \(\mathbf{\mathcal{H}_{3,2q}}\). This implies that \(||L_{k}(\Phi,\cdot)||_{2}^{2}\) is continuous. Besides, for some sequence \((\Phi_{n})_{n}\) of \(\mathcal{NN}_{m}\) such that \(\Phi_{n}\xrightarrow[n\to+\infty]{\cdot|_{\text{sup}}}\Phi\), it follows from the Lebesgue's dominated convergence theorem (enabled by assumptions \(\mathbf{\mathcal{H}_{1}^{\mathcal{NN}}}\) and \(\mathbf{\mathcal{H}_{3,2q}}\)) that \(||L_{k}(\Phi_{n},Q)||_{2}^{2}\xrightarrow[n\to+\infty]{}||L_{k}(\Phi,Q)||_{2}^ {2}\). Which shows that \(||L_{k}(\cdot,Q)||_{2}^{2}\) is continuous. Therefore the function \(H_{k}\) is continuous. And as already mentioned this completes the proof. **Remark 3.4** (Assumptions \(\mathbf{\mathcal{H}_{1}^{\mathcal{NN}}}\) and \(\mathbf{\mathcal{H}_{2}^{\mathcal{NN}}}\)).: _In the previous proposition, we made the assumption that the neural networks are continuous and with polynomial growth. This assumption is clearly satisfied when using classic activation functions such as the ReLU function \(x\in\mathbb{R}\mapsto\max(x,0)\) and Sigmoid function \(x\in\mathbb{R}\mapsto 1/(1+e^{-x})\)._ ### Convergence of Monte Carlo approximation From now on, we assume a fixed positive integer \(m\) and focus is on the convergence of the value function that arises from the second approximation (2.11) or (2.19). Unlike the preceding section and for technical convenience, we restrict our analysis of the neural network approximation to the bang-bang setting. However, the least squares regression will still be examined in a general context. #### 3.2.1 Least squares regression We establish a convergence result under the following Hilbert assumption. \(\mathbf{\mathcal{H}_{5}^{LS}}\): For all \(k=0,\ldots,n-1\) the sequence \(\left(e_{i}\left(X_{t_{k}}\right)\right)_{i\geq 1}\) is a Hilbert basis of \(\mathbb{L}^{2}\big{(}\sigma(X_{t_{k}})\big{)}\). It is worth noting that this assumption is a special case of assumptions \(\mathbf{\mathcal{H}_{1}^{LS}}\) and \(\mathbf{\mathcal{H}_{2}^{LS}}\) with an orthonormality assumption on \(e^{m}\big{(}X_{t_{k}}\big{)}\). Furthermore, in the field of mathematical finance, the underlying asset's diffusion is often assumed to have a Gaussian structure. However, it is well known that the normalized Hermite polynomials \(\big{\{}\frac{H_{k}(x)}{\sqrt{k!}},k\geq 0\big{\}}\) serve as a Hilbert basis for \(\mathbb{L}^{2}(\mathbb{R},\mu)\), the space of square-integrable functions with respect to the Gaussian measure \(\mu\). 
The Hermite polynomials \(\big{\{}H_{k}(x),k\geq 0\big{\}}\) are defined as follows: \[H_{k}(x)=(-1)^{k}e^{x^{2}}\frac{d^{k}}{dx^{k}}\big{[}e^{-x^{2}}\big{]},\] or recursively by \[H_{k+1}(x)=2x\cdot H_{k}(x)-2k\cdot H_{k-1}(x)\quad\text{with}\quad H_{0}(x)=1,\ \ H_{1}(x)=2x.\] For a multidimensional setting, Hermite polynomials are obtained as the product of one-dimensional Hermite polynomials. Finally, note that assumptions \(\mathbf{\mathcal{H}_{5}^{LS}}\) entail that \(A_{m}^{k}=A_{m,N}^{k}=I_{m}\). The main result of this section aim at proving that the second approximation \(V_{k}^{m,N}\) of the swing value function converges towards the first approximation \(V_{k}^{m}\) as the Monte Carlo sample size \(N\) increases to \(+\infty\) and with a rate of convergence of order \(\mathcal{O}\big{(}\frac{1}{\sqrt{N}}\big{)}\). To achieve this we rely on the following lemma which concern general Monte Carlo rate of convergence. **Lemma 3.5** (Monte Carlo \(\mathbb{L}^{r}\big{(}\mathbb{P}\big{)}\)-rate of convergence).: _Consider \(X_{1},\ldots,X_{N}\) independent and identically distributed random variables with order \(p\) (\(p\geq 2\)) finite moment (with \(\mu=\mathbb{E}(X_{1})\)). Then, there exists a positive constant \(B_{p}\) (only depending on the order \(p\)) such that_ \[\Big{|}\Big{|}\frac{1}{N}\sum_{i=1}^{N}X_{i}-\mu\Big{|}\Big{|}_{p}\leq B_{p} \frac{2^{\frac{p-1}{p}}\left(\mathbb{E}(|X|^{p})+|\mu|^{p}\right)^{\frac{1}{p} }}{\sqrt{N}}.\] Proof.: It follows from Marcinkiewicz-Zygmund inequality that there exists a positive constant \(A_{p}\) (only depends on \(p\)) such that \[\Big{|}\Big{|}\frac{1}{N}\sum_{i=1}^{N}X_{i}-\mu\Big{|}\Big{|}_{p }^{p}=\mathbb{E}\left(\Big{(}\sum_{i=1}^{N}\frac{X_{i}-\mu}{N}\Big{)}^{p} \right)\leq A_{p}\cdot\mathbb{E}\left(\Big{(}\frac{1}{N^{2}}\sum_{i=1}^{N}(X_ {i}-\mu)^{2}\Big{)}^{p/2}\right)\] \[=\frac{A_{p}}{N^{\frac{p}{2}}}\cdot\mathbb{E}\left(\Big{(}\frac{ 1}{N}\sum_{i=1}^{N}(X_{i}-\mu)^{2}\Big{)}^{p/2}\right).\] Using the convexity of the function \(x\in\mathbb{R}_{+}\mapsto x^{p/2}\) yields, \[\Big{(}\frac{1}{N}\sum_{i=1}^{N}(X_{i}-\mu)^{2}\Big{)}^{p/2}\leq\frac{1}{N} \sum_{i=1}^{N}(X_{i}-\mu)^{p}.\] Thus taking the expectation and using the inequality, \((a+b)^{p}\leq 2^{p-1}(a^{p}+b^{p})\) yields, \[\left|\left|\frac{1}{N}\sum_{i=1}^{N}X_{i}-\mu\right|\right|_{p}^{p}\leq\frac{ A_{p}}{N^{\frac{p}{2}}}\cdot\mathbb{E}\Big{(}(X-\mu)^{p}\Big{)}\leq A_{p} \cdot\frac{2^{p-1}\Big{(}\mathbb{E}(|X|^{p})+|\mu|^{p}\Big{)}}{N^{\frac{p}{2} }}.\] This completes the proof. In the following proposition, we show that using Hilbert basis as a regression basis allows to achieve a convergence with a rate of order \(\mathcal{O}(\frac{1}{\sqrt{N}})\). **Proposition 3.6**.: _Under assumptions \(\boldsymbol{\mathcal{H}_{3,\infty}}\), \(\boldsymbol{\mathcal{H}_{4,\infty}^{LS}}\) and \(\boldsymbol{\mathcal{H}_{5}^{LS}}\), for all \(k=0,\ldots,n-1\) and for any \(s>1\), we have_ \[\sup_{Q\;\in\;\mathcal{T}_{k}}\;\Big{|}\Big{|}V_{k}^{m,N}\left(X_{t_{k}},Q \right)-V_{k}^{m}\left(X_{t_{k}},Q\right)\Big{|}\Big{|}_{s}=\mathcal{O}\left( \frac{1}{\sqrt{N}}\right)\quad\text{ as }\,N\to+\infty.\] Proof.: We prove this proposition using a backward induction on \(k\). Since \(V_{n-1}^{m,N}\left(X_{t_{n-1}},\cdot\right)=V_{n-1}^{m}\left(X_{t_{n-1}},\cdot\right)\) on \(\mathcal{T}_{n-1}\), then the proposition holds for \(k=n-1\). Assume now that the proposition holds for \(k+1\). 
Using the inequality, \(|\sup_{i\in I}a_{i}-\sup_{i\in I}\,b_{i}|\;\leq\;\sup_{i\in I}\,|a_{i}-b_{i}|\) and then Cauchy-Schwartz' one, we get, \[\big{|}V_{k}^{m,N}\left(X_{t_{k}},Q\right)-V_{k}^{m}(X_{t_{k}},Q )\big{|} \leq\operatorname*{ess\,sup}_{q\in Adm\left(t_{k},Q\right)}\big{|} \langle\theta_{k+1,m,N}(Q+q)-\theta_{k+1,m}(Q+q),e^{m}(X_{t_{k}})\rangle\big{|}\] \[\leq\big{|}e^{m}(X_{t_{k}})\big{|}\cdot\operatorname*{ess\,sup}_{q \in Adm\left(t_{k},Q\right)}\big{|}\theta_{k+1,m,N}(Q+q)-\theta_{k+1,m}(Q+q) \big{|}\] \[\leq\big{|}e^{m}(X_{t_{k}})\big{|}\cdot\operatorname*{ess\,sup}_{q \in\,\mathcal{U}_{k}(Q)}\big{|}\theta_{k+1,m,N}(Q+q)-\theta_{k+1,m}(Q+q)\big{|},\] where \(\mathcal{U}_{k}(Q)\) is the set of all \(\mathcal{F}_{t_{k+1}}^{X}\)-measurable random variables lying within \(\mathcal{I}_{k+1}\big{(}Q\big{)}\) (see (1.8)). The last inequality is due to the fact that \(\mathcal{F}_{t_{k}}^{X}\subset\mathcal{F}_{t_{k+1}}^{X}\). Then for some constants \(b,c>1\) such that \(\frac{1}{b}+\frac{1}{c}=1\), it follows from Holder inequality that, \[\Big{|}\Big{|}V_{k}^{m,N}(X_{t_{k}},Q)-V_{k}^{m}(X_{t_{k}},Q) \Big{|}\Big{|}_{s}\leq\Big{|}\Big{|}\big{|}e^{m}(X_{t_{k}})\big{|}\Big{|}\Big{|} _{sb}\cdot\Big{|}\Big{|}\underset{q\in\mathcal{U}_{k}(Q)}{\operatorname{ess \,sup}}\,\,\big{|}\theta_{k+1,m,N}(Q+q)-\theta_{k+1,m}(Q+q)\big{|}\Big{|}\Big{|} \Big{|}_{sc}. \tag{3.13}\] To interchange the expectation and the essential supremum, we rely on the bifurcation property. Let \(q_{1},q_{2}\in\mathcal{U}_{k}(Q)\) and denote by \[q^{*}=q_{1}\cdot 1_{\{B_{k}(Q,q_{1})\geq B_{k}(Q,q_{2})\}}+q_{2}\cdot 1_{\{B_{ k}(Q,q_{1})<B_{k}(Q,q_{2})\}}\] where \(B_{k}(Q,q_{i})=\big{|}\theta_{k+1,m,N}(Q+q_{i})-\theta_{k+1,m}(Q+q_{i})\big{|} ^{sc}\) for \(i\in\{1,2\}\). One can easily check that for all \(i\in\{1,2\}\), \(B_{k}(Q,q_{i})\) is \(\mathcal{F}_{t_{k+1}}^{X}\)-measurable so that \(q^{*}\in\mathcal{U}_{k}(Q)\). We also have \(B_{k}(Q,q^{*})=\max\left(B_{k}(Q,q_{1}),B_{k}(Q,q_{2})\right)\). Thus one may use the bifurcation property in (3.13), we get, \[\Big{|}\Big{|}V_{k}^{m,N}(X_{t_{k}},Q)-V_{k}^{m}(X_{t_{k}},Q) \Big{|}\Big{|}_{s} \leq\Big{|}\Big{|}\big{|}e^{m}(X_{t_{k}})\big{|}\Big{|}\Big{|}_{sb }\cdot\underset{q\in\mathcal{U}_{k}(Q)}{\sup}\,\,\Big{|}\Big{|}\big{|}\theta_ {k+1,m,N}(Q+q)-\theta_{k+1,m}(Q+q)\big{|}\Big{|}\Big{|}_{sc}\] \[\leq\Big{|}\Big{|}e^{m}(X_{t_{k}})\big{|}\Big{|}\Big{|}_{sb}\cdot \underset{Q\in\mathcal{T}_{k+1}}{\sup}\,\,\Big{|}\Big{|}\big{|}\theta_{k+1,m,N }(Q)-\theta_{k+1,m}(Q)\big{|}\Big{|}\Big{|}_{sc}. \tag{3.14}\] But for any \(Q\in\mathcal{T}_{k+1}\), it follows from Minkowski's inequality that, \[\Big{|}\Big{|}\big{|}\theta_{k+1,m,N}(Q)-\theta_{k+1,m}(Q) \big{|}\Big{|}\Big{|}_{sc} =\left|\Bigg{|}\frac{1}{N}\sum_{p=1}^{N}e^{m}(X_{t_{k}}^{[p]}) \cdot V_{k+1}^{m,N}(X_{t_{k+1}}^{[p]},Q)-\mathbb{E}\big{(}e^{m}(X_{t_{k}})V_{k +1}^{m}(X_{t_{k+1}},Q)\big{)}\big{|}\Bigg{|}\right|_{sc}\] \[\leq\left|\Bigg{|}\frac{1}{N}\sum_{p=1}^{N}e^{m}(X_{t_{k}}^{[p]}) \cdot\Big{(}V_{k+1}^{m,N}(X_{t_{k+1}}^{[p]},Q)-V_{k+1}^{m}(X_{t_{k+1}}^{[p]}, Q)\Big{)}\,\Bigg{|}\Bigg{|}\right|_{sc}\] \[+\left|\Bigg{|}\frac{1}{N}\sum_{p=1}^{N}e^{m}(X_{t_{k}}^{[p]})V_{k +1}^{m}(X_{t_{k+1}}^{[p]},Q)-\mathbb{E}\left(e^{m}(X_{t_{k}})V_{k+1}^{m}(X_{t_ {k+1}},Q)\right)\Bigg{|}\Bigg{|}\right|_{sc},\] where the last inequality comes from the fact that, for all \(p\geq 1\), \(\big{(}X_{t_{k}}^{[p]},X_{t_{k+1}}^{[p]}\big{)}\) has the same distribution with \(\big{(}X_{t_{k}},X_{t_{k+1}}\big{)}\). 
Therefore, for some constants \(u,v>1\) such that \(\frac{1}{u}+\frac{1}{v}=1\), it follows from Holder inequality, \[\Big{|}\Big{|}\big{|}\theta_{k+1,m,N}(Q)-\theta_{k+1,m}(Q)\big{|} \Big{|}\Big{|}_{sc} \leq\Big{|}\Big{|}\big{|}e^{m}(X_{t_{k}})\big{|}\Big{|}\Big{|}_{ scu}\cdot\Big{|}\Big{|}V_{k+1}^{m,N}(X_{t_{k+1}},Q)-V_{k+1}^{m}(X_{t_{k+1}},Q) \Big{|}\Big{|}\Big{|}_{scv}\] \[+\left|\Bigg{|}\frac{1}{N}\sum_{p=1}^{N}e^{m}(X_{t_{k}}^{[p]})V_{k +1}^{m}(X_{t_{k+1}}^{[p]},Q)-\mathbb{E}\left(e^{m}(X_{t_{k}})V_{k+1}^{m}(X_{t_ {k+1}},Q)\right)\Big{|}\Bigg{|}\right|_{sc}.\] Taking the supremum in the previous inequality and plugging it into equation (3.14) yields, \[\sup_{Q\in\mathcal{T}_{k}}\left|\left|V_{k}^{m,N}(X_{t_{k}},Q)-V_{k}^ {m}(X_{t_{k}},Q)\right|\right|_{s}\] \[\leq\left|\left|\left|e^{m}(X_{t_{k}})\right|\right|\right|_{sb} \cdot\left|\left|\left|e^{m}(X_{t_{k}})\right|\right|\right|_{scu}\cdot\sup_{Q \in\mathcal{T}_{k+1}}\left|\left|V_{k+1}^{m,N}(X_{t_{k+1}},Q)-V_{k+1}^{m}(X_{t _{k+1}},Q)\right|\right|_{scv}\] \[+\left|\left|\left|e^{m}(X_{t_{k}})\right|\right|\right|_{sb} \cdot\sup_{Q\in\mathcal{T}_{k+1}}\left|\left|\left|\frac{1}{N}\sum_{p=1}^{N}e^{ m}(X_{t_{k}}^{[p]})V_{k+1}^{m}(X_{t_{k+1}}^{[p]},Q)-\mathbb{E}\left(e^{m}(X_{t_{k}} )V_{k+1}^{m}(X_{t_{k+1}},Q)\right)\right|\right|_{sc}.\] Under assumption \(\boldsymbol{\mathcal{H}_{4,r}^{LS}}\) and using induction assumption, the first term in the sum of the right hand side converges to \(0\) as \(N\to+\infty\) with a rate of order \(\mathcal{O}(\frac{1}{\sqrt{N}})\). Once again, by assumption \(\boldsymbol{\mathcal{H}_{4,\infty}^{LS}}\), it remains to prove that it is also the case for the second term. But we have, \[C_{N}(Q) :=\left|\left|\left|\frac{1}{N}\sum_{p=1}^{N}e^{m}(X_{t_{k}}^{[p] })V_{k+1}^{m}(X_{t_{k+1}}^{[p]},Q)-\mathbb{E}\left(e^{m}(X_{t_{k}})V_{k+1}^{m }(X_{t_{k+1}},Q)\right)\right|\right|_{sc}\] \[=\left|\left|\sum_{j=1}^{m}\left(\frac{1}{N}\sum_{p=1}^{N}e_{j}(X _{t_{k}}^{[p]})V_{k+1}^{m}(X_{t_{k+1}}^{[p]},Q)-\mathbb{E}\left(e_{j}(X_{t_{k} })V_{k+1}^{m}(X_{t_{k+1}},Q)\right)\right)^{2}\right|\right|_{\frac{sc}{2}}^{ 1}\] \[\leq\sum_{j=1}^{m}\left|\left|\frac{1}{N}\sum_{p=1}^{N}e_{j}(X_{t _{k}}^{[p]})V_{k+1}^{m}(X_{t_{k+1}}^{[p]},Q)-\mathbb{E}\big{(}e_{j}(X_{t_{k}} )V_{k+1}^{m}(X_{t_{k+1}},Q)\big{)}\right|\right|_{sc}\] \[\leq\frac{A_{ac}}{\sqrt{N}}\cdot\sum_{j=1}^{m}\Big{\{}\mathbb{E} \big{(}|e_{j}(X_{t_{k}})V_{k+1}^{m}(X_{t_{k+1}},Q)|^{sc}\big{)}+\Big{|} \mathbb{E}\big{(}e_{j}(X_{t_{k}})V_{k+1}^{m}(X_{t_{k+1}},Q)\big{)}\Big{|}^{sc }\Big{\}},\] where the second-last inequality comes from Minkowski inequality and the inequality, \(\sqrt{x+y}\leq\sqrt{x}+\sqrt{y}\) for all \(x,y\geq 0\). The last inequality is obtained using Lemma 3.5 (with a positive constant \(A_{ac}\) only depends on the order \(a\) and \(c\)). But using the continuity (which holds as noticed in Remark 2.1) of both functions \(Q\mapsto\mathbb{E}\Big{(}\big{|}e_{j}(X_{t_{k}})V_{k+1}^{m}(X_{t_{k+1}},Q) \big{|}^{sc}\Big{)}\) and \(Q\mapsto\big{|}\mathbb{E}(e_{j}(X_{t_{k}})V_{k+1}^{m}(X_{t_{k+1}},Q))\big{|}^{ sc}\) on the compact set \(\mathcal{T}_{k+1}\) one may deduce that, as \(N\to+\infty\), \[\sup_{Q\in\mathcal{T}_{k+1}}C_{N}(Q)=\mathcal{O}\left(\frac{1}{\sqrt{N}} \right).\] This completes the proof. **Remark 3.7** (Almost surely convergence).: _It is worth noting that it is difficult to obtain an almost surely convergence result without further assumptions (for example boundedness assumption) of the regression functions. 
The preceding proposition is widely based on Holder inequality emphasizing on why we have chosen the \(\mathbb{L}^{s}\big{(}\mathbb{P}\big{)}\)-norm. However, in the neural network analysis that follows, we prove an almost surely convergence result._ #### 3.2.2 Neural network approximation We consider the discrete setting with integer volume constraints with a state of attainable cumulative consumptions given by (1.9). Results in this section will be mainly based on Lemmas 3.8 and 3.9 stated below. Let \((f_{n})_{n}\) be a sequence of real functions defined on a compact set \(K\subset\mathbb{R}^{d}\). Define, \[v_{n}=\inf_{x\in K}\,f_{n}(x)\quad\text{ and }\quad x_{n}\in\operatorname*{arg\,inf}_{x \in K}f_{n}(x).\] Then, we have the following two Lemmas. **Lemma 3.8** (Convergence of minimizers).: _Assume that the sequence \((f_{n})_{n}\) converges uniformly on \(K\) to a continuous function \(f\). Let \(v^{*}=\inf_{x\in K}\,f_{n}(x)\) and \(\mathcal{S}^{*}=\operatorname*{arg\,inf}_{x\in K}f(x)\). Then \(v_{n}\to v^{*}\) and the distance \(d(x_{n},\mathcal{S}^{*})\) between the minimizer \(x_{n}\) and the set \(\mathcal{S}^{*}\) converges to \(0\) as \(n\to+\infty\)._ **Lemma 3.9** (Uniform law of large numbers).: _Let \((\xi_{i})_{i\geq 1}\) be a sequence of i.i.d. \(\mathbb{R}^{m}\)-valued random vectors and \(h:\mathbb{R}^{d}\times\mathbb{R}^{m}\to\mathbb{R}\) a measurable function. Assume that,_ * _a.s.,_ \(\theta\in\mathbb{R}^{d}\mapsto h(\theta,\xi_{1})\) _is continuous,_ * _For all_ \(C>0\)_,_ \(\mathbb{E}\Big{(}\sup_{|\theta|\leq C}\,\big{|}h(\theta,\xi_{1})\big{|}\Big{)} <+\infty\)_._ _Then, a.s. \(\theta\in\mathbb{R}^{d}\mapsto\frac{1}{N}\sum_{i=1}^{N}h(\theta,\xi_{i})\) converges locally uniformly to the continuous function \(\theta\in\mathbb{R}^{d}\mapsto\mathbb{E}\big{(}h(\theta,\xi_{1})\big{)}\), i.e._ \[\lim_{N\to+\infty}\,\sup_{|\theta|\leq C}\,\Big{|}\frac{1}{N}\sum_{i=1}^{N}h( \theta,\xi_{i})-\mathbb{E}\big{(}h(\theta,\xi_{1})\big{)}\Big{|}=0\,\text{ a.s.}\] Combining the two preceding lemmas is the main tool to analyze the Monte Carlo convergence of the neural network approximation. The result is stated below and requires the following (additional) assumption. \(\boldsymbol{\mathcal{H}_{3}^{\mathcal{NN}}}\): For any \(m\geq 2\), \(0\leq k\leq n-1\), \(Q\in\mathcal{T}_{k}\) and \(\theta^{1},\theta^{2}\in\mathcal{S}_{k}^{m}(Q)\) (defined in (2.18)), \(\Phi_{m}(\cdot;\theta^{1})=\Phi_{m}(\cdot;\theta^{2})\). This assumption just states that, almost surely, two minimizers bring the same value. Before showing the main result of this section, it is worth noting this important remark. 
**Remark 3.10**.: * _Under assumptions_ \(\boldsymbol{\mathcal{H}_{1}^{\mathcal{NN}}}\) _and_ \(\boldsymbol{\mathcal{H}_{3,q}}\) _and using a straightforward backward induction in equation (_2.17_), it can be shown that there exists a random variable_ \(G_{k}\in\mathbb{L}_{\mathbb{R}^{d}}^{q}\big{(}\mathbb{P}\big{)}\) _(independent of_ \(Q\)_) such that_ \(\big{|}V_{k}^{m}(X_{t_{k}},Q)\big{|}\leq G_{k}\) _for any_ \(Q\in\mathcal{T}_{k}\)_; where_ \(V_{k}^{m}\) _is defined in (_2.17_)._ * _Under assumption_ \(\boldsymbol{\mathcal{H}_{1}^{\mathcal{NN}}}\)_, there exists a positive constant_ \(\kappa_{m}\) _such that, for any_ \(0\leq k\leq n-1\) _and any_ \(Q\in\mathcal{T}_{k}\)_,_ \[\max\Big{(}\big{|}V_{k}^{m}(X_{t_{k}},Q)\big{|},\big{|}V_{k}^{m,N}(X_{t_{k}},Q )\big{|}\Big{)}\leq q_{\max}\cdot\big{|}S_{t_{k}}-K\big{|}+\kappa_{m}\cdot \big{(}1+\big{|}X_{t_{k}}\big{|}^{q}\big{)}.\] _If in addition, assumption_ \(\boldsymbol{\mathcal{H}_{3,q}}\) _holds true, then the right hand side of the last inequality is an integrable random variable._ We now state our result of interest. **Proposition 3.11**.: _Let \(m\geq 2\). Under assumptions \(\boldsymbol{\mathcal{H}_{1}^{\mathcal{NN}}}\), \(\boldsymbol{\mathcal{H}_{2}^{\mathcal{NN}}}\), \(\boldsymbol{\mathcal{H}_{3}^{\mathcal{NN}}}\) and \(\boldsymbol{\mathcal{H}_{3,2q}}\), for any \(0\leq k\leq n-1\), we have,_ \[\lim_{N\to+\infty}\,\sup_{Q\,\in\,\mathcal{T}_{k}}\,\,\,\Big{|}V_{k}^{m,N}\,(X _{t_{k}},Q)-V_{k}^{m}\,(X_{t_{k}},Q)\Big{|}=0\quad\text{ a.s.}\] Note that in \(\mathbf{\mathcal{H}_{3,2q}}\), parameters \(q\) are that involved in assumption \(\mathbf{\mathcal{H}_{1}^{\mathcal{N}\mathcal{N}}}\). Recall that, the set \(\mathcal{T}_{k}\) is the one of the discrete setting as discussed in (1.9). Proof.: We proceed by a backward induction on \(k\). The proposition clearly holds true for \(k=n-1\) since, almost surely, \(V_{n-1}^{m,N}(X_{t_{n-1}},\cdot)=V_{n-1}^{m}(X_{t_{n-1}},\cdot)\) on \(\mathcal{T}_{n-1}\). Assume now the proposition holds true for \(k+1\). Let \(Q\in\mathcal{T}_{k}\). Using the inequality, and then triangle inequality, we get, \[\Big{|}V_{k}^{m,N}\left(X_{t_{k}},Q\right)-V_{k}^{m}\left(X_{t_{k}},Q\right)\Big{|} \leq\operatorname*{ess\,sup}_{q\in Adm\left(t_{k},Q\right)}\ \Big{|}\Phi_{m}\big{(}X_{t_{k}};\theta_{k,m,N}(Q+q)\big{)}-\Phi_{m}\big{(}X_{ t_{k}};\widetilde{\theta}_{k,m,N}(Q+q)\big{)}\Big{|}\] \[+\operatorname*{ess\,sup}_{q\in Adm\left(t_{k},Q\right)}\ \Big{|}\Phi_{m}\big{(}X_{t_{k}};\widetilde{\theta}_{k,m,N}(Q+q)\big{)}-\Phi_{ m}\big{(}X_{t_{k}};\theta_{k,m}(Q+q)\big{)}\Big{|}, \tag{3.15}\] where \(\widetilde{\theta}_{k,m,N}(Q)\) lies within the following set, \[\operatorname*{arg\,inf}_{\theta\in\Theta_{m}}\ \frac{1}{N}\sum_{p=1}^{N}\Big{|}V_{k+1}^{ m}\big{(}X_{t_{k+1}}^{[p]},Q\big{)}-\Phi_{m}\big{(}X_{t_{k}}^{[p]},\theta \big{)}\Big{|}^{2}. \tag{3.16}\] Then taking the supremum in (3.15) and using triangle inequality, we get, \[\sup_{Q\in\mathcal{T}_{k}}\Big{|}V_{k}^{m,N}\left(X_{t_{k}},Q \right)-V_{k}^{m}\left(X_{t_{k}},Q\right)\Big{|} \leq\sup_{Q\in\mathcal{T}_{k+1}}\Big{|}\Phi_{m}\big{(}X_{t_{k}}; \theta_{k,m,N}(Q)\big{)}-\Phi_{m}\big{(}X_{t_{k}};\theta_{k,m}(Q)\big{)}\Big{|}\] \[+2\cdot\sup_{Q\in\mathcal{T}_{k+1}}\Big{|}\Phi_{m}\big{(}X_{t_{k}} ;\widetilde{\theta}_{k,m,N}(Q)\big{)}-\Phi_{m}\big{(}X_{t_{k}};\theta_{k,m}(Q) \big{)}\Big{|}. \tag{3.17}\] We will handle the right hand side of the last inequality term by term. Let us start with the second term. 
Note that owing to assumption \(\mathbf{\mathcal{H}_{2}^{\mathcal{N}\mathcal{N}}}\), the function \[\theta\in\Theta_{m}\mapsto V_{k+1}^{m}\big{(}X_{t_{k+1}},Q\big{)}-\Phi_{m} \big{(}X_{t_{k}};\theta\big{)}\] is almost surely continuous. Moreover, for any \(C>0\), using the inequality \((a+b)^{2}\leq 2(a^{2}+b^{2})\) and assumption \(\mathbf{\mathcal{H}_{1}^{\mathcal{N}\mathcal{N}}}\), there exists a positive constant \(\kappa_{m}\) such that for any \(Q\in\mathcal{T}_{k+1}\), \[\mathbb{E}\left(\sup_{|\theta|\leq C}\ \Big{|}V_{k+1}^{m}\big{(}X_{t_{k +1}},Q\big{)}-\Phi_{m}\big{(}X_{t_{k}};\theta\big{)}\Big{|}^{2}\right) \leq 2\cdot\mathbb{E}\Big{(}\big{|}V_{k+1}^{m}\big{(}X_{t_{k+1}},Q \big{)}\big{|}^{2}\Big{)}+2\cdot\sup_{|\theta|\leq C}\ \mathbb{E}\Big{(}\big{|}\Phi_{m}\big{(}X_{t_{k}};\theta\big{)}\big{|}^{2} \Big{)}\] \[\leq 2\cdot\mathbb{E}\Big{(}\big{|}V_{k+1}^{m}\big{(}X_{t_{k+1}},Q\big{)}\big{|}^{2}\Big{)}+2\kappa_{m}\Big{(}1+\mathbb{E}\big{|}X_{t_{k}}\big{|} ^{2q}\Big{)}\] and the right hand side of the last inequality is finite under assumption \(\mathbf{\mathcal{H}_{3,2q}}\), keeping in mind point (A) of Remark 3.10. Thus thanks to Lemma 3.9, almost surely, we have the uniform convergence on \(\Theta_{m}\), \[\lim_{N\to+\infty}\ \sup_{\theta\in\Theta_{m}}\ \Bigg{|}\frac{1}{N}\sum_{p=1}^{N} \Big{|}V_{k+1}^{m}\big{(}X_{t_{k+1}}^{[p]},Q\big{)}-\Phi_{m}\big{(}X_{t_{k}}^{[p] };\theta\big{)}\Big{|}^{2}-\Big{|}\Big{|}V_{k+1}^{m}\big{(}X_{t_{k+1}},Q\big{)} -\Phi_{m}\big{(}X_{t_{k}};\theta\big{)}\Big{|}^{2}_{2}\Bigg{|}=0. \tag{3.18}\] Thus, for any \(Q\in\mathcal{T}_{k+1}\), Lemma 3.8 implies that \(\lim_{N\to+\infty}\ d\big{(}\widetilde{\theta}_{k,m,N}(Q),\mathcal{S}_{k}^{m}(Q )\big{)}=0\). We restrict ourselves to a subset with probability one of the original probability space on which this convergence holds and the random functions \(\Phi_{m}\big{(}X_{t_{k}};\cdot\big{)}\) are uniformly continuous (see assumption \(\mathbf{\mathcal{H}_{2}^{\mathcal{N}\mathcal{N}}}\)). Then, there exists a sequence \(\big{(}\alpha_{k,m,N}(Q)\big{)}_{N}\) lying within \(\mathcal{S}_{k}^{m}(Q)\) such that, \[\lim_{N\rightarrow+\infty}\,\left|\widetilde{\theta}_{k,m,N}(Q)-\alpha_{k,m,N}(Q) \right|=0.\] Thus, the uniform continuity of functions \(\Phi_{m}\big{(}X_{t_{k}};\cdot\big{)}\) combined with assumption \(\mathbf{\mathcal{H}_{3}^{\mathcal{N}\mathcal{N}}}\) yield, \[\Big{|}\Phi_{m}\big{(}X_{t_{k}};\widetilde{\theta}_{k,m,N}(Q)\big{)}-\Phi_{m} \big{(}X_{t_{k}};\theta_{k,m}(Q)\big{)}\Big{|}=\Big{|}\Phi_{m}\big{(}X_{t_{k}}; \widetilde{\theta}_{k,m,N}(Q)\big{)}-\Phi_{m}\big{(}X_{t_{k}};\alpha_{k,m,N}( Q)\big{)}\Big{|}\xrightarrow[N\rightarrow+\infty]{}0.\] Furthermore, since the set \(\mathcal{T}_{k+1}\) has a finite cardinal (discrete setting) then, we have \[\lim_{N\rightarrow+\infty}\,\sup_{Q\in\mathcal{T}_{k+1}}\,\Big{|}\Phi_{m} \big{(}X_{t_{k}};\widetilde{\theta}_{k,m,N}(Q)\big{)}-\Phi_{m}\big{(}X_{t_{k} };\theta_{k,m}(Q)\big{)}\Big{|}=0. \tag{3.19}\] It remains to handle the first term in the right hand side of inequality (3.17). 
Note that, if the following uniform convergence, \[\lim_{N\rightarrow+\infty}\,\sup_{\theta\in\Theta_{m}}\,\underbrace{\left| \frac{1}{N}\sum_{p=1}^{N}\Big{|}V_{k+1}^{m,N}\big{(}X_{t_{k+1}}^{[p]},Q\big{)} -\Phi_{m}\big{(}X_{t_{k}}^{[p]};\theta\big{)}\right|^{2}-\frac{1}{N}\sum_{p=1} ^{N}\Big{|}V_{k+1}^{m}\big{(}X_{t_{k+1}}^{[p]},Q\big{)}-\Phi_{m}\big{(}X_{t_{k }}^{[p]};\theta\big{)}\Big{|}^{2}\right|}_{:=\big{|}\Delta_{k,m,N}^{Q}(\theta) \big{|}}=0 \tag{3.20}\] holds true, then the latter uniform convergence will entail the following one owing to the uniform convergence (3.18), \[\lim_{N\rightarrow+\infty}\,\sup_{\theta\in\Theta_{m}}\,\left|\frac{1}{N}\sum _{p=1}^{N}\Big{|}V_{k+1}^{m,N}\big{(}X_{t_{k+1}}^{[p]},Q\big{)}-\Phi_{m}\big{(} X_{t_{k}}^{[p]};\theta\big{)}\right|^{2}-\Big{|}\Big{|}V_{k+1}^{m}\big{(}X_{t_{k +1}},Q\big{)}-\Phi_{m}\big{(}X_{t_{k}};\theta\big{)}\Big{|}\Big{|}^{2}_{2} \right|=0 \tag{3.21}\] and the desired result follows. To achieve this, we start by proving the uniform convergence (3.20). Then we show how its implication (3.21) entails the desired result. Using triangle inequality and the elementary identity, \(a^{2}-b^{2}=(a-b)(a+b)\), we have, \[\big{|}\Delta_{k,m,N}^{Q}(\theta)\big{|} \leq\frac{1}{N}\sum_{p=1}^{N}\Big{|}V_{k+1}^{m,N}\big{(}X_{t_{k+1 }}^{[p]},Q\big{)}+V_{k+1}^{m}\big{(}X_{t_{k+1}}^{[p]},Q\big{)}-2\cdot\Phi_{m} \big{(}X_{t_{k}}^{[p]};\theta\big{)}\Big{|}\cdot\Big{|}V_{k+1}^{m,N}\big{(}X_ {t_{k+1}}^{[p]},Q\big{)}-V_{k+1}^{m}\big{(}X_{t_{k+1}}^{[p]},Q\big{)}\Big{|}\] \[\leq\frac{2}{N}\sum_{p=1}^{N}\Big{(}q_{\max}\big{|}S_{t_{k+1}}^{[ p]}-K\big{|}+\kappa_{m}\big{(}1+|X_{t_{k+1}}^{[p]}|^{q}\big{)}+\kappa_{m} \big{(}1+|X_{t_{k}}^{[p]}|^{q}\big{)}\Big{)}\cdot\Big{|}V_{k+1}^{m,N}\big{(}X_ {t_{k+1}}^{[p]},Q\big{)}-V_{k+1}^{m}\big{(}X_{t_{k+1}}^{[p]},Q\big{)}\Big{|},\] where in the last inequality we used assumption \(\mathbf{\mathcal{H}_{1}^{\mathcal{N}\mathcal{N}}}\) and the point (B) of Remark 3.10. Let \(\varepsilon>0\). Then using the induction assumption and the law of large numbers, we get, \[\limsup_{N}\,\sup_{\theta\in\Theta_{m}}\,\big{|}\Delta_{k,m,N}^{Q}(\theta) \big{|}\leq 2\varepsilon\cdot\mathbb{E}\Big{(}q_{\max}\big{|}S_{t_{k+1}}-K \big{|}+\kappa_{m}\big{(}1+|X_{t_{k+1}}|^{q}\big{)}+\kappa_{m}\big{(}1+|X_{t _{k}}|^{q}\big{)}\Big{)}.\] Hence letting \(\varepsilon\to 0\) entails the result (3.20). Therefore, as already mentioned, the result (3.21) also holds true. Thus, using Lemma 3.8, we get that \(\lim_{N\rightarrow+\infty}\,d\big{(}\theta_{k,m,N}(Q),\mathcal{S}_{k}^{m}(Q) \big{)}=0\). We restrict ourselves to a subset with probability one of the original probability space on which this convergence holds and the random functions \(\Phi_{m}\big{(}X_{t_{k}};\cdot\big{)}\) are uniformly continuous (see assumption \(\mathbf{\mathcal{H}_{2}^{\mathcal{N}\mathcal{N}}}\)). 
Whence, for any \(Q\in\mathcal{T}_{k+1}\), there exists a sequence \(\big{(}\beta_{k,m,N}(Q)\big{)}_{N}\) lying within \(\mathcal{S}_{k}^{m}(Q)\) such that, \[\lim_{N\to+\infty}\ \big{|}\theta_{k,m,N}(Q)-\beta_{k,m,N}(Q)\big{|}=0.\] Thus, the uniform continuity of functions \(\Phi_{m}\big{(}X_{t_{k}};\cdot\big{)}\) combined with assumption \(\mathbf{\mathcal{H}_{3}^{\mathcal{N}\mathcal{N}}}\) yield, \[\Big{|}\Phi_{m}\big{(}X_{t_{k}};\theta_{k,m,N}(Q)\big{)}-\Phi_{m}\big{(}X_{t_{ k}};\theta_{k,m}(Q)\big{)}\Big{|}=\Big{|}\Phi_{m}\big{(}X_{t_{k}};\theta_{k,m,N}(Q) \big{)}-\Phi_{m}\big{(}X_{t_{k}};\beta_{k,m,N}(Q)\big{)}\Big{|}\xrightarrow[N \to+\infty]{}0.\] Then, since the set \(\mathcal{T}_{k+1}\) has a finite cardinal (discrete setting), we have \[\lim_{N\to+\infty}\ \sup_{Q\in\mathcal{T}_{k+1}}\Big{|}\Phi_{m}\big{(}X_{t_{k}}; \theta_{k,m,N}(Q)\big{)}-\Phi_{m}\big{(}X_{t_{k}};\theta_{k,m}(Q)\big{)}\Big{|} =0. \tag{3.22}\] Combining equations (3.19) and (3.22) in equation (3.17) yield the desired result. ### Deviation inequalities: the least squares setting To end this paper, we present some additional results related to the least squares approximation. These results focus on some deviation inequalities on the error between estimates (2.11), (2.6) and the swing actual value function (1.7). We no longer consider the Hilbert assumption \(\mathbf{\mathcal{H}_{5}^{LS}}\). Let us start with the first proposition of this section. **Proposition 3.12**.: _Let \(\delta>0\) and \(k=0,\ldots,n-2\). Under assumptions \(\mathbf{\mathcal{H}_{3,\infty}}\) and \(\mathbf{\mathcal{H}_{4,\infty}^{LS}}\), for all \(s\geq 2\), there exists a positive constant \(D_{s,k,m}\) such that,_ \[\mathbb{P}\left(\operatorname*{ess\,sup}_{Q\in\mathcal{Q}_{k}}\left|\frac{1} {N}\sum_{p=1}^{N}e^{m}(X_{t_{k}}^{[p]})V_{k+1}^{m}(X_{t_{k+1}}^{[p]},Q)- \mathbb{E}\big{(}e^{m}(X_{t_{k}})V_{k+1}^{m}(X_{t_{k+1}},Q)\big{)}\right|\geq \delta\right)\leq\frac{D_{s,k,m}}{\delta^{s}N^{\frac{s}{2}}}\] _where \(\mathcal{Q}_{k}\) is the set of all \(\mathcal{F}_{t_{k}}^{X}\)-measurable random variables lying within \(\mathcal{T}_{k+1}\)._ Proof.: Note that \(\mathcal{Q}_{k}\subset\mathcal{Q}_{k}^{\prime}\); with the latter set being the set of all \(\mathcal{F}_{t_{k+1}}^{X}\)-measurable random variables lying within \(\mathcal{T}_{k+1}\). 
Then we have, \[\mathbb{P}\left(\operatorname*{ess\,sup}_{Q\in\mathcal{Q}_{k}} \left|\frac{1}{N}\sum_{p=1}^{N}e^{m}(X_{t_{k}}^{[p]})V_{k+1}^{m}(X_{t_{k+1}}^ {[p]},Q)-\mathbb{E}\big{(}e^{m}(X_{t_{k}})V_{k+1}^{m}(X_{t_{k+1}},Q)\big{)} \right|\geq\delta\right)\] \[\leq\mathbb{P}\left(\operatorname*{ess\,sup}_{Q\in\mathcal{Q}_{k} ^{\prime}}\left|\frac{1}{N}\sum_{p=1}^{N}e^{m}(X_{t_{k}}^{[p]})V_{k+1}^{m}(X_{ t_{k+1}}^{[p]},Q)-\mathbb{E}\big{(}e^{m}(X_{t_{k}})V_{k+1}^{m}(X_{t_{k+1}},Q) \big{)}\right|\geq\delta\right)\] \[\leq A_{s}\frac{\sup_{Q\in\mathcal{Q}_{k}^{\prime}}\left\{\mathbb{ E}\Big{(}\big{|}e^{m}(X_{t_{k}})V_{k+1}^{m}(X_{t_{k+1}},Q)\big{|}^{s}\Big{)}+ \mathbb{E}\Big{(}\big{|}e^{m}(X_{t_{k}})V_{k+1}^{m}(X_{t_{k+1}},Q)\big{|}\Big{)} ^{s}\right\}}{N^{s/2}\cdot\delta^{s}}\] \[\leq 2A_{s}\frac{\sup_{Q\in\mathcal{Q}_{k}^{\prime}}\left.\mathbb{ E}\Big{(}\big{|}e^{m}(X_{t_{k}})V_{k+1}^{m}(X_{t_{k+1}},Q)\big{|}^{s}\Big{)}}{N^{s/2} \cdot\delta^{s}}\] where in the second-last inequality, we successively used Markov inequality, bifurcation property and Lemma 3.5 (enabled by assumptions \(\mathbf{\mathcal{H}_{3,\infty}}\) and \(\mathbf{\mathcal{H}_{4,\infty}^{LS}}\)) with \(A_{s}=B_{s}^{s}\cdot 2^{s-1}\) and \(B_{s}\) being a positive constant which only depends on \(a\). To obtain the last inequality, we used Jensen inequality. Besides, following the definition of \(\mathcal{Q}_{k}^{\prime}\) we have, \[\sup_{Q\in\mathcal{Q}_{k}^{\prime}}\mathbb{E}\Big{(}\big{|}e^{m}(X_{t_{k}})V_{k+ 1}^{m}(X_{t_{k+1}},Q)\big{|}^{s}\Big{)}\leq\sup_{Q\in\mathcal{T}_{k+1}} \mathbb{E}\Big{(}\big{|}e^{m}(X_{t_{k}})V_{k+1}^{m}(X_{t_{k+1}},Q)\big{|}^{s} \Big{)}.\] Then owing to Remark 2.1, the right hand side of the last inequality is a supremum of a continuous function over a compact set; thus finite. Hence it suffices to set, \[D_{s,k,m}:=2A_{s}\cdot\sup_{Q\in\mathcal{T}_{k+1}}\mathbb{E}\Big{(}\big{|}e^{m }(X_{t_{k}})V_{k+1}^{m}(X_{t_{k+1}},Q)\big{|}^{s}\Big{)}<+\infty.\] Which completes the proof. In the following proposition, we state a deviation inequality connecting the estimates of the orthogonal projection coordinates involved in the least squares regression. **Proposition 3.13**.: _Consider assumptions \(\boldsymbol{\mathcal{H}_{3,\infty}}\) and \(\boldsymbol{\mathcal{H}_{4,\infty}^{LS}}\). For all \(k=0,\ldots,n-2\), \(\delta>0\) and \(s\geq 2\) there exists a positive constant \(C_{s,k,m}\) such that,_ \[\mathbb{P}\left(\operatorname*{ess\,sup}_{Q\in\mathcal{Q}_{k}}\,\big{|}\theta _{k,m,N}(Q)-\theta_{k,m}(Q)\big{|}\geq\delta\right)\leq\frac{C_{s,k,m}}{b(s, \delta)\cdot N^{\frac{s}{2}}}\] _where \(b(s,\delta)=\delta^{s}\) if \(\delta\in(0,1]\) else \(b(s,\delta)=\delta^{s/2}\)._ Proof.: We proceed by a backward induction on \(k\). Recall that, for any \(Q\in\mathcal{T}_{n-1}\), \(V_{n-1}^{m,N}(\cdot,Q)=V_{n-1}^{m}(\cdot,Q)\). 
Thus, it follows from triangle inequality, \[\big{|}\theta_{n-2,m,N}(Q)-\theta_{n-2,m}(Q)\big{|} =\Big{|}\big{(}A_{m,N}^{n-2}\big{)}^{-1}\frac{1}{N}\sum_{p=1}^{N} e^{m}(X_{t_{n-2}}^{[p]})V_{n-1}^{m}(X_{t_{n-1}}^{[p]},Q)-\big{(}A_{m}^{n-2} \big{)}^{-1}\mathbb{E}\left(e^{m}(X_{t_{n-2}})V_{n-1}^{m}(X_{t_{n-1}},Q) \right)\Big{|}\] \[\leq\Big{|}\big{(}A_{m,N}^{n-2}\big{)}^{-1}\Big{(}\frac{1}{N} \sum_{p=1}^{N}e^{m}(X_{t_{n-2}}^{[p]})V_{n-1}^{m}(X_{t_{n-1}}^{[p]},Q)-\mathbb{ E}\big{(}e^{m}(X_{t_{n-2}})V_{n-1}^{m}(X_{t_{n-1}},Q)\big{)}\Big{)}\Big{|}\] \[+\Big{|}\Big{(}\big{(}A_{m,N}^{n-2}\big{)}^{-1}-\big{(}A_{m}^{n-2} \big{)}^{-1}\Big{)}\cdot\mathbb{E}\big{(}e^{m}(X_{t_{n-2}})V_{n-1}^{m}(X_{t_{n -1}},Q)\big{)}\Big{|}\] \[=\Big{|}\big{(}A_{m,N}^{n-2}\big{)}^{-1}\Big{(}\frac{1}{N}\sum_{p =1}^{N}e^{m}(X_{t_{n-2}}^{[p]})V_{n-1}^{m}(X_{t_{n-1}}^{[p]},Q)-\mathbb{E}(e^{ m}(X_{t_{n-2}})V_{n-1}^{m}(X_{t_{n-1}},Q))\Big{)}\Big{|}\] \[+\Big{|}\Big{(}\big{(}A_{m}^{n-2}\big{)}^{-1}\big{(}A_{m}^{n-2}-A_{ m,N}^{n-2}\big{)}\big{(}A_{m,N}^{n-2}\big{)}^{-1}\Big{)}\,\mathbb{E}\left(e^{m}(X_{t_{n-2}}) V_{n-1}^{m}(X_{t_{n-1}},Q)\right)\Big{|}\] where in the last equality we used the matrix identity \(A^{-1}-B^{-1}=B^{-1}(B-A)A^{-1}\) for all non-singular matrices \(A,B\). Hence taking the essential supremum and keeping in mind that the matrix norm \(|\cdot|\) is submultiplicative yields, \[\operatorname*{ess\,sup}_{Q\in\mathcal{Q}_{n-2}}\,\big{|}\theta_{n -2,m,N}(Q)-\theta_{n-2,m}(Q)\big{|}\] \[\leq\Big{|}\big{(}A_{m,N}^{n-2}\big{)}^{-1}\Big{|}\cdot \operatorname*{ess\,sup}_{Q\in\mathcal{Q}_{n-2}}\,\Big{|}\frac{1}{N}\sum_{p=1} ^{N}e^{m}(X_{t_{n-2}}^{[p]})V_{n-1}^{m}(X_{t_{n-1}}^{[p]},Q)-\mathbb{E}\big{(} e^{m}(X_{t_{n-2}})V_{n-1}^{m}(X_{t_{n-1}},Q)\big{)}\Big{|}\] \[+C_{n-2}\cdot\Big{|}\big{(}A_{m}^{n-2}\big{)}^{-1}\big{(}A_{m}^{n- 2}-A_{m,N}^{n-2}\big{)}\big{(}A_{m,N}^{n-2}\big{)}^{-1}\Big{|}\] where \(C_{n-2}:=\sup_{Q\in\mathcal{T}_{n-1}}\left|\mathbb{E}\left(e^{m}(X_{t_{n-2}})V_{n-1 }^{m}(X_{t_{n-1}},Q)\right)\big{|}<+\infty\). For any \(\varepsilon>0\) and \(k=0,\ldots,n-2\), denote by \(\Omega_{k}^{\varepsilon}:=\left\{\big{|}A_{m,N}^{k}-A_{m}^{k}\big{|}\leq \varepsilon\right\}\). Then one may choose \(\varepsilon\) such that \(\big{|}(A_{m,N}^{k})^{-1}\big{|}\leq 2\big{|}(A_{m}^{k})^{-1}\big{|}\) on \(\Omega_{k}^{\varepsilon}\). 
Thus there exists positive constants \(K_{1},K_{2}\) such that on \(\Omega_{n-2}^{\varepsilon}\), \[\operatorname*{ess\,sup}_{Q\in\mathcal{Q}_{n-2}}\,\Big{|}\theta _{n-2,m,N}(Q)-\theta_{n-2,m}(Q)\Big{|}\] \[\leq K_{1}\cdot\operatorname*{ess\,sup}_{Q\in\mathcal{Q}_{n-2}} \,\Big{|}\frac{1}{N}\sum_{p=1}^{N}e^{m}(X_{t_{n-2}}^{[p]})V_{n-1}^{m}(X_{t_{n -1}}^{[p]},Q)-\mathbb{E}\big{(}e^{m}(X_{t_{n-2}})V_{n-1}^{m}(X_{t_{n-1}},Q) \big{)}\Big{|}+K_{2}\cdot\varepsilon.\] Therefore, the law of total probability yields, \[\mathbb{P}\Big{(}\operatorname*{ess\,sup}_{Q\in\mathcal{Q}_{n-2} }\,\big{|}\theta_{n-2,m,N}(Q)-\theta_{n-2,m}(Q)\big{|}\geq\delta\Big{)}\] \[\leq\mathbb{P}\left(\operatorname*{ess\,sup}_{Q\in\mathcal{Q}_{n- 2}}\,\Big{|}\frac{1}{N}\sum_{p=1}^{N}e^{m}(X_{t_{n-2}}^{[p]})V_{n-1}^{m}(X_{t_{ n-1}}^{[p]},Q)-\mathbb{E}\big{(}e^{m}(X_{t_{n-2}})V_{n-1}^{m}(X_{t_{n-1}},Q) \big{)}\Big{|}\geq\frac{\delta-K_{2}\cdot\varepsilon}{K_{1}}\right)+\mathbb{P} \Big{(}\big{(}\Omega_{n-2}^{\varepsilon}\big{)}^{c}\Big{)}\] \[\leq\frac{D_{s,n-2,m}}{(\delta-K_{2}\cdot\varepsilon)^{s}N^{ \frac{\varepsilon}{2}}}+\frac{U_{n-2,m}}{\varepsilon^{s}N^{\frac{\varepsilon}{ 2}}}\] where the majoration for the first probability in the second-last line comes from Proposition 3.12 and constant \(D_{a,n-2,m}\) embeds constant \(K_{1}\). The majoration of \(\mathbb{P}\Big{(}\big{(}\Omega_{n-2}^{\varepsilon}\big{)}^{c}\Big{)}\) is straightforward using successively Markov inequality and Lemma 3.5. Then, choosing \(\varepsilon=\rho\delta\) for some \(\rho>0\) sufficiently small yields, \[\mathbb{P}\left(\operatorname*{ess\,sup}_{Q\in\mathcal{Q}_{n-2}}\,\big{|} \theta_{n-2,m,N}(Q)-\theta_{n-2,m}(Q)\big{|}\geq\delta\right)\leq\frac{C_{s,n- 2,m}}{\delta^{s}N^{\frac{\varepsilon}{2}}}\leq\left\{\begin{array}{ll}\frac{ C_{s,n-2,m}}{\delta^{s}N^{s/2}}&\text{ if }\,\delta\in(0,1],\\ \frac{C_{s,n-2,m}}{\delta^{s/2}N^{s/2}}&\text{ else }\end{array}\right..\] for some positive constant \(C_{a,n-2,m}\). Now let us assume that the proposition holds for \(k+1\) and show that it also holds for \(k\). 
For any \(Q\in\mathcal{T}_{k+1}\), it follows from triangle inequality that, \[\big{|}\theta_{k,m,N}(Q)-\theta_{k,m}(Q)\big{|} \leq\big{|}(A_{m,N}^{k})^{-1}\big{|}\cdot\Big{|}\frac{1}{N}\sum_ {p=1}^{N}e^{m}(X_{t_{k}}^{[p]})\Big{(}V_{k+1}^{m,N}(X_{t_{k+1}}^{[p]},Q)-V_{k+1 }^{m}(X_{t_{k+1}}^{[p]},Q)\Big{)}\Big{|}\] \[+\big{|}(A_{m}^{k})^{-1}(A_{m}^{k}-A_{m,N}^{k})(A_{m,N}^{k})^{-1} \cdot\mathbb{E}\big{(}e^{m}(X_{t_{k}})V_{k+1}^{m}(X_{t_{k+1}},Q)\big{)}\Big{|}\] \[\leq\big{|}(A_{m,N}^{k})^{-1}\big{|}\cdot\frac{1}{N}\sum_{p=1}^{N} \big{|}e^{m}(X_{t_{k}}^{[p]})\big{|}\cdot\Big{|}V_{k+1}^{m,N}(X_{t_{k+1}}^{[p] },Q)-V_{k+1}^{m}(X_{t_{k+1}}^{[p]},Q)\Big{|}\] \[+\big{|}(A_{m,N}^{k})^{-1}\big{|}\cdot\Big{|}\frac{1}{N}\sum_{p=1} ^{N}e^{m}(X_{t_{k}}^{[p]})V_{k+1}^{m}(X_{t_{k+1}}^{[p]},Q)-\mathbb{E}\left(e^{ m}(X_{t_{k}})V_{k+1}^{m}(X_{t_{k+1}},Q)\right)\Big{|}\] \[+\big{|}(A_{m}^{k})^{-1}(A_{m}^{k}-A_{m,N}^{k})(A_{m,N}^{k})^{-1} \cdot\mathbb{E}\big{(}e^{m}(X_{t_{k}})V_{k+1}^{m}(X_{t_{k+1}},Q)\big{)}\Big{|}.\] But for all \(1\leq p\leq N\), Cauchy-Schwartz inequality yields, \[\Big{|}V_{k+1}^{m,N}(X_{t_{k+1}}^{[p]},Q)-V_{k+1}^{m}(X_{t_{k+1}}^{[p]},Q) \Big{|} \leq\operatorname*{ess\,sup}_{q\in Adm(t_{k+1},Q)}\left\langle \theta_{k+1,m,N}(Q+q)-\theta_{k+1,m}(Q+q),e^{m}(X_{t_{k+1}}^{[p]})\right\rangle\] \[\leq\big{|}e^{m}(X_{t_{k+1}}^{[p]})\big{|}\cdot\operatorname*{ ess\,sup}_{q\in Adm(t_{k+1},Q)}\big{|}\theta_{k+1,m,N}(Q+q)-\theta_{k+1,m}(Q+q) \big{|}.\] Thus, \[\big{|}\theta_{k,m,N}(Q)-\theta_{k,m}(Q)\big{|} \leq\left(\frac{\big{|}(A_{m,N}^{k})^{-1}\big{|}}{N}\sum_{p=1}^{N }\big{|}e^{m}(X_{t_{k}}^{[p]})\big{|}\cdot\big{|}e^{m}(X_{t_{k+1}}^{[p]}) \big{|}\right)\operatorname*{ess\,sup}_{q\in Adm(t_{k+1},Q)}\big{|}\theta_{k+1,m,N}(Q+q)-\theta_{k+1,m}(Q+q)\big{|}\] \[+\big{|}(A_{m,N}^{k})^{-1}\big{|}\cdot\big{|}\frac{1}{N}\sum_{p=1 }^{N}e^{m}(X_{t_{k}}^{[p]})V_{k+1}^{m}(X_{t_{k+1}}^{[p]},Q)-\mathbb{E}\left(e^ {m}(X_{t_{k}})V_{k+1}^{m}(X_{t_{k+1}},Q)\right)\big{|}\] \[+\big{|}(A_{m}^{k})^{-1}(A_{m}^{k}-A_{m,N}^{k})(A_{m,N}^{k})^{-1} \cdot\mathbb{E}\big{(}e^{m}(X_{t_{k}})V_{k+1}^{m}(X_{t_{k+1}},Q)\big{)}\big{|}.\] Therefore, on \(\Omega_{k}^{\varepsilon}\), there exists some positive constants \(K_{1},K_{2},K_{3}\) such that, \[\operatorname*{ess\,sup}_{Q\in\mathcal{Q}_{k}}\big{|}\theta_{k,m, N}(Q)-\theta_{k,m}(Q)\big{|}\] \[\leq K_{1}\underbrace{\left(\frac{1}{N}\sum_{p=1}^{N}\big{|}e^{m} (X_{t_{k}}^{[p]})\big{|}\cdot\big{|}e^{m}(X_{t_{k+1}}^{[p]})\big{|}\right)} _{I_{N}^{1}}\underbrace{\operatorname*{ess\,sup}_{Q\in\mathcal{Q}_{k+1}}\big{|} \theta_{k+1,m,N}(Q)-\theta_{k+1,m}(Q)\big{|}}_{I_{N}^{2}}\] \[+K_{2}\cdot\underbrace{\operatorname*{ess\,sup}_{Q\in\mathcal{Q}_ {k+1}}\big{|}\frac{1}{N}\sum_{p=1}^{N}e^{m}(X_{t_{k}}^{[p]})V_{k+1}^{m}(X_{t_ {k+1}}^{[p]},Q)-\mathbb{E}\left(e^{m}(X_{t_{k}})V_{k+1}^{m}(X_{t_{k+1}},Q) \right)\big{|}}_{I_{N}^{3}}+K_{3}\cdot\varepsilon\] where to obtain the coefficient \(K_{3}\) in the last inequality, we used the fact that, \[\operatorname*{ess\,sup}_{Q\in\mathcal{Q}_{k+1}}\mathbb{E}\left(e^{m}(X_{t_{k} })V_{k+1}^{m}(X_{t_{k+1}},Q)\right)\leq\sup_{Q\in\mathcal{T}_{k+1}}\mathbb{E} \left(e^{m}(X_{t_{k}})V_{k+1}^{m}(X_{t_{k+1}},Q)\right)<+\infty.\] The term \(I_{N}^{3}\) can be handled using Proposition 3.12. Then, it suffices to prove that, \[\mathbb{P}\big{(}I_{N}^{1}\cdot I_{N}^{2}\geq\delta\big{)}\leq\frac{K}{\delta ^{a}\cdot N^{a/2}}\] for some positive constant \(K\). 
But we have, \[\mathbb{P}\big{(}I_{N}^{1}\cdot I_{N}^{2}\geq\delta\big{)}=1-\mathbb{P}\big{(} I_{N}^{1}\cdot I_{N}^{2}\leq\delta\big{)}\leq 1-\mathbb{P}\big{(}I_{N}^{1}\leq\sqrt{\delta};I_{N}^{2}\leq \sqrt{\delta}\big{)}\leq\mathbb{P}\big{(}I_{N}^{1}\geq\sqrt{\delta}\big{)}+ \mathbb{P}\big{(}I_{N}^{2}\geq\sqrt{\delta}\big{)}. \tag{3.23}\] Moreover, by the induction assumption, we know that, there exists a positive constant \(B_{a,k,m}\) such that, \[\mathbb{P}\big{(}I_{N}^{2}\geq\sqrt{\delta}\big{)}\leq\frac{B_{s,k,m}}{\delta ^{s/2}N^{s/2}}\leq\left\{\begin{array}{ll}\frac{B_{s,k,m}}{\delta^{s/2}N^{s /2}}&\text{if }\,\delta\in(0,1],\\ \frac{B_{s,k,m}}{\delta^{s/2}N^{s/2}}&\text{otherwise.}\end{array}\right.\] In addition, it follows from Markov inequality and Lemma 3.5 that there exists a positive constant \(M_{a,k,m}\) such that \[\mathbb{P}\big{(}I_{N}^{1}\geq\sqrt{\delta}\big{)}\leq\frac{M_{s,k,m}}{\delta^{s}N ^{s/2}}\leq\left\{\begin{array}{ll}\frac{M_{s,k,m}}{\delta^{s}N^{s/2}}&\text{ if }\,\delta\in(0,1],\\ \frac{M_{s,k,m}}{\delta^{s/2}N^{s/2}}&\text{ otherwise.}\end{array}\right.\] Hence, there exists a positive constant \(C_{s,k,m}\) such that, \[\mathbb{P}\big{(}I_{N}^{1}\cdot I_{N}^{2}\geq\delta\big{)}\leq\left\{ \begin{array}{ll}\frac{C_{s,k,m}}{\delta^{s}N^{s/2}}&\text{ if }\,\delta\in(0,1],\\ \frac{C_{s,k,m}}{\delta^{s/2}N^{s/2}}&\text{ otherwise}\end{array}\right.\] and this completes the proof. We now state the last result of this paper concerning a deviation inequality involving the actual swing value function. **Proposition 3.14**.: _Consider assumptions \(\boldsymbol{\mathcal{H}_{3,\infty}}\) and \(\boldsymbol{\mathcal{H}_{4,\infty}^{LS}}\). For all \(k=0,\ldots,n-2\), \(\delta>0\) and \(s\geq 2\) there exists a positive constant \(C_{s,k,m}\) such that,_ \[\mathbb{P}\left(\operatorname*{ess\,sup}_{Q\in\mathcal{Q}_{k}}\,\left|V_{k}^ {m,N}\big{(}X_{t_{k}},Q\big{)}-V_{k}^{m}\big{(}X_{t_{k}},Q\big{)}\right|\geq \delta\right)\leq\frac{C_{s,k,m}}{b(s,\delta)\cdot N^{\frac{s}{2}}}.\] Proof.: Using the inequality, \(\left|\sup_{i\in I}a_{i}-\sup_{i\in I}b_{i}\right|\,\leq\,\sup_{i\in I}|a_{i}- b_{i}|\) and then Cauchy-Schwartz' inequality, we have, \[\operatorname*{ess\,sup}_{Q\in\mathcal{Q}_{k}}\,\left|V_{k}^{m,N}\big{(}X_{t_ {k}},Q\big{)}-V_{k}^{m}\big{(}X_{t_{k}},Q\big{)}\right|\leq\big{|}e^{m}(X_{t_ {k}})\big{|}\cdot\operatorname*{ess\,sup}_{Q\in\mathcal{Q}_{k+1}}\,\Big{|} \theta_{k+1,m,N}(Q)-\theta_{k+1,m}(Q)\Big{|}.\] Thus, using the same argument as in (3.23), we get, \[\mathbb{P}\left(\operatorname*{ess\,sup}_{Q\in\mathcal{Q}_{k}}\, \left|V_{k}^{m,N}\big{(}X_{t_{k}},Q\big{)}-V_{k}^{m}\big{(}X_{t_{k}},Q\big{)} \right|\geq\delta\right)\] \[\leq\mathbb{P}\left(\big{|}e^{m}(X_{t_{k}})\big{|}\cdot \operatorname*{ess\,sup}_{Q\in\mathcal{Q}_{k+1}}\,\left|\theta_{k+1,m,N}(Q)- \theta_{k+1,m}(Q)\right|\geq\delta\right)\] \[\leq\mathbb{P}\Big{(}\big{|}e^{m}(X_{t_{k}})\big{|}\geq\sqrt{ \delta}\Big{)}+\mathbb{P}\left(\operatorname*{ess\,sup}_{Q\in\mathcal{Q}_{k+1 }}\,\left|\theta_{k+1,m,N}(Q)-\theta_{k+1,m}(Q)\right|\geq\sqrt{\delta}\right)\] \[\leq\frac{K_{s,k,m}^{1}}{\delta^{s/2}\cdot N^{s/2}}+\frac{K_{s,k,m} ^{2}}{b(s,\delta)\cdot N^{s/2}}\leq\left\{\begin{array}{ll}\frac{K_{s,k,m}}{ \delta^{s}N^{s/2}}&\text{ if }\,\delta\in(0,1]\\ \frac{K_{s,k,m}}{\delta^{s/2}N^{s/2}}&\text{ otherwise}\end{array}\right.\] for some positive constant \(K_{s,k,m}\), where the constant \(K_{s,k,m}^{1}\) comes from Markov inequality (enabled by assumption \(\boldsymbol{\mathcal{H}_{4,\infty}^{LS}}\)). 
The existence of the positive constant \(K_{s,k,m}^{2}\) results from Proposition 3.13 (enabled by assumptions \(\boldsymbol{\mathcal{H}_{3,\infty}}\) and \(\boldsymbol{\mathcal{H}_{4,\infty}^{LS}}\)). The coefficient \(b(a,\delta)\) is also defined in Proposition 3.13. This completes the proof. **Remark 3.15**.: _The preceding proposition entails the following result as a straightforward corollary. For all \(k=0,\ldots,n-1\) and for any \(Q\in\mathcal{T}_{k}\), we have,_ \[\mathbb{P}\left(\left|V_{k}^{m,N}\big{(}X_{t_{k}},Q\big{)}-V_{k}^{m}\big{(}X_ {t_{k}},Q\big{)}\right|\geq\delta\right)\leq\frac{C_{s,k,m}}{b(s,\delta)\cdot N ^{\frac{s}{2}}}.\] _If we assume that \(\sup_{m\geq 1}C_{s,k,m}<+\infty\), then for any \(s\geq 2\), we have the following uniform convergence,_ \[\lim_{N\rightarrow+\infty}\sup_{m\geq 1}\sup_{Q\in\mathcal{T}_{k}}\mathbb{P} \left(\left|V_{k}^{m,N}\big{(}X_{t_{k}},Q\big{)}-V_{k}\big{(}X_{t_{k}},Q\big{)} \right|\geq\delta\right)=0. \tag{3.24}\] _But it follows from triangle inequality that,_ \[\mathbb{P}\left(\left|V_{k}^{m,N}\big{(}X_{t_{k}},Q\big{)}-V_{k} \big{(}X_{t_{k}},Q\big{)}\right|\geq\delta\right)\] \[=1-\mathbb{P}\left(\left|V_{k}^{m,N}\big{(}X_{t_{k}},Q\big{)}-V_ {k}\big{(}X_{t_{k}},Q\big{)}\right|\leq\delta\right)\] \[\leq\mathbb{P}\Big{(}\left|V_{k}^{m,N}(X_{t_{k}},Q)-V_{k}^{m}(X_ {t_{k}},Q)\right|\geq\delta/2\Big{)}+\mathbb{P}\Big{(}\left|V_{k}^{m}(X_{t_{k} },Q)-V_{k}(X_{t_{k}},Q)\right|\geq\delta/2\Big{)}\] \[\leq\mathbb{P}\Big{(}\left|V_{k}^{m,N}(X_{t_{k}},Q)-V_{k}^{m}(X_ {t_{k}},Q)\right|\geq\delta/2\Big{)}+4\cdot\frac{\left|\left|V_{k}^{m}(X_{t_{ k}},Q)-V_{k}(X_{t_{k}},Q)\right|\right|_{2}^{2}}{\delta^{2}},\] _where in the last inequality, we used Markov inequality. Then using Proposition 3.2 and result (3.24) yields,_ \[\lim_{m\rightarrow+\infty}\lim_{N\rightarrow+\infty}\sup_{Q\in \mathcal{T}_{k}}\mathbb{P}\left(\left|V_{k}^{m,N}\big{(}X_{t_{k}},Q\big{)}-V_ {k}\big{(}X_{t_{k}},Q\big{)}\right|\geq\delta\right)=0.\] _The latter result implies that for a well-chosen and sufficiently large regression basis, the limit,_ \[\lim_{N\rightarrow+\infty}\sup_{Q\in\mathcal{T}_{k}}\mathbb{P} \left(\left|V_{k}^{m,N}\big{(}X_{t_{k}},Q\big{)}-V_{k}\big{(}X_{t_{k}},Q\big{)} \right|\geq\delta\right)\] _may be arbitrary small insuring in some sense the theoretical effectiveness of the least squares procedure in the context of swing pricing._ ## Acknowledgments The author would like to thank Gilles Pages and Vincent Lemaire for fruitful discussions. The author would also like to express his gratitude to Engie Global Markets for funding his PhD thesis.
2308.06582
Gated Attention Coding for Training High-performance and Efficient Spiking Neural Networks
Spiking neural networks (SNNs) are emerging as an energy-efficient alternative to traditional artificial neural networks (ANNs) due to their unique spike-based event-driven nature. Coding is crucial in SNNs as it converts external input stimuli into spatio-temporal feature sequences. However, most existing deep SNNs rely on direct coding that generates powerless spike representation and lacks the temporal dynamics inherent in human vision. Hence, we introduce Gated Attention Coding (GAC), a plug-and-play module that leverages the multi-dimensional gated attention unit to efficiently encode inputs into powerful representations before feeding them into the SNN architecture. GAC functions as a preprocessing layer that does not disrupt the spike-driven nature of the SNN, making it amenable to efficient neuromorphic hardware implementation with minimal modifications. Through an observer model theoretical analysis, we demonstrate GAC's attention mechanism improves temporal dynamics and coding efficiency. Experiments on CIFAR10/100 and ImageNet datasets demonstrate that GAC achieves state-of-the-art accuracy with remarkable efficiency. Notably, we improve top-1 accuracy by 3.10\% on CIFAR100 with only 6-time steps and 1.07\% on ImageNet while reducing energy usage to 66.9\% of the previous works. To our best knowledge, it is the first time to explore the attention-based dynamic coding scheme in deep SNNs, with exceptional effectiveness and efficiency on large-scale datasets.The Code is available at https://github.com/bollossom/GAC.
Xuerui Qiu, Rui-Jie Zhu, Yuhong Chou, Zhaorui Wang, Liang-jian Deng, Guoqi Li
2023-08-12T14:42:02Z
http://arxiv.org/abs/2308.06582v2
# Gated Attention Coding for Training High-performance and Efficient Spiking Neural Networks ###### Abstract Spiking neural networks (SNNs) are emerging as an energy-efficient alternative to traditional artificial neural networks (ANNs) due to their unique spike-based event-driven nature. Coding is crucial in SNNs as it converts external input stimuli into spatio-temporal feature sequences. However, most existing deep SNNs rely on direct coding that generates powerless spike representation and lacks the temporal dynamics inherent in human vision. Hence, we introduce Gated Attention Coding (GAC), a plug-and-play module that leverages the multi-dimensional gated attention unit to efficiently encode inputs into powerful representations before feeding them into the SNN architecture. GAC functions as a preprocessing layer that does not disrupt the spike-driven nature of the SNN, making it amenable to efficient neuromorphic hardware implementation with minimal modifications. Through an observer model theoretical analysis, we demonstrate GAC's attention mechanism improves temporal dynamics and coding efficiency. Experiments on CIFAR10/100 and ImageNet datasets demonstrate that GAC achieves state-of-the-art accuracy with remarkable efficiency. Notably, we improve top-1 accuracy by 3.10% on CIFAR100 with only 6-time steps and 1.07% on ImageNet while reducing energy usage to 66.9% of the previous works. To our best knowledge, it is the first time to explore the attention-based dynamic coding scheme in deep SNNs, with exceptional effectiveness and efficiency on large-scale datasets. ## Introduction Artificial neural networks (ANNs) have garnered remarkable acclaim for their potent representation and astounding triumphs in a plethora of artificial intelligence domains such as computer vision [16], natural language processing [14] and big data applications [20]. Nonetheless, this comes at a significant cost in terms of energy consumption. In contrast, spiking neural networks (SNNs) exhibits heightened biological plausibility [11], spike-driven nature, and low power consumption on neuromorphic hardware, e.g., TrueNorth [15], Loihi [17], Tianjic [21]. Moreover, the versatility of SNNs extends to various tasks, including image classification [13, 14], image reconstruction [22], and language generation [23], although the majority of their applications currently lie within the field of computer vision. To integrate SNNs into the realm of computer vision, the initial challenge lies in transforming static images into spatio-temporal feature sequences. Various coding schemes have emerged to address this issue, such as rate coding [20], temporal coding [18], and phase coding [19]. Among these, direct coding [20] as shown in Fig. 1-(a), excel in training SNNs on large-scale datasets with minimal simulation time steps. Figure 1: How our Gated Attention Coding (GAC) differs from existing SNNs’ coding [20] and attention methods [21, 22]. In (a), the solid-colored cube represents the float values, the gray cube denotes the binary spike values, and the cube with the dotted line represents the sparse values. In comparison with direct coding, GAC generates spatio-temporal dynamics output with powerful representation. In (b), compared to other attention methods, GAC only adds the attention module to the encoder without requiring \(N\) Multiply-Accumulation (MAC) blocks for dynamically calculating attention scores in subsequent layers. Moreover, by adopting direct coding, recent SNN models Li et al. (2021); Shen et al. 
(2023); Zhou et al. (2023) achieve state-of-the-art performance across various datasets, showcasing the immense potential of coding techniques. However, this approach utilizes a trainable layer to generate float values repetitively at each time step. This repetitive nature of direct coding leads to periodic, identical outputs at every time step, yielding powerless spike representations with limited spatio-temporal dynamics. It also fails to reproduce the temporal dynamics inherent in human vision, which serve as the fundamental inspiration for SNN models: human vision is characterized by its ability to process and perceive dynamic visual stimuli over time. Direct coding falls short in replicating this crucial aspect, emphasizing the need for alternative coding schemes that can better emulate the temporal dynamics observed in human vision. Humans can naturally and effectively find salient regions in complex scenes Itti et al. (1998). Motivated by this observation, attention mechanisms have been introduced into deep learning and have achieved remarkable success in a wide spectrum of application domains, which is also worth exploring in SNNs, as shown in Fig. 1-(b) Yao et al. (2021); Zhou et al. (2023). However, it has been observed that implementing attention mechanisms to directly modify membrane potentials and dynamically compute attention scores for each layer, rather than using static weights, disrupts the fundamental asynchronous spike-driven communication in these methods. Consequently, this approach falls short of providing full support for neuromorphic hardware. In this paper, we investigate the shortcomings of traditional direct coding and introduce an innovative approach termed Gated Attention Coding (GAC), as depicted in Fig. 1. Instead of producing periodic and powerless results, GAC leverages a multi-dimensional attention mechanism for gating to elegantly generate powerful temporal dynamic encodings from static datasets. As a preprocessing layer, GAC doesn't disrupt the SNN's spike-driven nature, enabling efficient neuromorphic hardware implementation with minimal modifications. Experimental results demonstrate that our GAC not only significantly enhances the performance of SNNs, but also notably reduces latency and energy consumption. Moreover, our main contributions can be summarized as follows:

* We propose an observer model to theoretically analyze direct coding limitations and introduce the GAC scheme, a plug-and-play preprocessing layer decoupled from the SNN architecture, preserving its spike-driven nature.
* We evaluate the feasibility of GAC and depict the encoding result under both direct coding and GAC setups to demonstrate the powerful representation of GAC and its advantage in generating spatio-temporal dynamics.
* We demonstrate the effectiveness and efficiency of the proposed method on the CIFAR10/100 and ImageNet datasets with spike-driven nature. Our method outperforms the previous state-of-the-art and shows significant improvements across all test datasets within fewer time steps.

## Related Works

### Bio-inspired Spiking Neural Networks

Spiking Neural Networks (SNNs) offer a promising approach to achieving energy-efficient intelligence. These networks aim to replicate the behavior of biological neurons by employing binary spiking signals, where a value of 0 indicates no activity and a value of 1 represents a spiking event.
The spike-driven communication paradigm in SNNs is inspired by the functionality of biological neurons and holds the potential for enabling energy-efficient computational systems Roy et al. (2019). By incorporating advanced deep learning and neuroscience knowledge, SNNs can offer significant benefits for a wide range of applications Jin et al. (2022); Qiu et al. (2023); Wu et al. (2023). Currently, there exist two primary methods of training high-performance SNNs. One is to discretize a trained ANN into spike form through neuron equivalence Li et al. (2021); Hao et al. (2023), i.e., ANN-to-SNN conversion, but this requires long simulation time steps and boosts the energy consumption. In this work, we employ the direct training method Wu et al. (2018) and apply surrogate gradient training.

### SNN Coding Schemes

Numerous coding schemes have been proposed for image classification tasks. Phase coding Kim et al. (2018) uses a weighted spike to encode each pixel, and temporal coding Park et al. (2020); Comsa et al. (2020); Zhou et al. (2021) represents information with the firing time of the first neuron spike. These methods have been successfully applied to simple datasets with shallow networks, but achieving high performance becomes more challenging as datasets and networks become larger and more complex. To address this issue, rate coding Van Rullen and Thorpe (2001), which encodes each pixel using the spike firing frequency, has been suggested. However, it requires long time steps to maintain high performance, while small time steps result in a lower representation resolution. To overcome these limitations, Wu et al. (2019) proposed direct coding, in which the input is given straight to the network without conversion to spikes and image-spike encoding is done by the first \(\{\textit{Conv-BN}\}\) layer. This procedure is repeated at each time step and the results are fed to spiking neurons. Finally, these encoded spikes are sent to the SNN architecture for feature extraction. However, the limited, powerless spike representation in SNNs using direct coding leads to parameter sensitivity and subpar performance. The repetition operation fails to generate dynamic output and neglects redundant data, thus underutilizing the spatio-temporal extraction ability of subsequent SNN architectures and increasing energy consumption on neuromorphic hardware.

### Attention Mechanism

Initially introduced to enhance the performance of sequence-to-sequence tasks Bahdanau et al. (2014), the attention mechanism is a powerful technique that enables improved processing of pertinent information Ioffe and Szegedy (2015). By effectively filtering out distracting noise, the attention mechanism facilitates more focused and efficient data processing, leading to enhanced performance in various applications. Yao et al. (2021) attach the squeeze-and-excitation Hu et al. (2018) attention block to the temporal-wise input of an SNN, assessing the significance of different frames during training and discarding irrelevant frames during inference. However, this method only improves performance on small datasets with shallow networks. Yao et al. (2023) switch CBAM attention (Woo et al., 2018) to multi-dimensional attention and inject it into the SNN architecture, revealing deep SNNs' potential as a general architecture to support various applications.
Currently, integrating attention blocks into SNN architectures running on sparse-addition neuromorphic hardware poses challenges, as it necessitates creating numerous multiplication blocks in subsequent layers to dynamically compute attention scores, which could impede the spike-driven nature of SNNs. A potential solution to address this issue involves confining the application of attention mechanisms solely to the encoder, i.e., the first layer of the SNN. By limiting the attention modifications to the initial stage, the subsequent layers can still maintain the essential spike-driven communication. This approach holds promise in enabling a more feasible implementation of SNNs on neuromorphic hardware, as it mitigates the incompatibility arising from dynamic attention mechanisms throughout the architecture.

## Method

In this section, we first introduce the iterative spiking neuron model. Then we propose Gated Attention Coding (GAC) and the Gated Attention Unit (GAU) as its basic block. Next, we provide the overall framework for training GAC-SNNs. Moreover, we conduct a comprehensive analysis of the direct coding scheme and explain why our GAC performs better at generating spatio-temporally dynamic encoding results.

### Iterative Spiking Neuron Model

We adopt the Leaky Integrate-and-Fire (LIF) spiking neuron model and translate it to an iterative expression with the Euler method (Wu et al., 2018; Yao et al., 2021). Mathematically, the LIF-SNN layer can be described as an explicitly iterable version for better computational traceability:
\[\begin{cases}\mathbf{U}^{t,n}=\mathbf{H}^{t-1,n}+f(\mathbf{W^{n}},\mathbf{X}^{t,n-1})\\ \mathbf{S}^{t,n}=\Theta(\mathbf{U}^{t,n}-\mathbf{V}_{th})\\ \mathbf{H}^{t,n}=\tau\mathbf{U}^{t,n}\cdot(1-\mathbf{S}^{t,n})+\mathbf{V}_{reset}\mathbf{S}^{t,n},\end{cases} \tag{1}\]
where \(\tau\) is the time constant, \(t\) and \(n\) respectively represent the indices of the time step and the \(n\)-th layer, \(\mathbf{W}\) denotes the synaptic weight matrix between two adjacent layers, \(f(\cdot)\) is the function operation standing for convolution or fully connected, \(\mathbf{X}\) is the input, and \(\mathbf{\Theta(\cdot)}\) denotes the Heaviside step function. When the membrane potential \(\mathbf{U}\) exceeds the firing threshold \(\mathbf{V}_{th}\), the LIF neuron triggers a spike \(\mathbf{S}\). Moreover, \(\mathbf{H}\) represents the membrane potential after the trigger event, which equals \(\tau\mathbf{U}\) if no spike is generated and otherwise equals the reset potential \(\mathbf{V}_{reset}\).

### Gated Attention Unit (GAU)

**Temporal Attention.** To establish temporal-wise relationships across the SNN's input, we first perform a squeezing step on the spatial-channel feature map of the repeated input \(\mathbf{X}\in\mathbb{R}^{T\times C\times H\times W}\), where \(T\) is the simulation time step and \(C\) is the channel size. We use AvgPool and MaxPool operations with outputs in \(\mathbb{R}^{T\times 1\times 1\times 1}\) to calculate the average and maximum of the input over the last three dimensions. Additionally, we use a shared MLP network to turn both the average-pooled and max-pooled features into a temporal weight vector \(\mathbf{M}\in\mathbb{R}^{T}\), i.e.,
\[\mathcal{F}_{T}(\mathbf{X})=\mathbf{W}_{m}(\mathrm{ReLU}(\mathbf{W}_{n}(\mathrm{AvgPool}(\mathbf{X}))))+\mathbf{W}_{m}(\mathrm{ReLU}(\mathbf{W}_{n}(\mathrm{MaxPool}(\mathbf{X})))), \tag{2}\]
where \(\mathcal{F}_{T}(\cdot)\) is the functional operation of temporal attention.
And \(\mathbf{W}_{m}\in\mathbb{R}^{T\times\frac{T}{r}}\), \(\mathbf{W}_{n}\in\mathbb{R}^{\frac{T}{r}\times T}\) are the weights of the two shared dense layers. Moreover, \(r\) is the temporal dimension reduction factor used to manage the computing overhead.

**Spatial Channel Attention.** To generate the spatial channel dynamics of the encoding result, we use a shared 2-D convolution operation at each time step to get the spatial channel matrix \(\mathbf{N}=[\mathbf{N}^{1},\mathbf{N}^{2},\cdots,\mathbf{N}^{t}]\in\mathbb{R}^{T\times C\times H\times W}\), i.e.,
\[\mathcal{F}_{SC}(\mathbf{X})=\sum_{i=0}^{K-1}\sum_{j=0}^{K-1}\mathbf{W}_{i,j}\cdot\mathbf{X}_{i,j}^{t}, \tag{3}\]
where \(\mathcal{F}_{SC}(\cdot)\) is the functional operation of spatial channel attention, \(\mathbf{W}_{i,j}\) is the learnable parameter and \(K\) represents the size of the 2-D convolution kernel.

**Gating.** After the above two operations, we get the temporal vector \(\mathbf{M}\) and the spatial channel matrix \(\mathbf{N}\). Then, to extract temporal-spatial-channel fused dynamic features of the SNN's input \(\mathbf{X}\), we first broadcast the temporal vector to \(\mathbb{R}^{T\times 1\times 1\times 1}\) and gate the two results by:
\[\mathcal{F}_{G}(\mathbf{X})=\sigma(\mathbf{M}\odot\mathbf{N})=\sigma(\mathcal{F}_{T}(\mathbf{X})\odot\mathcal{F}_{SC}(\mathbf{X})), \tag{4}\]
where \(\sigma(\cdot)\) and \(\odot\) are the Sigmoid function and the Hadamard product. These three sub-modules yield the functional operation \(\mathcal{F}_{G}(\cdot)\) of the GAU, which is the basic unit of the coding scheme introduced next.

Figure 2: The GAC-SNN framework consists of two main components: an encoder and an architecture. In (a), we introduce the encoder, i.e., the GAC module. (b) focuses on the GAU, which acts as the fundamental building block of the GAC layer. It comprises Temporal Attention, Spatial Channel Attention, and Gating sub-modules. (c) Common SNN ResNet architectures. The Conv layer in SEW-ResNet uses a multiply-accumulate operator, not spike computations. Spiking-ResNet retains its spike-driven nature via direct coding, while GAC disrupts it; more details can be seen in the discussions. MS-ResNet avoids floating-point multiplications, preserving its spike-driven nature. Hence, we use the MS-ResNet to benefit from neuromorphic implementations.

### Gated Attention Coding (GAC)

Compared with previous direct coding [23, 24], we introduce a novel encoding called Gated Attention Coding (GAC); Fig. 2 depicts the specific process. Given an input \(\mathbf{X}\in\mathbb{R}^{C\times H\times W}\) from a static dataset, we assign the first layer as the encoding layer: we first use the convolution layer to generate features, repeat this result at each time step, and feed it to the LIF model and the GAU module respectively. Finally, the outputs of the above two modules are gated together, so the whole GAC process can be described as follows (a code sketch of the GAU and this gating is given below):
\[\mathbf{O}=\mathcal{F}_{G}(f^{k\times k}(\mathbf{X}))\odot\mathcal{SN}(f^{k\times k}(\mathbf{X})), \tag{5}\]
where \(\mathbf{X}\) and \(\mathbf{O}\) are the GAC-SNN's input and output, \(f^{k\times k}(\cdot)\) is a shared 2-D convolution operation with a filter size of \(k\times k\), and \(\mathcal{SN}(\cdot)\) is the spiking neuron model. Moreover, \(\odot\) is the Hadamard Product, and \(\mathcal{F}_{G}(\cdot)\) is the functional operation of the GAU, which can fuse temporal-spatial-channel information for better encoding feature expression capabilities.
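To make Eqs. (2)-(5) concrete, the following is a minimal PyTorch-style sketch of the GAU. All names (`GAU`, `w_n`, `w_m`, `sca`) are our own illustration and not the authors' released code; the spiking neuron and encoding convolution are left as stand-ins:

```
import torch
import torch.nn as nn

class GAU(nn.Module):
    # Gated Attention Unit (Eqs. 2-4) for inputs of shape (T, C, H, W).
    def __init__(self, T, C, r=2, K=4):
        super().__init__()
        self.w_n = nn.Linear(T, T // r, bias=False)   # shared dense layer W_n
        self.w_m = nn.Linear(T // r, T, bias=False)   # shared dense layer W_m
        # Shared 2-D convolution applied at every time step (Eq. 3).
        self.sca = nn.Conv2d(C, C, kernel_size=K, padding='same', bias=False)

    def forward(self, x):                       # x: (T, C, H, W)
        T = x.shape[0]
        avg = x.mean(dim=(1, 2, 3))             # squeeze spatial-channel dims -> (T,)
        mx = x.amax(dim=(1, 2, 3))              # max-pooled counterpart -> (T,)
        m = self.w_m(torch.relu(self.w_n(avg))) + \
            self.w_m(torch.relu(self.w_n(mx)))  # temporal weight vector M (Eq. 2)
        n = self.sca(x)                          # spatial channel matrix N (Eq. 3)
        return torch.sigmoid(m.view(T, 1, 1, 1) * n)   # gating (Eq. 4)

# GAC (Eq. 5): gate the GAU output with the spiking response of the same features.
# `conv` is the shared k x k encoding convolution and `sn` any LIF layer; both are
# placeholders here:
#   feats = conv(img).unsqueeze(0).repeat(T, 1, 1, 1)   # repeat over time
#   out = gau(feats) * sn(feats)
```

Since the first dimension is treated as a batch dimension by `nn.Conv2d`, the same convolution weights are automatically shared across all time steps, matching the "shared 2-D convolution" of Eq. (3).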
### Overall Training Framework

We give the overall algorithm for training deep SNNs from scratch with our GAC and spatio-temporal backpropagation (STBP) [23]. In the error backpropagation, we take the last layer as the decoding layer, and the final output \(\mathbf{K}\) is determined by \(\mathbf{K}=\frac{1}{T}\sum_{t=1}^{T}\mathbf{O}^{t}\), where \(\mathbf{O}^{t}\) is the SNN's output at the last layer and \(T\) is the number of time steps. Then we calculate the cross-entropy loss function [11] between output and label, which can be described as (see the short sketch below):
\[q_{i}=\frac{e^{k_{i}}}{\sum_{j=1}^{n}e^{k_{j}}}, \tag{6}\]
\[\mathcal{L}=-\sum_{i=1}^{n}y_{i}\log(q_{i}), \tag{7}\]
where \(\mathbf{K}\) = \((k_{1},k_{2},\cdots,k_{n})\) and \(\mathbf{Y}\) = \((y_{1},y_{2},\cdots,y_{n})\) are the output vector and the label vector. Moreover, the code of the overall training algorithm can be found in **Supplementary Material A.**

### Theoretical Analysis

To understand the highlights of our proposed method and the role of SNN encoders, we introduce the observer model for measuring SNN coding. Encoders are used to convert static images into feature sequences, incorporating temporal information into SNNs at each time step. Some encoders are embedded within the SNN as part of it (e.g., the first Conv-based spiking neuron layer for direct coding), while others are not included in the SNN models, e.g., rate coding. The embedded encoders can be easily distinguished from the rest of the network since they use actual values for linear transformations, unlike spikes or quantized data. Functionally, encoders convert static data into the temporal dimension. This helps us understand what SNN encoders are.

**Definition 1**.: _Encoder. An encoder in SNNs for image classification tasks is used to translate a static input \(\mathbf{X}\in\mathbb{R}^{C_{in}\times H\times W}\) into dynamic feature sequences \(\mathbf{A}\in\mathbb{R}^{T\times C_{out}\times H\times W}\)._

Moreover, two points should be noted in **Definition** 1. Firstly, \(\mathbf{A}\) is used to indicate the encoder's output, whether spikes or real values; the Membrane Shortcut (MS) ResNet architecture [23] is considered to use sequential real values as input after the encoder. Secondly, although two dimensions (\(C_{out},T\)) are changed, the spatial one \(C_{out}\) is similar to the operation in ANNs, which means \(T\) is unique to SNN encoders. In other words, the time dimension \(T\) carries the temporal information, and the SNN encoder is designed to generate this added dimension. The discussion above implies that to understand and measure an encoder, we should focus on its temporal dimension and find a proper granularity.

**Definition 2**.: _Neuron Granularity. Considering \(\mathbf{A}\in\mathbb{R}^{T\times C_{out}\times H\times W}\) as the output feature sequences of the encoder, given a fixed position \(c,h,w\) of the output \(\mathbf{A}\), we get a vector along the temporal axis \(a=\mathbf{A}_{\cdot,c,h,w}=\left[a^{1},a^{2},\cdots,a^{t}\right]\)._

Here the encoding feature vector \(a\) is not subscripted in **Definition** 2 because the encoder is usually symmetric, and the choice of position for analysis does not affect its generality. To measure the vector \(a\), we introduce the observer model and information entropy [10]. Assume an observer is positioned right behind the encoder, recording and predicting the elements of vector \(a\) in a time-ordered manner.
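Returning to the training framework for a moment, the mean-over-time decoding and the loss of Eqs. (6)-(7) amount to a few lines of code. The sketch below is our own illustration (the helper name `decode_and_loss` is hypothetical), assuming PyTorch; with this, we come back to the observer model.

```
import torch.nn.functional as F

def decode_and_loss(o, labels):
    # o: SNN output of the last layer, shape (T, batch, num_classes).
    k = o.mean(dim=0)                   # K = (1/T) * sum_t O^t
    return F.cross_entropy(k, labels)   # softmax + cross-entropy, Eqs. (6)-(7)
```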
Hence, the observer model can be formally established as follows:

* The observer notices the elements of the encoded feature sequence \(a\in\mathbb{R}^{T}\) or \(\{0,1\}^{T}\) in time step order.
* At any time step \(t\), the observer remembers all elements from time \(1\) to time \(t-1\), and it is guessing the element \(a^{t}\) of the encoded feature sequence \(a\in\mathbb{R}^{T}\).
* The observer is aware of the mechanism or the encoder structure, but not its specific parameters.

Moreover, at time step \(t\), when guessing \(a^{t}\), the observer answers with a probability, which can be described as:
\[p^{t}(a^{t})=p(a^{t}|a^{1},\cdots,a^{t-1}), \tag{8}\]
And we use the information entropy to measure the quantity of information obtained by the observer, i.e.,
\[\mathcal{H}(\mathbf{V}^{t})=-\sum_{a^{t}}p^{t}(a^{t})\log(p^{t}(a^{t})), \tag{9}\]
where \(\mathbf{V}^{t}\) is used to indicate the random variable version of \(a^{t}\). Specifically, when \(p^{t}(a^{t})=0\) or \(1\), \(p^{t}(a^{t})\log(p^{t}(a^{t}))=0\), which means that a deterministic event contributes \(0\) to the information entropy. Moreover, for the observer model, when the element \(a^{t}\) is deterministic, there is no additional information that deserves observing at time \(t\). To better understand the concept of information entropy in this context, it is crucial to consider the role of an encoder, whose task is to convert information into tensors that generate temporal dynamics. Ideally, the encoder should utilize as many time steps as possible to code information, resulting in a positive information entropy along the time axis. The positive entropy indicates the presence of information, which is crucial for spiking neural networks. While it is difficult to assign a precise value to the entropy, it is possible to measure the duration of positive entropy. In this way, longer-lasting positive entropy can be considered a more effective use of the temporal dimension.

**Definition 3**.: _Dynamics Loss & Dynamics Duration. Considering a specific position with encoded feature vector \(a=[a^{1},a^{2},\cdots,a^{t}]\) along the temporal dimension, if there exists \(t_{e}\) such that for all \(t>t_{e}\) the observer mentioned above has entropy \(\mathcal{H}(\mathbf{V}^{t})=0\), then we call every moment \(t\) after \(t_{e}\) Dynamics Loss. And for \(\mathbf{T}_{e}=\inf\left(t_{e}\right)\), we call it the Dynamics Duration._

**Definition 3** delineates the encoder's effective encoding range. The Dynamics Duration indicates when the coding entropy satisfies \(\mathcal{H}(\mathbf{V}^{t})>0\). At Dynamics Loss time steps, the entropy \(\mathcal{H}(\mathbf{V}^{t})\) drops to 0, rendering encoding unnecessary. Moreover, to assess an encoder, the key is to find and compare its dynamics duration time step \(\mathbf{T}_{e}\).

**Proposition 1**.: _Given the same \(\{\textit{Conv-BN}\}\) parameters, denoting the dynamics duration of GAC as \(\mathbf{T}_{g}\) and direct coding's as \(\mathbf{T}_{d}\), we have \(\mathbf{T}_{g}\geq\mathbf{T}_{d}\)._

Proof.: Denote by \(\mathbf{X}\in\mathbb{R}^{T\times C\times H\times W}\) the repetitive output after the \(\{\textit{Conv-BN}\}\) module and by \(a=[a^{1},a^{2},\cdots,a^{t}]\) the encoded feature vector. Direct coding sends the repetitive output \(\mathbf{X}\) to the spiking neuron for coding, resulting in an encoded feature vector \(a\) that is powerless and periodic with values 0 or 1.
Moreover, the period \(\mathbf{T}_{p}\) is:
\[\mathbf{T}_{p}=\lceil\log_{\tau}(1-\frac{\mathbf{V}_{th}(1-\tau)}{x_{i,j}})\rceil, \tag{10}\]
where \(\tau\) is the time constant, \(\mathbf{V}_{th}\) is the firing threshold and \(x_{i,j}\) is the pixel of the input \(\mathbf{X}\) after the \(\{\textit{Conv-BN}\}\) module. Hence, the subsequent output is predictable once the observer has found the first spike. Thus, direct coding's dynamics duration is \(\mathbf{T}_{d}=\mathbf{T}_{p}\). Moreover, the derivation of \(\mathbf{T}_{p}\) and the analysis of other coding schemes' dynamics can be found in **Supplementary Material B**. For GAC, we multiply the output of direct coding and the GAU to expand the dynamics duration of the encoding results. Thus, GAC's dynamics duration is \(\mathbf{T}_{g}=\lfloor\frac{\mathbf{T}}{\mathbf{T}_{d}}\rfloor\mathbf{T}_{d}\), and it can be seen that \(\mathbf{T}_{g}\geq\mathbf{T}_{d}\). According to **Proposition** 1, GAC sustains its dynamics longer than direct coding, which reflects the superiority of GAC in generating dynamic encoding results. As depicted in Fig. 3, GAC's encoding results on static datasets vary significantly at each time step, i.e., they exhibit temporal dynamics.

### Theoretical Energy Consumption Calculation

The GAC-SNN architecture can transform matrix multiplication into sparse addition, which can be implemented as addressable addition on neuromorphic chips. In the encoding layer, convolution operations serve as MAC operations that convert analog inputs into spikes, similar to direct coding-based SNNs [22]. Conversely, in the SNN's architecture, the convolution (Conv) or fully connected (FC) layers transmit spikes and perform AC operations to accumulate weights for postsynaptic neurons. The inference energy cost of GAC-SNN can thus be expressed as follows (see the sketch at the end of this section):
\[\begin{split} E_{total}=E_{MAC}\cdot FL_{conv}^{1}+\\ E_{AC}\cdot T\cdot(\sum_{n=2}^{N}FL_{conv}^{n}\cdot fr^{n}+\sum_{m=1}^{M}FL_{fc}^{m}\cdot fr^{m}),\end{split} \tag{11}\]
where \(N\) and \(M\) are the total numbers of Conv and FC layers, \(E_{MAC}\) and \(E_{AC}\) are the energy costs of MAC and AC operations, and \(fr^{m}\), \(fr^{n}\), \(FL_{conv}^{n}\) and \(FL_{fc}^{m}\) are the firing rates and FLOPs of the \(n\)-th Conv and \(m\)-th FC layers. Previous SNN works [16, 17] assume a 32-bit floating-point implementation in 45nm technology, where \(E_{MAC}\) = 4.6pJ and \(E_{AC}\) = 0.9pJ.

## Experiments

In this section, we evaluate the classification performance of GAC-SNN on static datasets, e.g., CIFAR10, CIFAR100, and ImageNet [15, 14]. To verify the effectiveness and efficiency of the proposed coding, we integrate the GAC module into the Membrane Shortcut (MS) ResNet [14], to see if the integrated architecture can generate significant improvement when compared with previous state-of-the-art works. Specifically, the details of the architecture are shown in Fig. 2-(c), and why we use it is illustrated in the discussions. More details on training, datasets, hyper-parameter settings, convergence analysis, and trainable parameter analysis can be found in **Supplementary Material B**.

Figure 3: Visualization results. (a) Original image. (b)(c)(d)(e) Encoding results of the direct coding (top) and GAC (bottom) at different time steps. Compared to direct coding, GAC enhances dynamics by introducing variations at each time step.
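As a back-of-the-envelope illustration of Eq. (11), the sketch below estimates the inference energy from per-layer FLOPs and firing rates. The function name and argument layout are our own assumptions, not the authors' script:

```
E_MAC, E_AC = 4.6e-12, 0.9e-12   # energy per MAC / AC operation in Joules (45nm)

def gac_snn_energy(T, conv_flops, conv_rates, fc_flops, fc_rates):
    # Eq. (11): the encoding conv (index 0) runs MAC operations once; every
    # later Conv/FC layer runs AC operations T times, weighted by its firing rate.
    e_total = E_MAC * conv_flops[0]
    spike_ops = sum(f * r for f, r in zip(conv_flops[1:], conv_rates[1:]))
    spike_ops += sum(f * r for f, r in zip(fc_flops, fc_rates))
    return e_total + E_AC * T * spike_ops
```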
**GAC Can Produce Powerful and Dynamic Results** We evaluated GAC's effectiveness in reducing redundant temporal information and improving encoding results for static datasets. By training MS-ResNet-34 on ImageNet with and without GAC, we generated the encoding output shown in Fig. 3. It can be seen that our GAC helps SNNs capture more texture information. Hence, our approach enhances SNNs' representation ability and temporal dynamics by introducing significant variations in the GAC results at each time step, compared to the periodic output of direct coding.

**GAC Can Get Effective and Efficient SNNs**

**Effectiveness.** The GAC-SNN demonstrates remarkable performance enhancement compared to existing state-of-the-art works (Tab. 1-2). On CIFAR10 and CIFAR100, GAC achieves higher accuracy than previous methods using only 2 time steps. With the same time steps, GAC improves by 1.43% and 3.10% on CIFAR10 and CIFAR100 over GLIF [23]. Moreover, compared to the baseline MS-ResNet [14], our method outperforms it on CIFAR10 and CIFAR100 by 1.54% and 4.04% with 6 time steps. For the larger and more challenging ImageNet dataset, compared with the baseline MS-ResNet [14], applying our GAC to MS-ResNet-18 significantly increases the accuracy (65.14% vs. 63.10%). Compared with other advanced works [19, 23], GAC-based MS-ResNet-34 achieves 70.42% top-1 accuracy and surpasses all previous directly-trained SNNs of the same depth.

**Efficiency.** Compared with prior works, the GAC-SNN shines in terms of energy consumption (Tab. 2). We first make an intuitive comparison of energy consumption in the SNN field. Specifically, GAC-SNN (this work) vs. SEW-ResNet-34 at 4 time steps: power, 2.20mJ vs. 4.04mJ. That is, our model has +3.22% higher accuracy than SEW-ResNet with only 54.5% of its power. And GAC-SNN (this work) vs. MS-ResNet-18 vs. Att-MS-ResNet-18 at 6 time steps: power, 2.34mJ vs. 4.29mJ vs. 4.11mJ. That is, our model has the lowest power consumption for the same structure and time steps. At depths 18 and 34, MS-ResNet (baseline) has 1.83x (4.29mJ/2.34mJ) and 1.51x (5.11mJ/3.38mJ) higher energy consumption than our GAC-SNN, respectively. At the same time, our task performance on these same-depth network structures improves by +2.04% and +0.83%, respectively.

### Ablation study

**Comparison between Different SNN Coding Schemes.** To further demonstrate the advantage of GAC, we evaluate the performance of our GAC and other coding schemes, e.g., Phase coding [15], Temporal coding [20], Rate coding [20], and Direct coding [20]. Tab. 3 displays the CIFAR10 test accuracy, where GAC achieves 96.46% top-1 accuracy with MS-ResNet-18 in 6 time steps.

**The Effect of Parameter Kernel Size \(K\).** We investigate the impact of the 2D convolution kernel size \(K\) in the Spatial Channel Attention module of our GAC. Specifically, there is a trade-off between performance and latency as the kernel size increases: as the kernel size grows, so does the receptive field of the local attention mechanism, which improves SNN performance. These benefits do, however, come at the cost of more parameters and higher latency. To this end, we trained the GAC-based MS-ResNet-18 on CIFAR100 with various \(K\) values. As shown in Fig. 4-(b), accuracy rises with increasing \(K\), plateauing after \(K\) exceeds 4.
This indicates our GAC maintains strong generalization despite large \(K\) variations. To maintain excellent performance and efficiency, we employ \(K=4\) in our work.

| Methods | Architecture | Params (M) | Time Steps | CIFAR10 Acc.(%) | CIFAR100 Acc.(%) |
| --- | --- | --- | --- | --- | --- |
| ANN2SNN [1] | VGG-16 | 33.60/33.64 | 32 | 95.42 | 76.45 |
| tdBN [1]\({}^{AAAI}\) | Spiking-ResNet-19 | 12.63/12.67 | 6 | 93.16 | 71.12 |
| TET [1] | Spiking-ResNet-19 | 12.63/12.67 | 6 | 94.50 | 74.72 |
| GLIF [23]\({}^{NeurIPS}\) | Spiking-ResNet-19 | 12.63/12.67 | 6 | 95.03 | 77.35 |
| MS-ResNet\({}^{\ddagger}\) [14] | MS-ResNet-18 | 12.50/12.54 | 6 | 94.92 | 76.41 |
| **GAC-SNN** | MS-ResNet-18 | 12.63/12.67 | 6 | **96.46**±0.06 | **80.45**±0.27 |
| **GAC-SNN** | MS-ResNet-18 | 12.63/12.67 | 4 | 96.24±0.08 | 79.83±0.15 |
| **GAC-SNN** | MS-ResNet-18 | 12.63/12.67 | 2 | 96.18±0.03 | 78.92±0.10 |
| ANN\({}^{\ddagger}\) [14] | MS-ResNet-18 | 12.50/12.54 | N/A | 96.75 | 80.67 |

Table 1: Comparison between the proposed methods and previous works on CIFAR datasets. \({}^{\ddagger}\) denotes self-implemented results with the open-source code of [14]. The "Params" column indicates the network parameter size on the CIFAR10/100 datasets.

Figure 4: Ablation study on CIFAR100.

**Comparison of Different Attention.** We conducted ablation studies on the Temporal Attention (TA) and Spatial Channel Attention (SCA) modules to assess their effects. Fig. 4-(a) indicates that the SCA module contributes more to the performance improvement, because in most SNN designs the number of channels far exceeds the number of time steps; the SCA module therefore extracts more significant features than the TA module. Notably, regardless of which module we ablate, performance is affected, which supports our design choices.

## Discussions

**Analysis of Different GAC-SNN ResNet Architectures.** The residual connection is a crucial basic operation in deep SNN ResNet architectures, and there are three shortcut techniques in existing advanced deep SNNs. Spiking-ResNet [12] performs a shortcut between membrane potential and spike. Spike-Element-Wise (SEW) ResNet [13] employs a shortcut to connect the output spikes in different layers. Membrane Shortcut (MS) ResNet [12] creates a shortcut between the membrane potentials of spiking neurons in different layers. Specifically, we leverage the membrane shortcut in the proposed GAC for this reason: _spike-driven_ describes the capacity of the SNN architecture to convert matrix multiplication (i.e., high-power Multiply-Accumulation) between weight and spike tensors into sparse addition (i.e., low-power Accumulation). Spike-driven operations can only be supported by binary spikes. However, as the SEW shortcut creates additions between binary spikes, the values in the spike tensors become multi-bit (integer) spikes. Additionally, GAC-based Spiking-ResNet is not entirely spike-driven: when using GAC, the input to the second convolution layer becomes floating-point, although the inputs to the remaining layers are still spike tensors. By contrast, as shown in Fig. 2-(c), spiking neurons are followed by the MS shortcut. Hence, each convolution layer in the GAC-based MS-ResNet architecture always receives a sparse binary spike tensor as its input.

**Impact of GAC and Other SNNs' Attention Methods on the Spike-driven Nature.** As shown in Fig.
1-(b), other SNN-oriented attention works [23, 20], which add attention mechanisms inside the SNN architecture, need to design numerous multiplication blocks and prevent all matrix multiplications related to the spike matrix from being converted into sparse additions, which hinders the implementation on neuromorphic hardware. However, adding an attention mechanism in the encoder does not. As the encoder and architecture are decoupled in SNN hardware design [10], our GAC, like direct coding [12], incorporates the multiplication block for analog-to-spike conversion in the encoder without impacting the spike-driven traits of the sparse-addition SNN architecture.

| Methods | Architecture | Spike-driven | Params (M) | Time Steps | Power (mJ) | Top-1 Acc.(%) |
| --- | --- | --- | --- | --- | --- | --- |
| ANN2SNN [12]\({}^{AAAI}\) | ResNet-34 | ✓ | 21.79 | 64 | - | 68.61 |
| TET [13]\({}^{ICLR}\) | Spiking-ResNet-34 | ✓ | 21.79 | 6 | - | 64.79 |
| tdBN [13]\({}^{AAAI}\) | Spiking-ResNet-34 | ✓ | 21.79 | 6 | 6.39 | 63.72 |
| SEW-ResNet [13]\({}^{NeurIPS}\) | SEW-ResNet-34 | ✗ | 21.79 | 4 | 4.04 | 67.04 |
| MS-ResNet [12] | MS-ResNet-18 | ✓ | 11.69 | 6 | 4.29 | 63.10 |
| MS-ResNet [12] | MS-ResNet-34 | ✓ | 21.80 | 6 | 5.11 | 69.43 |
| Att-MS-ResNet [23]\({}^{TPAMI}\) | Att-MS-ResNet-18 | ✗ | 11.96 (+0.27) | 6 | 4.11 | 64.15\({}^{*}\) |
| Att-MS-ResNet [23]\({}^{TPAMI}\) | Att-MS-ResNet-34 | ✗ | 22.30 (+0.50) | 6 | 5.05 | 69.35\({}^{*}\) |
| **GAC-SNN** | MS-ResNet-18 | ✓ | 11.82 (**+0.13**) | 6/4 | 2.34/**1.49** | **65.14**/64.05 |
| **GAC-SNN** | MS-ResNet-34 | ✓ | 21.93 (**+0.13**) | 6/4 | 3.38/**2.20** | **70.42**/69.77 |
| ANN [12] | MS-ResNet-18 | ✗ | 11.69 | N/A | 14.26 | 69.76 |
| ANN [12] | MS-ResNet-34 | ✗ | 21.80 | N/A | 16.87 | 73.30 |

Table 2: Comparison between the proposed method and previous works on the ImageNet dataset. Power is the average theoretical energy consumption when predicting a batch of images from the test set, the details of which are shown in Eq. (11). The "Spike-driven" column indicates whether an independent multiplication module is required in the SNN architecture, which hinders the implementation on neuromorphic hardware. \({}^{*}\) needs a large training time (1000 epochs and 600 batch size) compared to other methods.

| Architecture | Schemes | Time Steps | Acc.(%) |
| --- | --- | --- | --- |
| ResNet-19 | Phase Coding | 8 | 91.40 |
| VGG-16 | Temporal Coding | 100 | 92.68 |
| ResNet-19 | Rate Coding | 6 | 93.16 |
| MS-ResNet-18 | Direct Coding | 6 | 94.92 |
| MS-ResNet-18 | GAC | 6 | **96.46**±0.06 |

Table 3: Comparisons with different coding schemes.

## Conclusion

This paper focuses on the SNNs' coding problem, which is described as the inability of direct coding to produce powerful and temporally dynamic outputs. We have observed that this issue manifests as periodic, powerless spike representations due to the repetitive operations in direct coding. To tackle this issue, we propose Gated Attention Coding (GAC), a spike-driven and neuromorphic hardware-friendly solution that seamlessly integrates with existing Conv-based SNNs. GAC incorporates a multi-dimensional attention mechanism inspired by attention mechanisms and the dynamics of human vision in neuroscience. By effectively establishing spatio-temporal relationships at each moment, GAC acts as a preprocessing layer and efficiently encodes static images into powerful representations with temporal dynamics while minimizing redundancy.
Our method has been extensively evaluated through experiments, demonstrating its effectiveness with state-of-the-art results: CIFAR10 (96.46%), CIFAR100 (80.45%), and ImageNet (70.42%). We hope our investigations pave the way for more advanced coding schemes and inspire the design of high-performance and efficient spike-driven SNNs.

## Supplementary Material B

In this supplementary material B, we first present the hyper-parameter settings and training details. Then we present the details of our architecture and training algorithm, the analysis of GAC's firing rate and of the period of direct coding [21], and other coding schemes' dynamics. Moreover, we give additional results to depict GAC's advantages.

## Hyper-Parameters Settings

In this section, we give the specific hyper-parameters of the LIF model and the training settings of all experiments, as depicted in Tab. 4 and Tab. 5.

## Training Details

For all of our experiments, we use the stochastic gradient descent optimizer with 0.9 momentum and weight decay (e.g., \(5e-4\) for CIFAR10 and \(1e-4\) for ImageNet) for 250 training epochs. The learning rate is set to 0.1 and is cosine-decayed to 0. All the training and testing codes are implemented using PyTorch [22]. The training is performed on eight NVIDIA V100 GPUs with 32GB memory for each task. The summaries of the datasets and the augmentation involved in the experiments are listed below.

**CIFAR 10/100** consist of 50k training images and 10k testing images of size 32 x 32 [16]. We use ResNet-18 for both CIFAR10 and CIFAR100. Random horizontal flips, crops, and Cutmix are applied to the training images for augmentation.

**ImageNet** contains around 1.3 million 1,000-class images for training and 50,000 images for validation [17]. The batch size and the total number of epochs are set to 256 and 250, respectively. We crop the images to 224x224 and use the standard augmentation for the training data, similar to Att-MS-ResNet [23]. Moreover, we use label smoothing to avoid gradient vanishing, again similar to Att-MS-ResNet [23].

## Architecture Details

The MS-ResNet-series architecture is the primary network architecture used in this work. There are numerous ResNet variations utilized in the field of SNNs, such as MS-ResNet-18 and MS-ResNet-34, and they are used interchangeably in the existing literature [11, 24]. To avoid confusion between the various ResNet architectures, we sort out the architecture details in this part. Specifically, MS-ResNet-18 was originally designed for the ImageNet dataset in [11]. To process the CIFAR datasets, we remove the max-pooling layer, replace the first 7 \(\times\) 7 convolution layer with a 3 \(\times\) 3 convolution layer, and replace the first and second 2-stride convolution operations with 1-stride ones, following the modification philosophy of TET [4].

## Analysis of the Firing Rate

GAC controls membrane potentials and reduces the spike firing rates of the post-encoding layers in GAC-SNN. It achieves this by transforming matrix multiplication into sparse addition, which can be implemented as addressable addition on neuromorphic chips. Moreover, high sparsity is observed in GAC-based MS-ResNet, as shown in Fig. 5. The firing rate \(r\) is defined as the firing probability of each neuron per time step and can be estimated by:
\[r=\frac{\#\text{Spike}}{\#\text{Neuron}\cdot T}, \tag{12}\]
where #Spike denotes the number of spikes during \(T\) time steps and #Neuron denotes the number of neurons in the network.
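A minimal sketch of how Eq. (12) can be estimated from a recorded binary spike tensor (assuming a PyTorch tensor of shape (T, ...); the helper name is our own):

```
def firing_rate(spikes):
    # Eq. (12) for a binary spike tensor of shape (T, ...): the first axis is
    # time, the remaining axes enumerate the neurons.
    T = spikes.shape[0]
    return float(spikes.sum()) / (spikes[0].numel() * T)
```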
Specifically, after applying the GAC technique, the average spike firing rate of MS-ResNet-18 decreased from **0.215** to **0.154**. Similarly, the firing rate of MS-ResNet-34 reduced from **0.225** to **0.1729**. These reductions indicate that GAC effectively contributes to reducing the spike firing rates of both the MS-ResNet-18 and MS-ResNet-34 models. Lower spike firing rates can be beneficial in terms of reducing computational load and energy consumption, and potentially improving the overall efficiency of the models.

| Parameter | Value |
| --- | --- |
| Threshold \(\mathbf{V}_{th}\) | 0.5 |
| Reset potential \(\mathbf{V}_{reset}\) | 0 |
| Decay factor \(\tau\) | 0.25 |
| Surrogate function's window size \(a\) | 1 |

Table 4: Hyper-parameter setting of the LIF model.

| Parameter | CIFAR10 | CIFAR100 | ImageNet |
| --- | --- | --- | --- |
| Learning Rate | \(5e-4\) | \(5e-4\) | \(1e-4\) |
| Batch Size | 64 | 64 | 256 |
| Time steps | 6/4/2 | 6/4/2 | 6/4 |
| Training Epochs | 250 | 250 | 250 |

Table 5: Hyper-parameter training settings of GAC-SNNs.

| Stage | Output Size | ResNet-18 | ResNet-34 |
| --- | --- | --- | --- |
| Conv1 | 112x112 | 7x7, 64, stride 2 | 7x7, 64, stride 2 |
| Conv2 | 56x56 | [3x3, 64; 3x3, 64] x2 | [3x3, 64; 3x3, 64] x3 |
| Conv3 | 28x28 | [3x3, 128; 3x3, 128] x2 | [3x3, 128; 3x3, 128] x4 |
| Conv4 | 14x14 | [3x3, 256; 3x3, 256] x2 | [3x3, 256; 3x3, 256] x6 |
| Conv5 | 7x7 | [3x3, 512; 3x3, 512] x2 | [3x3, 512; 3x3, 512] x3 |
| FC | 1x1 | AveragePool, FC-1000 | AveragePool, FC-1000 |

Table 6: MS-ResNet-series architecture for ImageNet.

**Derivation of the direct coding period \(\mathbf{T}_{p}\)**

For direct coding, the real-valued data repeatedly pass through the {_Conv-BN_} layer for linear transformations and then through the LIF model for \(\{0,1\}\) encoding results. Since the input of the LIF model is the same at every time step, it is trivial that the output of the LIF model in the encoding layer is periodical, and the information of the input (real value) is encoded into the period \(\mathbf{T}_{p}\). Suppose the reset value is \(0\) and the threshold is \(\mathbf{V}_{th}\); the time \(\mathbf{T}_{p}\) is expressed as follows:
\[\mathbf{T}_{p}=\lceil\log_{\tau}(1-\frac{\mathbf{V}_{th}(1-\tau)}{x_{i,j}})\rceil, \tag{13}\]
Here \(\tau\) is the attenuation factor and \(x_{i,j}\) is the pixel of the input image of the LIF model after the {_Conv-BN_} module. \(\mathbf{T}_{p}\) is the first firing time step and the firing period of the corresponding LIF model. It is trivial that \(\mathbf{T}_{p}\) increases monotonically with \(x_{i,j}\), and the resolution of the encoding depends on the value of \(x_{i,j}\). Most importantly, after the LIF model, direct coding is a period coding, making the \(\{0,1\}\) output periodical. Moreover, the derivation of direct coding's period is as follows: suppose that at \(t=0\) the membrane potential is \(\mathbf{U}^{t}=0\) and that the neuron fires at time step \(t=\mathbf{T}_{p}\). According to direct coding, the input of any specific neuron is \(x_{i,j}\) (a constant).
Since the threshold and the attenuation factor are denoted by \(\mathbf{V}_{th}\) and \(\tau\), respectively, according to the iterative formula of the LIF model before the spike firing time we have:
\[\mathbf{U}^{t}=\tau\mathbf{U}^{t-1}+x_{i,j}=\sum_{k=0}^{t-1}\tau^{k}x_{i,j}=x_{i,j}\sum_{k=0}^{t-1}\tau^{k}, \tag{14}\]
Also, we have:
\[\mathbf{U}^{T_{p}-1}\leq\mathbf{V}_{th}\leq\mathbf{U}^{T_{p}}, \tag{15}\]
Bringing (14) into (15), we have:
\[x_{i,j}\sum_{k=0}^{T_{p}-2}\tau^{k}\leq\mathbf{V}_{th}\leq x_{i,j}\sum_{k=0}^{T_{p}-1}\tau^{k}, \tag{16}\]
\[(1-\tau^{T_{p}-1})\frac{x_{i,j}}{1-\tau}\leq\mathbf{V}_{th}\leq(1-\tau^{T_{p}})\frac{x_{i,j}}{1-\tau}, \tag{17}\]
\[\tau^{T_{p}}\leq 1-\frac{\mathbf{V}_{th}(1-\tau)}{x_{i,j}}\leq\tau^{T_{p}-1}, \tag{18}\]
\[T_{p}-1\leq\log_{\tau}(1-\frac{\mathbf{V}_{th}(1-\tau)}{x_{i,j}})\leq T_{p}, \tag{19}\]
\[T_{p}=\lceil\log_{\tau}(1-\frac{\mathbf{V}_{th}(1-\tau)}{x_{i,j}})\rceil, \tag{20}\]

### Analysis of other Coding Schemes' Dynamics

In this section, we use the observer model to analyze the dynamics of common coding schemes such as rate coding [20] and the coding scheme used in MS-ResNet [14]. Hence, we give the following two propositions, i.e.,

**Proposition 2**.: _For rate coding with positive value \(x\in(0,1)\), given the total time step \(\mathbf{T}\) and denoting the dynamics duration of rate coding as \(\mathbf{T}_{r}\), then \(\mathbf{T}_{r}=\mathbf{T}\)._

Proof.: For rate coding, the output at each time step follows the Bernoulli distribution with \(n=1,p=x\), and is independent of the outputs at the other time steps (that is, independent and identically distributed). Therefore, for any time step \(t\), the information entropy \(\mathcal{H}(\mathbf{V}^{t})=-x\log(x)-(1-x)\log(1-x)>0\). This means that the encoded information is dynamic at all time steps.

**Proposition 3**.: _For MS coding, denoting the dynamics duration of MS coding as \(\mathbf{T}_{m}\), then \(\mathbf{T}_{m}=\mathbf{1}\)._

Proof.: For MS coding, the information at each time step is repeated. Thus, it is predictable after the first observation. Therefore, \(T_{m}=1\).

According to **Proposition** 2, rate coding has all-time-step dynamics. However, it requires long time steps to maintain high performance, while small time steps result in a lower representation resolution. According to **Proposition** 3, MS coding results are the same at every time step. Our GAC-based MS-ResNet improves temporal dynamics and encoding efficiency through an attention mechanism.

Figure 5: Firing rate advantage on the ImageNet dataset.

## Training Algorithm to Fit Target Output

In this section, we introduce the training process of SNN gradient descent and the parameter update method of STBP [14]. SNNs' parameters can be trained using gradient descent techniques, just like ANNs, after determining the derivative of the spike generation process. Classification, as well as other tasks for both ANNs and SNNs, can be thought of as optimizing network parameters to fit a target output given a certain input. Moreover, the accumulated gradients of the loss \(\mathcal{L}\) with respect to the weights \(\mathbf{W}^{n}\) at layer \(n\) can be calculated as:
\[\begin{cases}\frac{\partial\mathcal{L}}{\partial\mathcal{S}_{i}^{t,n}}=\sum_{j}\frac{\partial\mathcal{L}}{\partial\mathcal{U}_{j}^{t,n+1}}\frac{\partial\mathcal{U}_{j}^{t,n+1}}{\partial\mathcal{S}_{i}^{t,n}}+\frac{\partial\mathcal{L}}{\partial\mathcal{U}_{i}^{t+1,n}}\frac{\partial\mathcal{U}_{i}^{t+1,n}}{\partial\mathcal{S}_{i}^{t,n}}\\ \frac{\partial\mathcal{L}}{\partial\mathcal{U}_{i}^{t,n}}=\frac{\partial\mathcal{L}}{\partial\mathcal{S}_{i}^{t,n}}\frac{\partial\mathcal{S}_{i}^{t,n}}{\partial\mathcal{U}_{i}^{t,n}}+\frac{\partial\mathcal{L}}{\partial\mathcal{U}_{i}^{t+1,n}}\frac{\partial\mathcal{U}_{i}^{t+1,n}}{\partial\mathcal{U}_{i}^{t,n}}\\ \frac{\partial\mathcal{L}}{\partial\mathbf{W}^{n}}=\sum_{t=1}^{T}\frac{\partial\mathcal{L}}{\partial\mathcal{U}^{t,n+1}}\mathcal{S}^{t,n},\end{cases} \tag{21}\]
where \(\mathcal{S}^{t,n}\) and \(\mathcal{U}^{t,n}\) represent the binary spike and the membrane potential of the neurons in layer \(n\) at time \(t\). Moreover, notice that \(\frac{\partial\mathcal{S}^{t,n}}{\partial\mathcal{U}^{t,n}}\) is non-differentiable. To overcome this problem, Wu et al. (2018) propose the surrogate function, which makes only the neurons whose membrane potentials are close to the firing threshold receive nonzero gradients during backpropagation. In this paper, we use the rectangle function, which has been shown to be effective for gradient descent and may be calculated by (a code sketch is given at the end of this supplementary material):
\[\frac{\partial\mathcal{S}^{t,n}}{\partial\mathcal{U}^{t,n}}=\frac{1}{a}\operatorname{sign}\left(\left|\mathcal{U}^{t,n}-\boldsymbol{V}_{\mathrm{th}}\right|<\frac{a}{2}\right), \tag{22}\]
where \(a\) is a defined coefficient controlling the width of the gradient window.

## Additional Results

### Convergence Analysis

We empirically demonstrate the convergence of our proposed method. As shown in Fig. 6, the performance of our GAC-based MS-ResNet stabilizes and converges to a higher level than that of MS-ResNet as the training epochs increase. Moreover, the GAC-SNN achieves state-of-the-art performance after only 20 epochs, demonstrating its efficacy.

### Trainable Parameter Analysis

We also give the effects of the Temporal Attention (TA), Spatial Channel Attention (SCA), and Gated Attention Unit (GAU) modules on the increase in model parameters, as shown in Tab. 7. First, the proportion of the model parameters contributed by the different modules varies with the dataset. The parameter increases due to SCA and GAU are much higher than that of TA. This phenomenon is consistent with our expectations, given that the channel size of the dataset is much larger than the simulation time step, which implies that SCA carries most of the extra parameters.

### More Visualization Results on GAC

To further illustrate the advantages of our GAC, we provide visualization results (i.e., the model's Grad-CAM (Selvaraju et al., 2017)) for MS-ResNet-34 with GAC or direct coding, as shown in Fig. 7, to help understand our design. The Grad-CAM heat map results show that our GAC helps SNNs capture more texture information.
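For completeness, a minimal sketch of a Heaviside spike function with the rectangular surrogate gradient of Eq. (22), using the LIF hyper-parameters of Tab. 4 (\(V_{th}=0.5\), \(a=1\)). This is a standard STBP building block written by us, not the authors' released code:

```
import torch

class RectSurrogate(torch.autograd.Function):
    # Heaviside spike in the forward pass; rectangular surrogate (Eq. 22) in
    # the backward pass: dS/dU = (1/a) if |U - V_th| < a/2 else 0.
    V_TH, A = 0.5, 1.0   # LIF hyper-parameters from Tab. 4

    @staticmethod
    def forward(ctx, u):
        ctx.save_for_backward(u)
        return (u >= RectSurrogate.V_TH).float()

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        window = ((u - RectSurrogate.V_TH).abs() < RectSurrogate.A / 2).float()
        return grad_output * window / RectSurrogate.A

# usage: spikes = RectSurrogate.apply(membrane_potential)
```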
2302.08286
Theory and Implementation of Complex-Valued Neural Networks
This work explains in detail the theory behind Complex-Valued Neural Network (CVNN), including Wirtinger calculus, complex backpropagation, and basic modules such as complex layers, complex activation functions, or complex weight initialization. We also show the impact of not adapting the weight initialization correctly to the complex domain. This work presents a strong focus on the implementation of such modules on Python using cvnn toolbox. We also perform simulations on real-valued data, casting to the complex domain by means of the Hilbert Transform, and verifying the potential interest of CVNN even for non-complex data.
Jose Agustin Barrachina, Chengfang Ren, Gilles Vieillard, Christele Morisseau, Jean-Philippe Ovarlez
2023-02-16T13:31:10Z
http://arxiv.org/abs/2302.08286v1
# Theory and Implementation of Complex-Valued Neural Networks

###### Abstract

This work explains in detail the theory behind Complex-Valued Neural Networks (CVNNs), including Wirtinger calculus, complex backpropagation and basic modules such as complex layers, complex activation functions or complex weight initialization. We also show the impact of not adapting the weight initialization correctly to the complex domain. This work presents a strong focus on the implementation of such modules in Python using the cvnn toolbox. We also perform simulations on real-valued data, cast to the complex domain by means of the Hilbert Transform, and verify the potential interest of CVNNs even for non-complex data.

_Keywords:_ CVNN, Deep Learning, Complex values, Machine Learning

## 1 Introduction

Although CVNNs have been investigated for particular structures of complex-valued data [1, 2, 3, 4], the difficulties in implementing CVNN models in practice have kept the field from growing further [5]. Indeed, the two most popular Python libraries for developing deep neural networks, which are _PyTorch_ from Meta (formerly named Facebook) and _Tensorflow_ from Google, do not fully support the creation of complex-valued models. Even though _Tensorflow_ does not fully support the implementation of a CVNN, it has one significant benefit: it enables the use of complex-valued data types for the automatic differentiation (autodiff) algorithm [6] to calculate the complex gradients as defined in Appendix C. Since July 2020, _PyTorch_ also added this functionality as BETA with the version 1.6 release. Later, in June 2022, after the release of version 1.12, _PyTorch_ extended its complex functionality by adding complex convolutions (also as BETA). Although this indicates a clear intention to develop towards CVNN support, there is still a lot of development to be done. Libraries to develop CVNNs do exist, the most important one probably being the code published in [7]. However, we have decided not to use this library since the latter uses _Theano_ as a back-end, which is no longer maintained. Another code was published on GitHub that uses _Tensorflow_ and _Keras_ to implement CVNNs [8]. However, as _Keras_ does not support complex-valued numbers, the published code simulates complex operations using real-valued datatypes. Therefore, the user has to transform their complex-valued data into a real-valued equivalent before using this library. The same happened with ComplexPyTorch [9] until it was updated in January 2021 to support complex tensors. During this thesis, we created a _Python_ tool to deal with the implementation of CVNN models using _Tensorflow_ as back-end. Note that the development of this library started in 2019, whereas _PyTorch_ support for complex numbers started in mid-2020, which is the reason why the decision to use _Tensorflow_ instead of _PyTorch_ was made. To the author's knowledge, this was the first library that natively supported complex-number data types. The library is called CVNN and was published [10] using CERN's Zenodo platform, where it has already received 18 downloads. It can be installed using both the Python Index Package (PIP) (Figure 0(a)) and Anaconda. The latter already has 193 downloads as of the 8th of October 2022, as shown in Figure 0(b); none of those downloads were ourselves.
As can be seen from Figure 2, the GitHub repository received an average of 2 clones and almost 50 visits per day in the last two weeks, with 62 stars at the beginning of October 2022. It also has a total of 16 forks and one pull request for a new feature that has already been reviewed and accepted. Six users are actively watching every update of the code, as they have activated the notifications. Finally, two users have code on GitHub importing the library. Thirty issues have been reported, and it was also the subject of 31 private emails. All these metrics are evidence of the impact and interest of the community in the published code. The library was documented using _reStructuredText_ and uploaded for worldwide availability. The link to the full documentation (displayed in Figure 2(a)) can be found here: complex-valued-neural-networks.rtfd.io. The documentation received daily views varying from a minimum of 18 views on one day to a maximum of 173 views, as shown in Figure 2(b). The _Python_ testing framework Pytest was used to maintain a maximum degree of quality and keep the library as bug-free as possible. Before each new feature implementation, a test module was created to assert the expected behavior of the feature and reduce bugs to a minimum. This methodology also guaranteed feature compatibility, as all designed test modules must pass in order to deploy the code. The library also allows the implementation of a Real-Valued Neural Network (RVNN), with the intention of changing as little as possible of the code when using complex data types. This made it possible to perform a straight comparison against _Tensorflow_'s models, which helped in the debugging. Indeed, some _Pytest_ modules assert the same result as _Tensorflow_ when initializing the models with the same seed. Special effort was made on the User eXperience (UX) by keeping the Application Programming Interface (API) as similar as possible to that of _Tensorflow_. The following code extract works both for a _Tensorflow_ and a cvnn application:

```
import numpy as np
import tensorflow as tf

# Get the dataset; when using cvnn you normally want this to be complex,
# for example numpy arrays of dtype np.complex64.
# To be done by each user.
(train_images, train_labels), (test_images, test_labels) = get_dataset()

# This function returns a tf.Model object
model = get_model()

# Compile as any TensorFlow model
model.compile(optimizer='adam', metrics=['accuracy'],
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
model.summary()
```

Figure 2: GitHub cvnn toolbox traffic.

Figure 3: Documentation hosted by _Read the Docs_.
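The extract above leaves `get_dataset()` and `get_model()` undefined. After installing the toolbox (`pip install cvnn`), a plausible `get_model()` could look as follows. The layer and activation names (`ComplexInput`, `ComplexFlatten`, `ComplexDense`, `cart_relu`, `convert_to_real_with_abs`) follow the cvnn documentation, but treat this as an illustrative sketch rather than the exact code used here:

```
import tensorflow as tf
import cvnn.layers as complex_layers

def get_model():
    # Sequential Keras model built from complex-valued layers.
    # Expects complex inputs, e.g., (32, 32, 3) images of dtype complex64.
    model = tf.keras.models.Sequential()
    model.add(complex_layers.ComplexInput(input_shape=(32, 32, 3)))
    model.add(complex_layers.ComplexFlatten())
    # 'cart_relu' applies ReLU separately on the real and imaginary parts.
    model.add(complex_layers.ComplexDense(64, activation='cart_relu'))
    # The last activation maps C -> R, so the usual real-valued
    # losses and metrics of the compile() call apply.
    model.add(complex_layers.ComplexDense(10, activation='convert_to_real_with_abs'))
    return model
```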
### Liouville Theorem

Liouville graduated from the Ecole Polytechnique in 1827. After some years as an assistant at various institutions including the Ecole Centrale Paris, he was appointed as a professor at the Ecole Polytechnique in 1838.

**Definition 2.1**.: In complex analysis, an _entire function_, also called an _integral function_, is a complex-valued function that is _holomorphic_ at all finite points over the whole complex plane.

**Definition 2.2**.: Given a function \(f\), the function is _bounded_ if \(\exists M\in\mathbb{R}^{+}:|f(z)|<M\).

**Theorem 2.1** (Cauchy integral theorem).: Given \(f\) analytic through region **D**, then the contour integral of \(f(z)\) along any closed path \(C\) inside region **D** is zero:
\[\oint_{C}f(z)\,dz=0\,. \tag{1}\]

**Theorem 2.2** (Cauchy integral formula).: Given \(f\) analytic on the boundary \(C\) with \(z_{0}\) any point inside \(C\), then:
\[f(z_{0})=\frac{1}{2\pi i}\oint_{C}\frac{f(z)}{z-z_{0}}\,dz\,, \tag{2}\]
where the contour integration is taken in the counterclockwise direction.

**Corollary 2.2.1**.:
\[f^{(n)}(z_{0})=\frac{n!}{2\pi i}\oint_{C}\frac{f(z)}{(z-z_{0})^{n+1}}\,dz\,. \tag{3}\]

In complex analysis, Liouville's theorem states that every _bounded entire function_ must be constant. That is:

**Theorem 2.3** (Liouville's Theorem).:
\[f:\mathbb{C}\longrightarrow\mathbb{C}\text{ holomorphic }/\exists M\in\mathbb{R}^{+}:|f(z)|<M,\forall z\in\mathbb{C}\Rightarrow f=c,\,c\in\mathbb{C}\,. \tag{4}\]
Equivalently, non-constant _holomorphic_ functions on \(\mathbb{C}\) have unbounded images.

Proof.: Because every _holomorphic_ function is analytic (class \(C^{\infty}\)), that is, it can be represented as a Taylor series, we can write it as:
\[f(z)=\sum_{k=0}^{\infty}a_{k}\,z^{k}\,. \tag{5}\]
As \(f\) is _holomorphic_ in the open region enclosed by the path and continuous on its closure, because of Corollary 2.2.1 we have:
\[a_{k}=\frac{f^{(k)}(0)}{k!}=\frac{1}{2\pi i}\oint_{C_{r}}\frac{f(z)}{z^{k+1}}\,dz\,, \tag{6}\]
where \(C_{r}\) is a circle of radius \(r>0\).
Because \(f\) is bounded:

\[|a_{k}|\leqslant\frac{1}{2\pi}\oint_{C_{r}}\frac{|f(z)|}{|z|^{k+1}}\,|dz|\leqslant\frac{1}{2\pi}\oint_{C_{r}}\frac{M}{r^{k+1}}\,|dz|=\frac{M}{2\pi r^{k+1}}\oint_{C_{r}}|dz|\,, \tag{7}\]
\[=\frac{M}{2\pi r^{k+1}}\,2\pi r\,, \tag{8}\]
\[=\frac{M}{r^{k}}\,. \tag{9}\]

This last bound is also known as _Cauchy's inequality_. Since \(r\) is an arbitrary positive real number, taking the limit \(r\to\infty\) yields \(a_{k}\to 0\) for all \(k\neq 0\). Therefore, from Equation 5, we have that \(f(z)=a_{0}\), i.e., \(f\) is constant.

### Wirtinger Calculus

Wirtinger calculus, named after Wilhelm Wirtinger (1927) [13], generalizes the notion of the complex derivative, with _holomorphic_ functions becoming only a special case. Further reading on Wirtinger calculus can be found in [14, 15].

**Theorem 2.4** (Wirtinger Calculus).: Given a complex function \(f(z)\) of a complex variable \(z=x+i\,y\in\mathbb{C},x,y\in\mathbb{R}\), the partial derivatives with respect to \(z\) and \(\overline{z}\), respectively, are defined as:

\[\begin{split}\frac{\partial f}{\partial z}&\triangleq\frac{1}{2}\left(\frac{\partial f}{\partial x}-i\,\frac{\partial f}{\partial y}\right),\\ \frac{\partial f}{\partial\overline{z}}&\triangleq\frac{1}{2}\left(\frac{\partial f}{\partial x}+i\,\frac{\partial f}{\partial y}\right).\end{split} \tag{10}\]

These derivatives are called the \(\mathbb{R}\)-derivative and the conjugate \(\mathbb{R}\)-derivative, respectively. As said before, the _holomorphic_ case is only a special case, namely the one where \(f\), seen as a function \(f(z,\overline{z})\), does not depend on \(\overline{z}\) (see Theorem 2.5). Wirtinger calculus enables working with _non-holomorphic_ functions, providing an alternative method for computing the gradient that also improves the stability of the training process.

Proof.: Defining \(z=x+i\,y\), one can also define \(x(z,\overline{z}),y(z,\overline{z}):\mathbb{C}\longrightarrow\mathbb{R}\) as follows:

\[\begin{split}x&=\frac{1}{2}\left(z+\overline{z}\right),\\ y&=\frac{1}{2\,i}\left(z-\overline{z}\right).\end{split} \tag{11}\]

Using Theorem A.4:

\[\begin{split}\frac{\partial f}{\partial z}&=\frac{\partial f}{\partial x}\frac{\partial x}{\partial z}+\frac{\partial f}{\partial y}\frac{\partial y}{\partial z}\,,\\ &=\frac{\partial f}{\partial x}\frac{1}{2}+\frac{\partial f}{\partial y}\left(\frac{-i}{2}\right),\\ &=\frac{1}{2}\left(\frac{\partial f}{\partial x}-i\,\frac{\partial f}{\partial y}\right).\end{split} \tag{12}\]

**Corollary 2.4.1**.:

\[\begin{split}\frac{\partial f}{\partial g}+\frac{\partial f}{\partial\overline{g}}&=\frac{1}{2}\left(\frac{\partial f}{\partial g_{\mathrm{Re}}}-i\,\frac{\partial f}{\partial g_{\mathrm{Im}}}\right)+\frac{1}{2}\left(\frac{\partial f}{\partial g_{\mathrm{Re}}}+i\,\frac{\partial f}{\partial g_{\mathrm{Im}}}\right)=\frac{\partial f}{\partial g_{\mathrm{Re}}}\,,\\ i\left(\frac{\partial f}{\partial g}-\frac{\partial f}{\partial\overline{g}}\right)&=\frac{i}{2}\left(\frac{\partial f}{\partial g_{\mathrm{Re}}}-i\,\frac{\partial f}{\partial g_{\mathrm{Im}}}\right)-\frac{i}{2}\left(\frac{\partial f}{\partial g_{\mathrm{Re}}}+i\,\frac{\partial f}{\partial g_{\mathrm{Im}}}\right)=\frac{i}{2}\left(-2i\,\frac{\partial f}{\partial g_{\mathrm{Im}}}\right)=\frac{\partial f}{\partial g_{\mathrm{Im}}}\,,\end{split} \tag{13}\]

where \(g_{\mathrm{Re}}\) and \(g_{\mathrm{Im}}\) are the real and imaginary parts of \(g\), respectively.
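To make these definitions concrete, the following is a small numerical sketch (plain numpy; the helper name is ours) that checks the Wirtinger derivatives of the non-holomorphic function \(f(z)=z\,\overline{z}\), for which \(\partial f/\partial z=\overline{z}\) and \(\partial f/\partial\overline{z}=z\) hold analytically:

```
import numpy as np

def wirtinger_derivatives(f, z, h=1e-6):
    # Central finite differences along the real and imaginary axes,
    # combined as in Equation 10.
    dfdx = (f(z + h) - f(z - h)) / (2 * h)
    dfdy = (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)
    return 0.5 * (dfdx - 1j * dfdy), 0.5 * (dfdx + 1j * dfdy)

z0 = 3.0 + 4.0j
df_dz, df_dzbar = wirtinger_derivatives(lambda z: z * np.conj(z), z0)
print(df_dz)     # ~ conj(z0) = (3-4j)
print(df_dzbar)  # ~ z0      = (3+4j)
```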
**Theorem 2.5**.: Given \(f:\mathbb{C}\longrightarrow\mathbb{C}\)_holomorphic_ with \(f(x+iy)=u(x,y)+iv(x,y)\), where \(u,v:\mathbb{R}^{2}\longrightarrow\mathbb{R}\) are real-differentiable functions, then

\[\frac{\partial f}{\partial\overline{z}}=0\,.\]

Proof.: Using Wirtinger calculus (Section 2.2),

\[\frac{\partial f}{\partial\overline{z}}=\frac{1}{2}\left(\frac{\partial f}{\partial x}+i\,\frac{\partial f}{\partial y}\right). \tag{14}\]

By definition, then:

\[\begin{split}\frac{\partial f}{\partial\overline{z}}&=\frac{1}{2}\left(\frac{\partial f}{\partial x}+i\,\frac{\partial f}{\partial y}\right),\\ &=\frac{1}{2}\left(\left(\frac{\partial u}{\partial x}+i\,\frac{\partial v}{\partial x}\right)+i\,\left(\frac{\partial u}{\partial y}+i\,\frac{\partial v}{\partial y}\right)\right),\\ &=\frac{1}{2}\left(\left(\frac{\partial u}{\partial x}-\frac{\partial v}{\partial y}\right)+i\,\left(\frac{\partial v}{\partial x}+\frac{\partial u}{\partial y}\right)\right)\,.\end{split} \tag{15}\]

Because \(f\) is _holomorphic_, the Cauchy-Riemann equations (Theorem A.2) apply, making Equation 15 equal to zero.

So far, we have only discussed general chain rule definitions. Here, we derive a particularly interesting chain rule used when working with neural networks: in this optimization context, the cost function is always real-valued, even if its variables are not, so the following chain rule is of direct practical interest.

**Theorem 2.6** (complex chain rule with real output).: Given \(f:\mathbb{C}\to\mathbb{R}\), \(g:\mathbb{C}\to\mathbb{C}\) with \(g(z)=r(z)+i\,s(z)\), \(z=x+i\,y\in\mathbb{C}\):

\[\frac{\partial f}{\partial z}=\frac{\partial f}{\partial r}\frac{\partial r}{\partial z}+\frac{\partial f}{\partial s}\frac{\partial s}{\partial z}\,. \tag{16}\]

Proof.: For this proof, we assume we are already working with the Wirtinger definition of the partial derivatives. Noting that \(r\) and \(s\) are real, so that \(\overline{\partial g/\partial x}=\partial r/\partial x-i\,\partial s/\partial x\) (and analogously for \(y\)):

\[\begin{split}\frac{\partial f}{\partial z}&=\frac{\partial f}{\partial g}\frac{\partial g}{\partial z}+\frac{\partial f}{\partial\overline{g}}\frac{\partial\overline{g}}{\partial z}\,,\\ &=\frac{1}{4}\left(\frac{\partial f}{\partial r}-i\,\frac{\partial f}{\partial s}\right)\left(\frac{\partial g}{\partial x}-i\,\frac{\partial g}{\partial y}\right)+\frac{1}{4}\left(\frac{\partial f}{\partial r}+i\,\frac{\partial f}{\partial s}\right)\overline{\left(\frac{\partial g}{\partial x}+i\,\frac{\partial g}{\partial y}\right)}\,,\\ &=\frac{1}{4}\left(\frac{\partial f}{\partial r}-i\,\frac{\partial f}{\partial s}\right)\left[\left(\frac{\partial r}{\partial x}+i\,\frac{\partial s}{\partial x}\right)-i\,\left(\frac{\partial r}{\partial y}+i\,\frac{\partial s}{\partial y}\right)\right]+\cdots\\ &\cdots+\frac{1}{4}\left(\frac{\partial f}{\partial r}+i\,\frac{\partial f}{\partial s}\right)\left[\left(\frac{\partial r}{\partial x}-i\,\frac{\partial s}{\partial x}\right)-i\,\left(\frac{\partial r}{\partial y}-i\,\frac{\partial s}{\partial y}\right)\right].\end{split} \tag{17}\]

Expanding the products and collecting terms, the sum reduces to

\[\frac{\partial f}{\partial z}=\frac{\partial f}{\partial r}\,\frac{1}{2}\left(\frac{\partial r}{\partial x}-i\,\frac{\partial r}{\partial y}\right)+\frac{\partial f}{\partial s}\,\frac{1}{2}\left(\frac{\partial s}{\partial x}-i\,\frac{\partial s}{\partial y}\right)=\frac{\partial f}{\partial r}\frac{\partial r}{\partial z}+\frac{\partial f}{\partial s}\frac{\partial s}{\partial z}\,,\]

which is the claimed result.

### Notation

An MLP can be represented generically by Figure 4. For that given multi-layered neural network, we define the following variables, which will be used in the following subsections and throughout this work:

* \(0\leq l\leq L\) corresponds to the layer index, where \(L\) is the output layer index and 0 is the input layer index.
* \(1\leq n\leq N_{l}\) is the neuron index, where \(N_{l}\) denotes the number of neurons of layer \(l\).
* \(\omega_{nm}^{(l)}\) is the weight connecting the \(m^{th}\) neuron of layer \(l-1\) to the \(n^{th}\) neuron of layer \(l\).
* \(\sigma\) is the activation function.
* \(X_{n}^{(l)}=\sigma\left(V_{n}^{(l)}\right)\) is the output of layer \(l\) and input of layer \(l+1\); in particular, \(X_{n}^{(L)}=y_{n}\). Here \(V_{n}^{(l)}\) is
* \(V_{n}^{(l)}=\sum\limits_{m=1}^{N_{l-1}}\omega_{nm}^{(l)}X_{m}^{(l-1)}\)
* \(e_{n}(d_{n},y_{n})\) is the error function, where \(d_{n}\) is the desired output for neuron \(n\) of the output layer.
* \(\mathcal{L}=\sum\limits_{n}^{N_{L}}e_{n}\) is the cost or loss function.
* \(E=\frac{1}{P}\sum\limits_{p}^{P}\mathcal{L}_{p}\) is the mean error function, with \(P\) the total number of training cases or the desired batch size.

### Complex-Valued Backpropagation

The loss function remains real-valued so that an empirical risk can be minimized during the learning process. Despite the architectural change for handling complex-valued inputs, the main challenge of CVNNs is the way to train such neural networks. A problem arises when implementing the learning algorithm (commonly known as backpropagation): the parameters of the network must be optimized using the gradient or any partial-derivative-based algorithm. However, standard complex derivatives only exist for the so-called _holomorphic_ or _analytic_ functions. Because of Liouville's theorem (discussed in Section 2.1), CVNNs are bound to use _non-holomorphic_ functions and therefore cannot be differentiated using the standard complex derivative definition. CVNNs bring in _non-holomorphic_ functions in at least two ways [16]:

* with the loss function being minimized over complex parameters;
* with the _non-holomorphic_ complex-valued activation functions.

Liouville's theorem implications were considered to be a big problem around 1990, as some researchers believed they led to the impossibility of obtaining and/or analyzing the dynamics of CVNNs [12]. However, Wirtinger calculus (discussed in Section 2.2) generalizes the notion of the complex derivative, making the _holomorphic_ function only a special case, which allowed researchers to successfully implement CVNNs. Under Wirtinger calculus, the gradient is defined as [17; 18]:

\[\nabla_{z}f=2\,\frac{\partial f}{\partial\overline{z}}\,. \tag{18}\]

Some good technical reports on applying reverse-mode autodiff in the complex domain can be found, such as [19] or [20]. However, it is left to be verified whether _Tensorflow_ correctly applies the equations mentioned in these reports. Indeed, no official documentation could be found that asserts the implementation of these equations and _Wirtinger calculus_ when using _Tensorflow_'s gradient on complex variables. This is not the case with _PyTorch_, where it is explicitly stated that Wirtinger calculus is used when computing the derivative, in the following link: pytorch.org/docs/stable/notes/autograd.html. In said link, it is also indirectly stated that "This convention matches TensorFlow's convention for complex differentiation [...]", referencing the implementation of Equation 18. We do know, however, that reverse-mode autodiff is the method used by _Tensorflow_ [21]. When reverse engineering the gradient definition of _Tensorflow_, the conclusion discussed in the official _Tensorflow_ GitHub repository issue report 3348 is that the gradient for \(f:\mathbb{C}\rightarrow\mathbb{C}\) is computed as:

\[\nabla_{z}f=\overline{\left(\frac{\partial f}{\partial z}+\frac{\partial\overline{f}}{\partial z}\right)}=2\frac{\partial\mathrm{Re}(f)}{\partial\overline{z}}\,. \tag{19}\]
For application purposes, as the loss function is real-valued, we are only interested in cases where \(f:\mathbb{C}^{n}\longrightarrow\mathbb{R}\), for which the above equation can be simplified to:

\[\nabla_{z}f=2\frac{\partial f}{\partial\overline{z}}=\left(\frac{\partial f}{\partial x}+i\frac{\partial f}{\partial y}\right)\,, \tag{20}\]

which indeed coincides with the Wirtinger calculus definition (Equation 18). For this reason, it was not necessary to implement autodiff from scratch, and _Tensorflow_'s algorithm was used instead.

Figure 4: Feed-forward neural network diagram. The mathematical equations for computing this network are explained in Appendix B, and their implementation for automatic calculation on a CPU or GPU is described in Appendices C.1 and C.2.

The analysis in the complex case is analogous to that of the real-valued backpropagation in Appendix B, with the difference that now \(\sigma:\mathbb{C}\longrightarrow\mathbb{C}\), \(e_{n}:\mathbb{C}\longrightarrow\mathbb{R}\) and \(\omega_{ij}^{(l)},X_{n}^{(l)},V_{n}^{(l)}\in\mathbb{C}\). Now, the chain rule is changed using (87), so Equation (94) changes to:

\[\frac{\partial e}{\partial\omega}=\frac{\partial e}{\partial X}\frac{\partial X}{\partial V}\frac{\partial V}{\partial\omega}+\frac{\partial e}{\partial X}\frac{\partial X}{\partial\overline{V}}\frac{\partial\overline{V}}{\partial\omega}+\frac{\partial e}{\partial\overline{X}}\frac{\partial\overline{X}}{\partial V}\frac{\partial V}{\partial\omega}+\frac{\partial e}{\partial\overline{X}}\frac{\partial\overline{X}}{\partial\overline{V}}\frac{\partial\overline{V}}{\partial\omega}\,. \tag{21}\]

Note that the upper line denotes the conjugate. All subindices have been removed for readability, but they remain the same as in Equation 94. As \(e_{n}:\mathbb{C}\longrightarrow\mathbb{R}\), then using the conjugation rule (Definition A.3):

\[\begin{split}\frac{\partial e}{\partial\overline{X}}&=\overline{\left(\frac{\partial e}{\partial X}\right)}\,,\\ \frac{\partial\overline{X}}{\partial\overline{V}}&=\overline{\left(\frac{\partial X}{\partial V}\right)}\,,\\ \frac{\partial X}{\partial\overline{V}}&=\overline{\left(\frac{\partial\overline{X}}{\partial V}\right)}\,,\end{split} \tag{22}\]

so that not all the partial derivatives must be calculated.

Focusing our attention on the derivative \(\partial V/\partial\omega\), we distinguish cases according to the layer difference between \(V\) and \(\omega\). The simplest case is when \(V_{n}^{(l)}\) belongs to the same layer as the weight. Regardless of the complex domain, \(V_{n}^{(l)}\) is still equal to \(\sum\limits_{i}^{N_{l-1}}\omega_{ni}^{(l)}X_{i}^{(l-1)}\), so the value of the derivative remains unchanged. For the weights (\(\omega\)) of the previous layer, the derivative is as follows:

\[\begin{split}\frac{\partial V_{n}^{(l)}}{\partial\omega_{jk}^{(l-1)}}&=\frac{\partial\left(\sum\limits_{i}^{N_{l-1}}\omega_{ni}^{(l)}X_{i}^{(l-1)}\right)}{\partial\omega_{jk}^{(l-1)}}\,,\\ &=\omega_{nj}^{(l)}\frac{\partial X_{j}^{(l-1)}}{\partial\omega_{jk}^{(l-1)}}\,,\\ &=\omega_{nj}^{(l)}\left[\frac{\partial X_{j}^{(l-1)}}{\partial V_{j}^{(l-1)}}\frac{\partial V_{j}^{(l-1)}}{\partial\omega_{jk}^{(l-1)}}+\frac{\partial X_{j}^{(l-1)}}{\partial\overline{V}_{j}^{(l-1)}}\frac{\partial\overline{V}_{j}^{(l-1)}}{\partial\omega_{jk}^{(l-1)}}\right].\end{split} \tag{23}\]

Now, \(V_{n}^{(l)}\) is by definition a polynomial in its arguments and therefore analytic, hence it is holomorphic as well.
Using Theorem 2.5, \(\partial\overline{V}_{j}^{(l-1)}/\partial\omega_{jk}^{(l-1)}=0\), so the second term can be removed. The equation therefore simplifies to the following:

\[\frac{\partial V_{n}^{(l)}}{\partial\omega_{jk}^{(l-1)}}=\omega_{nj}^{(l)}\frac{\partial X_{j}^{(l-1)}}{\partial V_{j}^{(l-1)}}\frac{\partial V_{j}^{(l-1)}}{\partial\omega_{jk}^{(l-1)}}=\omega_{nj}^{(l)}\frac{\partial X_{j}^{(l-1)}}{\partial V_{j}^{(l-1)}}X_{k}^{(l-2)}\,. \tag{24}\]

For the rest of the cases, where the layers are farther apart, the equation is as follows:

\[\begin{split}\frac{\partial V_{n}^{(l)}}{\partial\omega_{jk}^{(h)}}&=\frac{\partial\left(\sum\limits_{i}^{N_{l-1}}\omega_{ni}^{(l)}X_{i}^{(l-1)}\right)}{\partial\omega_{jk}^{(h)}}\,,\\ &=\sum\limits_{i}^{N_{l-1}}\omega_{ni}^{(l)}\left[\frac{\partial X_{i}^{(l-1)}}{\partial V_{i}^{(l-1)}}\frac{\partial V_{i}^{(l-1)}}{\partial\omega_{jk}^{(h)}}+\frac{\partial X_{i}^{(l-1)}}{\partial\overline{V}_{i}^{(l-1)}}\frac{\partial\overline{V}_{i}^{(l-1)}}{\partial\omega_{jk}^{(h)}}\right],\end{split} \tag{25}\]

where \(h\leq l-2\). Therefore, based on Equations 25 and 24, the final equation remains as follows:

\[\frac{\partial V_{n}^{(l)}}{\partial\omega_{jk}^{(h)}}=\begin{cases}X_{j}^{(l-1)}&h=l\,,\\ \omega_{nj}^{(l)}\dfrac{\partial X_{j}^{(l-1)}}{\partial V_{j}^{(l-1)}}\dfrac{\partial V_{j}^{(l-1)}}{\partial\omega_{jk}^{(l-1)}}&h=l-1\,,\\ \sum_{i}^{N_{l-1}}\omega_{ni}^{(l)}\Bigg{[}\dfrac{\partial X_{i}^{(l-1)}}{\partial V_{i}^{(l-1)}}\dfrac{\partial V_{i}^{(l-1)}}{\partial\omega_{jk}^{(h)}}+\dfrac{\partial X_{i}^{(l-1)}}{\partial\overline{V}_{i}^{(l-1)}}\dfrac{\partial\overline{V}_{i}^{(l-1)}}{\partial\omega_{jk}^{(h)}}\Bigg{]}&h\leq l-2\,.\end{cases} \tag{26}\]

Using the property that \(\partial\overline{V}/\partial\omega=\overline{\left(\partial V/\partial\overline{\omega}\right)}\) and the distributive properties of the conjugate, the following equation can be derived:

\[\frac{\partial\overline{V}_{n}^{(l)}}{\partial\omega_{jk}^{(h)}}=\begin{cases}0&h=l\,,\\ \overline{\omega}_{nj}^{(l)}\dfrac{\partial\overline{X}_{j}^{(l-1)}}{\partial V_{j}^{(l-1)}}\dfrac{\partial V_{j}^{(l-1)}}{\partial\omega_{jk}^{(l-1)}}&h=l-1\,,\\ \sum_{i}^{N_{l-1}}\overline{\omega}_{ni}^{(l)}\Bigg{[}\dfrac{\partial\overline{X}_{i}^{(l-1)}}{\partial V_{i}^{(l-1)}}\dfrac{\partial V_{i}^{(l-1)}}{\partial\omega_{jk}^{(h)}}+\dfrac{\partial\overline{X}_{i}^{(l-1)}}{\partial\overline{V}_{i}^{(l-1)}}\dfrac{\partial\overline{V}_{i}^{(l-1)}}{\partial\omega_{jk}^{(h)}}\Bigg{]}&h\leq l-2\,.\end{cases} \tag{27}\]

Using Equations 26 and 27, we can calculate all possible values of \(\partial V/\partial\omega\) and \(\partial\overline{V}/\partial\omega\). Once the loss and the activation function are defined, backpropagation can also be performed in the complex plane using Equation 21.

#### 2.4.1 Hansch and Hellwich definition

Ronny Hansch and Olaf Hellwich [22] followed a similar approach for the general equations of complex neural networks using the complex chain rule. Using (21), they define \(X\) and \(V\) to be from the same layer as the weight, instead of \(e\). By doing so, and using the fact that \(\partial\overline{V}_{j}^{(l-1)}/\partial\omega_{jk}^{(l-1)}=0\), two terms are deleted.
In conjunction with the complex equivalent of Equation 95, Equation 21 is simplified to:

\[\begin{split}\frac{\partial e_{n}}{\partial\omega_{ji}^{(l)}}&=\frac{\partial e_{n}}{\partial X_{i}^{(l)}}\frac{\partial X_{i}^{(l)}}{\partial V_{i}^{(l)}}\frac{\partial V_{i}^{(l)}}{\partial\omega_{ji}^{(l)}}+\frac{\partial e_{n}}{\partial\overline{X}_{i}^{(l)}}\frac{\partial\overline{X}_{i}^{(l)}}{\partial V_{i}^{(l)}}\frac{\partial V_{i}^{(l)}}{\partial\omega_{ji}^{(l)}}\,,\\ &=\frac{\partial e_{n}}{\partial X_{i}^{(l)}}\frac{\partial X_{i}^{(l)}}{\partial V_{i}^{(l)}}X_{j}^{(l-1)}+\frac{\partial e_{n}}{\partial\overline{X}_{i}^{(l)}}\frac{\partial\overline{X}_{i}^{(l)}}{\partial V_{i}^{(l)}}X_{j}^{(l-1)}\,.\end{split} \tag{28}\]

Now, instead of making the analysis for \(\partial V/\partial\omega\), the equivalent analysis will be made for \(\partial e/\partial X\). The case with \(l=L\) is the trivial one, since its value depends directly on the chosen error function. For \(l=L-1\), the following equality applies:

\[\frac{\partial e_{n}}{\partial X_{i}^{(L-1)}}=\frac{\partial e_{n}}{\partial V_{n}^{(L)}}\frac{\partial V_{n}^{(L)}}{\partial X_{i}^{(L-1)}}+\frac{\partial e_{n}}{\partial\overline{V}_{n}^{(L)}}\frac{\partial\overline{V}_{n}^{(L)}}{\partial X_{i}^{(L-1)}}\,. \tag{29}\]

However, as \(V_{n}^{(L)}=\sum\limits_{i}\omega_{ni}^{(L)}X_{i}^{(L-1)}\), its derivatives are:

\[\begin{split}\frac{\partial V_{n}^{(l+1)}}{\partial X_{i}^{(l)}}&=\omega_{ni}^{(l+1)}\,,\\ \frac{\partial\overline{V}_{n}^{(l+1)}}{\partial X_{i}^{(l)}}&=0\,.\end{split} \tag{30}\]

For that reason, the second term of Equation 29 is deleted, and using the chain rule again we have:

\[\begin{split}\frac{\partial e_{n}}{\partial X_{i}^{(L-1)}}&=\frac{\partial e_{n}}{\partial V_{n}^{(L)}}\frac{\partial V_{n}^{(L)}}{\partial X_{i}^{(L-1)}}\,,\\ &=\frac{\partial e_{n}}{\partial X_{n}^{(L)}}\frac{\partial X_{n}^{(L)}}{\partial V_{n}^{(L)}}\frac{\partial V_{n}^{(L)}}{\partial X_{i}^{(L-1)}}+\frac{\partial e_{n}}{\partial\overline{X}_{n}^{(L)}}\frac{\partial\overline{X}_{n}^{(L)}}{\partial V_{n}^{(L)}}\frac{\partial V_{n}^{(L)}}{\partial X_{i}^{(L-1)}}\,,\\ &=\frac{\partial e_{n}}{\partial X_{n}^{(L)}}\frac{\partial X_{n}^{(L)}}{\partial V_{n}^{(L)}}\omega_{in}^{(L)}+\frac{\partial e_{n}}{\partial\overline{X}_{n}^{(L)}}\frac{\partial\overline{X}_{n}^{(L)}}{\partial V_{n}^{(L)}}\omega_{in}^{(L)}\,,\end{split} \tag{31}\]

where the partial derivatives depend on the definition of the loss and activation functions.
For the general case of \(l\leq L-2\), the derivation is similar to the previous one:

\[\begin{split}\frac{\partial e_{n}}{\partial X_{i}^{(l)}}&=\sum_{k}\frac{\partial e_{n}}{\partial V_{k}^{(l+1)}}\frac{\partial V_{k}^{(l+1)}}{\partial X_{i}^{(l)}}+\frac{\partial e_{n}}{\partial\overline{V}_{k}^{(l+1)}}\frac{\partial\overline{V}_{k}^{(l+1)}}{\partial X_{i}^{(l)}}\,,\\ &=\sum_{k}\frac{\partial e_{n}}{\partial V_{k}^{(l+1)}}\frac{\partial V_{k}^{(l+1)}}{\partial X_{i}^{(l)}}\,,\\ &=\sum_{k}\frac{\partial e_{n}}{\partial V_{k}^{(l+1)}}\omega_{ik}^{(l+1)}\,,\\ &=\sum_{k}\frac{\partial e_{n}}{\partial X_{k}^{(l+1)}}\frac{\partial X_{k}^{(l+1)}}{\partial V_{k}^{(l+1)}}\omega_{ik}^{(l+1)}+\frac{\partial e_{n}}{\partial\overline{X}_{k}^{(l+1)}}\frac{\partial\overline{X}_{k}^{(l+1)}}{\partial V_{k}^{(l+1)}}\omega_{ik}^{(l+1)}\,.\end{split} \tag{32}\]

Therefore, \(\partial e/\partial X\) can be defined as:

\[\frac{\partial e_{n}}{\partial X_{i}^{(l)}}=\begin{cases}\dfrac{\partial e_{n}}{\partial X_{i}^{(L)}}\,,&l=L\,,\\ \dfrac{\partial e_{n}}{\partial X_{n}^{(L)}}\dfrac{\partial X_{n}^{(L)}}{\partial V_{n}^{(L)}}\omega_{in}^{(L)}+\dfrac{\partial e_{n}}{\partial\overline{X}_{n}^{(L)}}\dfrac{\partial\overline{X}_{n}^{(L)}}{\partial V_{n}^{(L)}}\omega_{in}^{(L)}\,,&l=L-1\,,\\ \sum_{k}\left(\dfrac{\partial e_{n}}{\partial X_{k}^{(l+1)}}\dfrac{\partial X_{k}^{(l+1)}}{\partial V_{k}^{(l+1)}}\omega_{ik}^{(l+1)}+\dfrac{\partial e_{n}}{\partial\overline{X}_{k}^{(l+1)}}\dfrac{\partial\overline{X}_{k}^{(l+1)}}{\partial V_{k}^{(l+1)}}\omega_{ik}^{(l+1)}\right)\,,&l\leq L-2\,.\end{cases} \tag{33}\]

As \(e_{n}:\mathbb{C}\longrightarrow\mathbb{R}\), then applying (A.3):

\[\frac{\partial e_{n}}{\partial\overline{X}_{i}^{(l)}}=\overline{\left(\frac{\partial e_{n}}{\partial X_{i}^{(l)}}\right)}\,. \tag{34}\]

Using this latest equality with Equations 28 and 33, the backpropagation algorithms are fully defined.

## 3 Complex-Valued layers

CVNNs, as opposed to conventional RVNNs, possess complex-valued input, which allows working with imaginary data without any pre-processing needed to cast the values to the real-valued domain. Each layer of the complex network operates analogously to a real-valued layer, with the difference that its operations are in the complex domain (addition, multiplication, convolution, etc.), with trainable parameters being complex-valued (weights, bias, kernels, etc.). Activation functions are also defined in the complex domain, so that \(f:\mathbb{C}\rightarrow\mathbb{C}\), and will be described in Section 4. A wide variety of complex layers is supported by the library, and the full list can be found in complex-valued-neural-networks.rtfd.io/en/latest/layers.html.

Some layers, such as dense layers, can be extended naturally, as addition and multiplication are defined in the complex domain: just by making the neurons complex-valued, their behavior follows directly. The same analogy can be made for convolutional layers, since working in the complex domain instead of the real plane does not change the resolution of the image, so there is no reason to increase the kernel size or change the stride. Special care must be taken when implementing ComplexDropout: applying it to the real and imaginary parts separately will result in unexpected behavior, as the dropped units will not coincide, e.g., one might mask the real part while still using the imaginary part for the computation.
This, however, was taken into account for the layer implementation. The usage of this layer is analogous to the _Tensorflow_ Dropout layer, which also uses the boolean training parameter indicating whether the layer should behave in training mode (applying dropout) or in inference mode (doing nothing). Other layers, such as ComplexFlatten or ComplexInput, needed to be implemented, as the _Tensorflow_ equivalents Flatten and Input cast the output to float.

### Complex Pooling Layers

Pooling layers are not so straightforward. Complex values are not ordered like real values, meaning there is no notion of a maximum value, rendering it impossible to implement a Max Pooling layer directly on the input. Reference [23] proposes to use the norm of the complex number for this comparison, and this method is used for the toolbox implementation of the ComplexMaxPooling layer.

Average Pooling opens the possibility of other interpretations as well. Even if, for computing the average, we could add the complex numbers and divide by the total number of terms as one would do with real numbers, another option arises, known as the circular mean. The circular or angular mean is designed for angles and similar cyclic quantities. When computing the average of \(2+0\,i\) and \(0+1\,i\), the conventional complex mean will yield \(1+0.5\,i\), whose angle is \(\arctan(1/2)\), although the input vectors had angles of \(\pi/2\) and \(0\) (Figure 5a). The circular mean consists of normalizing the values before computing the mean, which yields \(0.5+0.5\,i\), having an angle of \(\pi/4\) (Figure 5b). The circular mean will have a norm inside the unit circle: it will be on the unit circle if all angles are equal, and it will be null if the angles are equally distributed. Another option for computing the mean is to use the circular mean definition for the angle and then compute the arithmetic mean of the norm separately, as represented in Figure 6. All these options were implemented for the average pooling, so the user can choose the case that fits their data best.

Figure 5: Mean example for two complex values.

Figure 6: Circular mean with norm average.

### Complex Upsampling layers

Upsampling techniques, which enable the enlargement of 2D images, were implemented. In particular, 3 techniques were applied, all of them documented in complex-valued-neural-networks.readthedocs.io/en/latest/layers/complex_upsampling.html.

* **Complex Upsampling** Upsampling layer for 2D inputs. The upsampling can be done using nearest neighbor or bilinear interpolation. There are at least two possible ways to implement the upsampling method, depending on whether the corners are aligned or not (see Figure 7). Our implementation does not align corners.
* **Complex Transposed Convolution** Sometimes called **Deconvolution**, although it does not compute the inverse of a convolution [25].
* **Complex Un-Pooling** Inspired by the functioning of Max Un-pooling explained in Reference [26]. The max un-pooling technique receives the maxed locations of a previous Max Pooling layer and then expands an image by placing the input values in those locations and filling the rest with zeros, as shown in Figure 8. Complex un-pooling locations, however, are not forced to be the output of a max pooling layer.

Figure 7: Upsampling alignment options, extracted from [24].

Figure 8: Max Unpooling graphical explanation, extracted from [26].
However, in order to use it, we also implemented a layer class named ComplexMaxPooling2DWithArgmax, which returns a tuple of tensors: the max pooling output and the maxed locations, to be used as input of the un-pooling layer. There are two main ways to use the unpooling layer: either by giving the expected output shape or by using the upsampling_factor parameter.

```
from cvnn.layers import ComplexUnPooling2D, complex_input, ComplexMaxPooling2DWithArgmax
import tensorflow as tf
import numpy as np

x = get_img()  # Gets an image for the example
# Removes the batch size from the shape
inputs = complex_input(shape=x.shape[1:])
# Applies max-pooling and also gets argmax
max_pool_o, max_arg = ComplexMaxPooling2DWithArgmax(strides=1, data_format="channels_last", name="argmax")(inputs)
# Applies the Unpooling
outputs = ComplexUnPooling2D(x.shape[1:])([max_pool_o, max_arg])

model = tf.keras.Model(inputs=inputs, outputs=outputs, name="pooling_model")
model.summary()
model(x)
```

It is possible to work with variable size images using a partially defined tensor, for example, shape=(None, None, 3). In this case, the second option (using upsampling_factor) is the only way to deal with them, in the following manner.

```
# Input is an unknown size RGB image
inputs = complex_input(shape=(None, None, 3))
max_pool_o, pool_argmax = ComplexMaxPooling2DWithArgmax(strides=1, data_format="channels_last", name="argmax")(inputs)
outputs = ComplexUnPooling2D(upsampling_factor=2)([max_pool_o, pool_argmax])

model = tf.keras.Model(inputs=inputs, outputs=outputs, name="pooling_model")
model.summary()
model(x)
```

All the layers discussed in this Section have a dtype parameter, which defaults to tf.complex64; however, if tf.float32 or tf.float64 is used, the layer behaviour should be arithmetically equivalent to the corresponding _Tensorflow_ layer, allowing for fast tests and comparisons. In some cases, for example ComplexFlatten, this parameter has no effect, as the layer can already deal with both complex- and real-valued input. Also, a method get_real_equivalent is implemented, which returns a new layer object with a real-valued dtype and allows an output_multiplier parameter in order to re-dimension the real network if needed. This is used to obtain an equivalent real-valued network as described in [27].

## 4 Complex Activation functions

One of the essential characteristics of CVNNs is their activation functions, which should be non-linear and complex-valued. An activation function is usually chosen to be piece-wise smooth to facilitate the computation of the gradient. The complex domain widens the possibilities for designing an activation function, but probably the most natural way is to extend a real-valued activation function to the complex domain. Our toolbox currently supports a wide range of complex activation functions, listed in the act_dispatcher dictionary:
```
act_dispatcher = {
    'linear': linear,
    # Complex input, real output
    'cast_to_real': cast_to_real,
    'convert_to_real_with_abs': convert_to_real_with_abs,
    'sigmoid_real': sigmoid_real,
    'softmax_real_with_abs': softmax_real_with_abs,
    'softmax_real_with_avg': softmax_real_with_avg,
    'softmax_real_with_mult': softmax_real_with_mult,
    'softmax_of_softmax_real_with_mult': softmax_of_softmax_real_with_mult,
    'softmax_of_softmax_real_with_avg': softmax_of_softmax_real_with_avg,
    'softmax_real_with_polar': softmax_real_with_polar,
    # Phasor networks
    'georgiou_cdbp': georgiou_cdbp,
    'mvn_activation': mvn_activation,
    'complex_signum': complex_signum,
    # Type A (cartesian)
    'cart_sigmoid': cart_sigmoid,
    'cart_elu': cart_elu,
    'cart_exponential': cart_exponential,
    'cart_hard_sigmoid': cart_hard_sigmoid,
    'cart_relu': cart_relu,
    'cart_leaky_relu': cart_leaky_relu,
    'cart_selu': cart_selu,
    'cart_softplus': cart_softplus,
    'cart_softsign': cart_softsign,
    'cart_tanh': cart_tanh,
    'cart_softmax': cart_softmax,
    # Type B (polar)
    'pol_tanh': pol_tanh,
    'pol_sigmoid': pol_sigmoid,
    'pol_selu': pol_selu,
    # Elementary Transcendental Functions (ETF)
    'etf_circular_tan': etf_circular_tan,
    'etf_circular_sin': etf_circular_sin,
    'etf_inv_circular_atan': etf_inv_circular_atan,
    'etf_inv_circular_asin': etf_inv_circular_asin,
    'etf_inv_circular_acos': etf_inv_circular_acos,
    'etf_circular_tanh': etf_circular_tanh,
    'etf_circular_sinh': etf_circular_sinh,
    'etf_inv_circular_atanh': etf_inv_circular_atanh,
    'etf_inv_circular_asinh': etf_inv_circular_asinh,
    # ReLU
    'modrelu': modrelu,
    'crelu': crelu,
    'zrelu': zrelu,
    'complex_cardioid': complex_cardioid
}
```

Listing 10: The act_dispatcher dictionary.

Two main types of complex-valued activation functions are commonly defined:

* Type-A: \(\sigma_{A}(z)=\sigma_{\rm Re}\left({\rm Re}(z)\right)+i\,\sigma_{\rm Im}\left({\rm Im}(z)\right)\),
* Type-B: \(\sigma_{B}(z)=\sigma_{r}(|z|)\,\exp\left(i\,\sigma_{\phi}({\rm arg}(z))\right)\),

where \(\sigma_{\rm Re},\sigma_{\rm Im},\sigma_{r},\sigma_{\phi}\) are all real-valued functions2. The \({\rm Re}\) and \({\rm Im}\) operators extract the real and imaginary parts of the input, respectively, and the \({\rm arg}\) operator gives the phase of the input. Note that in Type-A, the real and imaginary parts of an input go through nonlinear functions separately, and in Type-B, the magnitude and phase go through nonlinear functions separately.

Footnote 2: Although not with the same notation, these two types of complex-valued activation functions are also discussed in Section 3.3 of [3]

The most popular activation functions, sigmoid, hyperbolic tangent (tanh) and Rectified Linear Unit (ReLU), are extensible using the Type-A or Type-B approach, although tanh is already defined on the complex domain, so its transformation is probably less interesting. Other complex activation functions are supported by our toolbox, including elementary transcendental functions (complex-valued-neural-networks.rtfd.io/en/latest/activations/etf.html) [32; 33] and phasor activation functions (complex-valued-neural-networks.rtfd.io/en/latest/activations/mvn_activation.html) such as the multi-valued neuron (MVN) activation function [34; 35] or Georgiou CDBP [36].

### Complex Rectified Linear Unit (ReLU)

Normally, \(\sigma_{\phi}\) is left as a linear mapping [31; 3].
Under this condition, using the Rectified Linear Unit (ReLU) activation function for \(\sigma_{r}\) is of limited interest, since it makes \(\sigma_{B}\) converge to a linear function, limiting Type-B ReLU usage. Nevertheless, ReLU has increased in popularity over the others, as it has been proved to learn several times faster than equivalents with saturating neurons [37]. Consequently, probably the most common complex-valued activation function is the Type-A ReLU activation function, more often referred to as Complex-ReLU or \(\mathbb{C}\mathrm{ReLU}\) [7; 38]. However, several other ReLU adaptations to the complex domain have been defined throughout the literature, such as zReLU [39], defined as

\[{\rm zReLU}(z)=\begin{cases}z&\text{if }0<{\rm arg}(z)<\pi/2\,,\\ 0&\text{otherwise}\,,\end{cases} \tag{35}\]

which lets the output equal the input only if both real and imaginary parts are positive. Another popular adaptation is modReLU [40], defined as

\[{\rm modReLU}(z)=\begin{cases}{\rm ReLU}\left(|z|+b\right)\frac{z}{|z|}&\text{if }|z|\geq b\,,\\ 0&\text{otherwise}\,,\end{cases} \tag{36}\]

where \(b\) is an adaptable parameter defining a radius within which the output of the function is 0. This function provides a point-wise non-linearity that affects only the absolute value of a complex number. Another extension of ReLU is the complex cardioid proposed by [41]:

\[\sigma\big{(}z\big{)}=\frac{\left(1+\cos\left({\rm arg}(z)\right)\right)z}{2}\,. \tag{37}\]

This function maintains the phase information while attenuating the magnitude based on the phase itself. These last three activation functions (cardioid, zReLU and modReLU) were analyzed and compared against each other in [28]. The discussed variants were implemented in the toolbox and documented, as usual, at complex-valued-neural-networks.rtfd.io/en/latest/activations/relu.html.

### Output layer activation function

The image domain of the output layer depends on the set of data labels. For classification tasks, real-valued integers or binary numbers are frequently used to label each class. For these cases, one option would be to cast the labels to the complex domain as done in [23], where a transformation is applied to a label \(c\in\mathbb{R}\) like \(T:c\to c+i\,c\). The second option is to use an activation function \(\sigma:\mathbb{C}\rightarrow\mathbb{R}\) in the output layer. A popular real-valued activation used for classification tasks is the _softmax_ function [42] (normalized exponential), which maps the magnitude to \([0,1]\), so the image domain is homogeneous to a probability. There are several options for transforming this function to accept complex input and still have its image in \([0;1]\). These options include performing an average of the magnitudes \(\sigma_{\mathrm{Re}},\sigma_{\mathrm{Im}}\) or \(\sigma_{r},\sigma_{\phi}\), using only one of the magnitudes such as \(\sigma_{r}\), or applying the real-valued _softmax_ to either the addition or the multiplication of \(\sigma_{\mathrm{Re}},\sigma_{\mathrm{Im}}\) or \(\sigma_{r},\sigma_{\phi}\), among other options. Most of these variants are implemented in the library detailed in this work and documented at complex-valued-neural-networks.rtfd.io/en/latest/activations/real_output.html.

## 5 Complex-compatible Loss functions

For CVNNs, the loss or cost function to minimize must have a real-valued output, as one cannot look for the minimum of two complex numbers.
If the application is classification or semantic segmentation (as it is in all the case studies of this work), there are a few options on what to do. Some loss functions support this naturally. Reference [43] compares the performance of different types of complex-input-compatible loss functions. If the loss function to be used does not support complex-valued input, a popular option is to manage this through the output activation function, as explained in the previous Section 4.2. However, a second option for non-complex-compatible loss functions, such as _categorical cross-entropy_, is to compare both the real and imaginary parts of the prediction independently with the labels and compute the loss function as an average of both results. Reference [38], for example, defines the complex average cross-entropy as:

\[\mathcal{L}^{ACE}=\frac{1}{2}\left[\mathcal{L}^{CCE}\left(\mathrm{Re}(y),d\right)+\mathcal{L}^{CCE}\left(\mathrm{Im}(y),d\right)\right], \tag{38}\]

where \(\mathcal{L}^{ACE}\) is the complex average cross-entropy and \(\mathcal{L}^{CCE}\) is the well-known categorical cross-entropy; \(y\) is the network's predicted output, and \(d\) is the corresponding ground truth or desired output. For real-valued output, \(\mathcal{L}^{ACE}=\mathcal{L}^{CCE}\). This function was implemented in the published code alongside other variants, such as multiplying each class by a weight for imbalanced classes or ignoring unlabeled data. All these versions are documented at complex-valued-neural-networks.rtfd.io/en/latest/losses.html.

When the desired output is already complex-valued (regression tasks), more natural definitions can be used, such as the one proposed by [29], where the loss is defined as

\[\mathcal{L}=\frac{1}{2}\sum_{k}e_{k}\overline{e_{k}}\,, \tag{39}\]

where \(e_{k}(y_{k},d_{k})\) is a complex error computed from \(y_{k}\) and \(d_{k}\), such as a subtraction.

## 6 Complex Batch Normalization

The complex Batch Normalization (BN) was adapted from the real-valued BN technique by Reference [7]. For normalizing a complex vector, we will treat the problem as a 2D vector instead of working in the complex domain, so that \(z=a+i\,b\in\mathbb{C}\longrightarrow\mathbf{x}=(a,b)\in\mathbb{R}^{2}\). To normalize a complex variable, we need to compute

\[\mathbf{o}=\hat{\mathbf{\Sigma}}^{-\frac{1}{2}}(\mathbf{x}-\hat{\mathbf{\mu}})\,, \tag{40}\]

where \(\mathbf{o}\) is the normalized output, \(\hat{\mathbf{\mu}}\) is the estimate of the mean \(\mathbb{E}[\mathbf{x}]\), and \(\hat{\mathbf{\Sigma}}\in\mathbb{R}^{2\times 2}\) is the estimated covariance matrix of \(\mathbf{x}\), so that

\[\hat{\mathbf{\Sigma}}=\left[\begin{array}{cc}\Sigma_{rr}&\Sigma_{ri}\\ \Sigma_{ir}&\Sigma_{ii}\end{array}\right]=\left[\begin{array}{cc}\mathrm{Cov}(\mathrm{Re}(x)\mathrm{Re}(x))&\mathrm{Cov}(\mathrm{Re}(x)\mathrm{Im}(x))\\ \mathrm{Cov}(\mathrm{Im}(x)\mathrm{Re}(x))&\mathrm{Cov}(\mathrm{Im}(x)\mathrm{Im}(x))\end{array}\right]. \tag{41}\]

During the batch normalization layer initialization, two variables \(\mathbf{\Sigma}^{\prime}\in\mathbb{R}^{2\times 2}\) (moving variance) and \(\mathbf{\mu}^{\prime}\in\mathbb{R}^{2}\) (moving mean) are initialized. By default, \(\mathbf{\Sigma}^{\prime}=\mathbf{I}/\sqrt{2}\) and \(\mathbf{\mu}^{\prime}\) is initialized to zero. During the training phase, \(\hat{\mathbf{\Sigma}}\) and \(\hat{\mathbf{\mu}}\) are computed on the innermost dimension of the training input batch (for multi-dimensional inputs where \(z\in\mathbb{C}^{N}\to x\in\mathbb{R}^{N\times 2}\)).
The output of the layer is then computed as in Equation 40. The moving variance and moving mean are iteratively updated using the following rule:

\[\mathbf{\mu}^{\prime}_{k+1}=m\,\mathbf{\mu}^{\prime}_{k}+(1-m)\,\hat{\mathbf{\mu}}_{k}\,, \tag{42}\]
\[\mathbf{\Sigma}^{\prime}_{k+1}=m\,\mathbf{\Sigma}^{\prime}_{k}+(1-m)\,\hat{\mathbf{\Sigma}}_{k}\,, \tag{43}\]

where \(m\) is the momentum, a constant parameter set to \(0.99\) by default. During the inference phase, that is, for example, when performing a prediction, no variance nor average is computed. The output is directly calculated using the moving variance and moving average as

\[\hat{\mathbf{x}}=\mathbf{\Sigma}^{\prime-\frac{1}{2}}(\mathbf{x}-\mathbf{\mu}^{\prime})\,. \tag{44}\]

Analogously to the real-valued batch normalization, it is possible to shift and scale the output by using the trainable parameters \(\mathbf{\beta}\) and \(\mathbf{\Gamma}\). In this case, the output \(\mathbf{o}\) for both the training and prediction phases will be changed to \(\hat{\mathbf{o}}=\mathbf{\Gamma}\mathbf{o}+\mathbf{\beta}\). By default, \(\mathbf{\beta}\) is initialized to \((0,0)^{T}\in\mathbb{R}^{2}\) and

\[\mathbf{\Gamma}=\left(\begin{array}{cc}1/\sqrt{2}&0\\ 0&1/\sqrt{2}\end{array}\right)\,. \tag{45}\]

## 7 Complex Random Initialization

If we were to blindly apply a well-known random initialization algorithm to both real and imaginary parts of each trainable parameter independently, we might lose the special properties of the used initialization. This is the case, for example, for the Glorot, also known as Xavier, initializer [44]. Assuming that:

* The input features have the same variance \(\operatorname{Var}\left[X_{i}^{(0)}\right]\triangleq\operatorname{Var}\left[X^{(0)}\right],\forall i\in\left[[1;N_{0}]\right]\) and have zero mean (can be adjusted by the bias input).
* All the weights are statistically centered, and there is no correlation between real and imaginary parts.
* The weights at layer \(l\) share the same variance \(\operatorname{Var}\left[\omega_{i,j}^{(l)}\right]\triangleq\operatorname{Var}\left[\omega^{(l)}\right],\forall(i,j)\in\left[[1;N_{l+1}]\right]\times\left[[1;N_{l}]\right]\) and are statistically independent of the other layers' weights and of the inputs \(X^{(0)}\).
* We are working on the linear part of the activation function. Therefore, \(\sigma(z)\approx z\), which is the same as saying that \(\sigma(z,\overline{z})\approx z\). The partial derivatives will then be
  \[\begin{cases}\dfrac{\partial\sigma}{\partial z}\approx 1\,,\\ \dfrac{\partial\sigma}{\partial\overline{z}}\approx 0\,,\end{cases}\tag{46}\]
  so that \(\dfrac{\partial\sigma\left(V_{n}^{(l)}\right)}{\partial V_{n}^{(l)}}\approx 1\) and \(\dfrac{\partial\overline{\sigma\left(V_{n}^{(l)}\right)}}{\partial V_{n}^{(l)}}\approx 0\), with \(V_{n}^{(l)}\) defined in Section 2.3. This is an acceptable assumption when working with logistic sigmoid or tanh activation functions.

Using the notation of Section 2.3, for a dense feed-forward neural network with a bias initialized to zero (as is often the case), each neuron at hidden layer \(l\) is expressed as

\[X_{n}^{(l)}\triangleq\sigma\left(V_{n}^{(l)}\right)=\sigma\left(\sum_{m=1}^{N_{l-1}}\omega_{nm}^{(l)}X_{m}^{(l-1)}\right),\forall n\in\left[[1;N_{l}]\right]. \tag{47}\]
Since \(\sigma\) is working in its linear part, from (47) we get that

\[\operatorname{Var}\left[X_{n}^{(l)}\right]=\operatorname{Var}\left[\sum_{m=1}^{N_{l-1}}\omega_{nm}^{(l)}X_{m}^{(l-1)}\right], \tag{48}\]

where \(X_{m}^{(l-1)}\) is a combination of \(\omega^{(k)},1\leq k\leq l-1\) and \(x^{(0)}\), so it is independent of \(\omega^{(l)}\), which leads to

\[\operatorname{Var}\left[X_{n}^{(l)}\right]=\sum_{m=1}^{N_{l-1}}\operatorname{Var}\left[\omega_{nm}^{(l)}\right]\operatorname{Var}\left[X_{m}^{(l-1)}\right]. \tag{49}\]

As the weights share the same variance at each layer,

\[\begin{split}\mathrm{Var}\big{[}X_{n}^{(l)}\big{]}&=\mathrm{Var}\big{[}\omega^{(l)}\big{]}\sum_{m=1}^{N_{l-1}}\mathrm{Var}\big{[}X_{m}^{(l-1)}\big{]}\,,\\ &=\mathrm{Var}\big{[}\omega^{(l)}\big{]}\mathrm{Var}\big{[}\omega^{(l-1)}\big{]}\sum_{m=1}^{N_{l-1}}\sum_{p=1}^{N_{l-2}}\mathrm{Var}\big{[}X_{p}^{(l-2)}\big{]}\,,\\ &=N_{l-1}\mathrm{Var}\big{[}\omega^{(l)}\big{]}\mathrm{Var}\big{[}\omega^{(l-1)}\big{]}\sum_{p=1}^{N_{l-2}}\mathrm{Var}\big{[}X_{p}^{(l-2)}\big{]}\,.\end{split} \tag{50}\]

We can now obtain the variance of \(X_{n}^{(l)}\) as a function of \(x^{(0)}\) by applying Equation (50) recursively and assuming that \(X_{n}^{(0)},n=1,\ldots,N_{0}\) share the same variance:

\[\mathrm{Var}\big{[}X_{n}^{(l)}\big{]}=\mathrm{Var}\big{[}x^{(0)}\big{]}\prod_{m=1}^{l}N_{m-1}\mathrm{Var}\big{[}\omega^{(m)}\big{]}\,. \tag{51}\]

From a forward-propagation point of view, keeping a constant flow of information requires

\[\mathrm{Var}\big{[}X_{n}^{(l)}\big{]}=\mathrm{Var}\big{[}X_{n}^{(l^{\prime})}\big{]}\,,\forall 1\leq l<l^{\prime}\leq N\,, \tag{52}\]

which implies that \(N_{m-1}\mathrm{Var}\big{[}\omega^{(m)}\big{]}=1\,,\forall 1\leq m\leq N\). On the other hand,

\[\begin{split}\frac{\partial\mathcal{L}}{\partial V_{n}^{(l)}}&=\sum_{k=1}^{N_{l+1}}\frac{\partial\mathcal{L}}{\partial V_{k}^{(l+1)}}\frac{\partial V_{k}^{(l+1)}}{\partial X_{n}^{(l)}}\frac{\partial X_{n}^{(l)}}{\partial V_{n}^{(l)}}+\frac{\partial\mathcal{L}}{\partial V_{k}^{(l+1)}}\frac{\partial\overline{V_{k}^{(l+1)}}}{\partial X_{n}^{(l)}}\frac{\partial X_{n}^{(l)}}{\partial V_{n}^{(l)}}+R\,,\\ &=\sum_{k=1}^{N_{l+1}}\frac{\partial\mathcal{L}}{\partial V_{k}^{(l+1)}}\omega_{k,n}^{(l+1)}\,,\end{split} \tag{53}\]

where \(R\) denotes the remaining terms, which depend on \(\frac{\partial\overline{X_{n}^{(l)}}}{\partial V_{n}^{(l)}}\) and are assumed to be \(0\) because the activation function works in the linear regime at initialization, i.e. \(\frac{\partial X_{n}^{(l)}}{\partial V_{n}^{(l)}}\approx 1\) and \(\frac{\partial\overline{X_{n}^{(l)}}}{\partial V_{n}^{(l)}}\approx 0\). The last equality holds since \(\frac{\partial V_{k}^{(l+1)}}{\partial X_{n}^{(l)}}=\omega_{k,n}^{(l+1)}\) and \(\frac{\partial\overline{V}_{k}^{(l+1)}}{\partial X_{n}^{(l)}}=0\) (see Equation 30 for the general case). Assuming the loss variation w.r.t. the output neuron is statistically independent of any weights at any layer, we can then deduce recursively from (53):

\[\mathrm{Var}\Bigg{[}\frac{\partial\mathcal{L}}{\partial V_{n}^{(l)}}\Bigg{]}=\mathrm{Var}\Bigg{[}\frac{\partial\mathcal{L}}{\partial V_{n}^{(L)}}\Bigg{]}\prod_{m=l+1}^{L}N_{m}\mathrm{Var}\big{[}\omega^{(m)}\big{]}\,. \tag{54}\]
From a back-propagation point of view, we want to keep a constant learning flow:

\[\mathrm{Var}\Bigg{[}\frac{\partial\mathcal{L}}{\partial V_{n}^{(l)}}\Bigg{]}=\mathrm{Var}\Bigg{[}\frac{\partial\mathcal{L}}{\partial V_{n}^{(l^{\prime})}}\Bigg{]},\,\forall 1\leq l<l^{\prime}\leq N, \tag{55}\]

which implies \(N_{m}\mathrm{Var}\big{[}\omega^{(m)}\big{]}=1,\,\forall 1\leq m\leq N\). Conditions (52) and (55) cannot be satisfied at the same time (unless \(N_{l}=N_{l+1},\,\forall 1\leq l<N\), meaning all layers should have the same width), which is why Reference [44] proposes the following trade-off:

\[\mathrm{Var}\big{[}\omega^{(l)}\big{]}=\frac{2}{N_{l}+N_{l+1}}\,,\forall 1\leq l<N\,. \tag{56}\]

If the weight initialization follows a uniform distribution \(\sim U\), for the real-valued case, the initialization that has the variance stated in Equation 56 is:

\[\omega^{(l)}\sim U\Bigg{[}-\frac{\sqrt{6}}{\sqrt{N_{l}+N_{l+1}}},\frac{\sqrt{6}}{\sqrt{N_{l}+N_{l+1}}}\Bigg{]}\,. \tag{57}\]

For a complex variable with no correlation between real and imaginary parts, the variance is defined as:

\[\mathrm{Var}\left[\omega^{(l)}\right]=\mathrm{Var}\left[\mathrm{Re}\left(\omega^{(l)}\right)\right]+\mathrm{Var}\left[\mathrm{Im}\left(\omega^{(l)}\right)\right]\,. \tag{58}\]

It is therefore logical to choose both variances \(\mathrm{Var}\left[\mathrm{Re}\left(\omega^{(l)}\right)\right]\) and \(\mathrm{Var}\left[\mathrm{Im}\left(\omega^{(l)}\right)\right]\) to be equal:

\[\mathrm{Var}\left[\mathrm{Re}\left(\omega^{(l)}\right)\right]=\mathrm{Var}\left[\mathrm{Im}\left(\omega^{(l)}\right)\right]=\frac{1}{N_{l}+N_{l+1}}\,. \tag{59}\]

With this definition, the complex variable can be initialized as:

\[\mathrm{Re}\left(\omega^{(l)}\right),\,\mathrm{Im}\left(\omega^{(l)}\right)\sim U\left[-\frac{\sqrt{3}}{\sqrt{N_{l}+N_{l+1}}},\frac{\sqrt{3}}{\sqrt{N_{l}+N_{l+1}}}\right]\,. \tag{60}\]

By comparing (57) with (60), it is concluded that to correctly implement a Glorot initialization, one should divide the real and imaginary parts of the complex weight by \(\sqrt{2}\).

It is also possible to define the initialization technique from a polar perspective. The variance definition is

\[\mathrm{Var}\left[\omega^{(l)}\right]=E\left[\left|\omega^{(l)}-E\left[\omega^{(l)}\right]\right|^{2}\right]=E\left[\left|\omega^{(l)}\right|^{2}\right]-\left|E\left[\omega^{(l)}\right]\right|^{2}\,. \tag{61}\]

By choosing the phase to be uniformly distributed between \(0\) and \(2\pi\), independently of the modulus, we get \(E\left[\omega^{(l)}\right]=0\), and Equation 61 can be simplified to:

\[\mathrm{Var}\left[\omega^{(l)}\right]=E\left[\left|\omega^{(l)}\right|^{2}\right]\,. \tag{62}\]

It will therefore suffice to choose any random distribution, such as, for example, a Rayleigh distribution [45], for \(\left|\omega^{(l)}\right|=\rho\in\mathbb{R}_{0}^{+}\) so that

\[E\left[\rho^{2}\right]=\frac{2}{N_{l}+N_{l+1}}\,. \tag{63}\]

### Impact of complex-initialization equation application

A simulation was done for a complex multi-layer network with four hidden layers of sizes 128, 64, 32 and 16, respectively, with a logistic sigmoid activation function, to test the impact of this constant division by \(\sqrt{2}\) on a signal classification task. One hundred and fifty epochs were run, with one thousand runs of each model, to obtain statistically meaningful results. The task consisted of classifying different signals used in radar applications.
Temporal and time-frequency representations of each signal are shown in Figures 9 and 10, respectively. The generated signals are:

* Sweep or chirp signals. These are signals whose frequency changes over time; they are commonly applied in radar. The generated chirp signals were of two types: either linear chirps, where the frequency changes linearly over time, or S-shaped chirps, whose frequency variation gets faster at both the beginning and the end, forming an S-shaped spectrum, as can be seen in Figure 10.
* Phase-Shift Keying (PSK) modulated signals, a digital modulation process that conveys data by changing the phase of a constant-frequency reference signal. These signals were BPSK (2 phase states) and QPSK (4 phase states).
* Quadrature Amplitude Modulation (QAM) signals, which are a combination of amplitude and phase modulation. These signals were 16QAM (4 phase and amplitude states) and 64QAM (8 phase and amplitude states).

A noise signal (null), without any signal of interest, was also used, making a total of 7 different classes. Instead of using measured signals, and with the goal of facilitating the studies, the signals were randomly generated, which also allowed having the ground truth. These generated signals had the following properties:

* 256 samples per signal
* Peak-to-peak amplitude of 1
* Chirp signals with frequencies from 0.05 to 0.45 times the sample frequency
* Number of moments between 8 and 64 for the codes BPSK, QPSK, 16QAM and 64QAM

Thermal noise was added to each signal, and the result was transformed to the complex domain using the Hilbert transform, a popular transformation in signal processing applications [47], which provides an analytic mapping of a real-valued function to the complex plane.

Figure 9: Temporal amplitude examples of used signals [46].

Figure 10: Spectrogram examples of used signals [46].

The Hilbert transform has its origins in 1902, when the English mathematician Godfrey Harold Hardy (\(1877-1947\)) introduced a transformation consisting of the convolution of a real function \(f(s)\) [48; 49] with the Cauchy kernel \(1/\pi(t-s)\), which, being an improper integral, must be defined in terms of its principal value (p.v.) [50]:

\[\mathcal{H}(f)(t)=\frac{1}{\pi}\text{p.v.}\int_{-\infty}^{+\infty}\frac{f(s)}{t-s}ds=\frac{1}{\pi}\lim_{\varepsilon\to 0}\int_{\varepsilon}^{+\infty}\frac{f(t-s)-f(t+s)}{s}ds\,. \tag{64}\]

One of the most important properties of this transformation is that its repeated application allows for the recovery of the original function, with only a change of sign, that is,

\[g(t)=\mathcal{H}(f)(t)\Leftrightarrow f(t)=-\mathcal{H}(g)(t)\,. \tag{65}\]

The functions \(f\) and \(g\) that satisfy this relation are called Hilbert transform pairs, in honor of David Hilbert, who first studied them in 1904 [51]. In fact, it is for this reason that in 1924 [52, 53] Hardy graciously proposed calling transformation (64) the Hilbert transform. Some examples of Hilbert transform pairs are shown in Table 1.

Considering the definition of the Fourier transform of an absolutely integrable real function \(f(s)\),

\[\mathcal{F}(f)(t)=\int_{-\infty}^{+\infty}f(s)\,e^{-i\,t\,s}\,ds\,, \tag{66}\]

it can be shown that [50]

\[\mathcal{F}\left(\mathcal{H}(f)(t)\right)=-i\,\text{sgn}(t)\,\mathcal{F}(f)(t)\,. \tag{67}\]

This relation provides an effective way to evaluate the Hilbert transform,

\[\mathcal{H}(f)=-i\,\mathcal{F}^{-1}\left[\text{sgn}\cdot\mathcal{F}(f)\right]\,, \tag{68}\]

avoiding the issue of dealing with the singular structure of the Cauchy kernel.
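As an illustration, Equation 68 can be evaluated numerically with a discrete Fourier transform. The following minimal numpy sketch (the helper name is ours) recovers a Hilbert pair of a cosine and, applied twice, recovers the negated input as in Equation 65:

```
import numpy as np

def hilbert_transform(f):
    # Equation 68 on a sampled signal: H(f) = -i F^{-1}[ sgn * F(f) ]
    F = np.fft.fft(f)
    sgn = np.sign(np.fft.fftfreq(len(f)))
    return np.real(-1j * np.fft.ifft(sgn * F))

t = np.arange(256) / 256
f = np.cos(2 * np.pi * 8 * t)
g = hilbert_transform(f)                            # ~ sin(2*pi*8*t)
print(np.allclose(g, np.sin(2 * np.pi * 8 * t)))    # True
print(np.allclose(hilbert_transform(g), -f))        # True (Equation 65)
```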
One of the most important properties of the Hilbert transform, at least in reference to this work, is that the real and imaginary parts of a function \(h(z)\) that is analytic in the upper half of the complex plane are Hilbert transform pairs. That is to say,

\[\mathrm{Im}(h)=\mathcal{H}(\mathrm{Re}(h))\,. \tag{69}\]

In this way, the Hilbert transform provides a simple method for performing the analytic continuation to the complex plane of a real function \(f(x)\) defined on the real axis, by defining \(h(z)=f(z)+ig(z)\) with \(g(z)=\mathcal{H}(f)\). This property of the Hilbert transform was independently discovered by Ralph Kronig [54] (1904 - 1995) and Hans Kramers [55] (1894 - 1952) in 1926, in relation to the response function of physical systems, and is known as the Kramers-Kronig relation. At the same time, it began to be used in circuit analysis [56] in relation to the real and imaginary parts of the complex impedance. Through the work of pioneers such as the Nobel prize winner Dennis Gabor [57] (1900 - 1979), its application in modern signal processing is wide and varied [47].

The real and imaginary weights were initialized as described in (57) (the definition for real-valued weights, which is equivalent to multiplying the limits of Equation 60 by \(\sqrt{2}\)) and in (60) (the adaptation derived above for complex-valued weights) in order to compare them. An initialization that divides the limits of Equation 60 by two was also used, to verify that smaller values do not produce a superior result either. The results shown in Figure 11 prove the importance of the correct adaptation of the Glorot initialization to complex numbers, and how failing to do so impacts performance negatively.

### Experiment on different trade-offs

Complex numbers enable choosing trade-offs different from the one chosen by [44] (Equation 56); for example, the following trade-off can also be chosen:

\[\begin{split}\mathrm{Var}\left[\mathrm{Re}\left(\omega^{(l)}\right)\right]&=\frac{1}{2N_{l}}\,,\\ \mathrm{Var}\left[\mathrm{Im}\left(\omega^{(l)}\right)\right]&=\frac{1}{2N_{l+1}}\,,\end{split} \tag{70}\]

among other options. In a similar manner as for the Glorot (Xavier) initialization, the He weight initialization described in [58] can be deduced for complex numbers. An experiment on the same dataset as the one used for Figure 11 was run using Glorot Uniform (\(GU\)), Glorot Normal (\(GN\)), He Normal (\(HN\)), He Uniform (\(HU\)), and Glorot Uniform with the trade-off defined in (70) (\(GU_{C}\)).

\begin{table}
\begin{tabular}{c c} \(f(t)\) & \(g(t)=\mathcal{H}(f)(t)\) \\ \hline \(\sin(t)\) & \(\cos(t)\) \\ \(1/\left(t^{2}+1\right)\) & \(t/\left(t^{2}+1\right)\) \\ \(\sin(t)/t\) & \(\left[1-\cos(t)\right]/t\) \\ \(\delta(t)\) & \(1/\pi t\) \\ \end{tabular}
\end{table}
Table 1: Examples of Hilbert transform pairs.
All the other initialization techniques are documented in complex-valued-neural-networks.rtfd.io/en/latest/initializers.html and implemented as previously described. They can be used in standalone mode as follows ``` 1importcvnn 2initializer=cvnn.initializers.GlorotUniform() 3values=initialize(shape=(2,2)) 4#ReturnsacomplexGlorotUniformtensorofshape(2,2) ``` or inside a layer using an initializer object like ``` 1importcvnn 2initializer=cvnn.initializers.ComplexGlorotUniform() ``` Figure 11: Comparison of Glorot Uniform initialization scaled by different values. Figure 12: Initialization Technique Comparison *layer=cvnn.layers.Dense(input_size=23,output_size=45,weight_initializer=initializer) ``` or as a string listed within init_dispatcher like ``` 1importcvnn 2layer=cvnn.layers.Dense(input_size=23,output_size=45,weight_initializer="ComplexGlorotUniform=") ``` withinit_dispatcher being ``` 1init_dispatcher={ 2"ComplexGlorotUniform='ComplexGlorotUniform, 3"ComplexGlorotNormal":ComplexGlorotNormal, 4"ComplexHeUniform='ComplexHeUniform, 5"ComplexHeNormal':ComplexHeNormal 6} ``` ## 8 Performance on real-valued data Using the same signals of previous Sections, some experiments were perform comparing Complex-Valued MultiLayer Perceptron (CV-MLP) against a Real-Valued MultiLayer Perceptron (RV-MLP). 16000 Chirp signals were created, 8000 linear and 8000 S-shaped. \(80\%\) was used for training, \(10\%\) for validation and the remaining \(10\%\) for testing. Two models (one CV-MLP and one RV-MLP) were designed and dimensioned as shown in Table 2. We used SGD as weight optimization and \(50\%\) dropout for both models. We performed 2000 iterations (1000 for each model) with 2000 epochs each. Figure 13 shows the mean loss per epoch of both training and validation set for CV-MLP (Figure 13a) and RV-MLP (Figure 13b). The Figures show that CV-MLP presents less overfitting than the real-valued model. A histogram of both models' accuracy and loss values on the test set was plotted and can be seen in Figure 14. It is clear that CV-MLP outperforms RV-MLP classification accuracy with around \(4\%\) higher accuracy. Regarding the loss, RV-MLP obtained higher variance. Finally, the simulations were performed for all seven signals obtaining similar results as before. This time, accuracy results have a higher difference with CVNN results not intersecting with the RVNN results. Again, RV-MLP had higher loss variance. ### Discussion It is important to note that these networks were not optimized. The main issue is the _softmax_ activation function used on the absolute value of the complex output. Although it is not generally used like this, in CVNN, it might be acceptable. However, for a RVNN, this is unconventional and penalizes their performance greatly. Furthermore, both models are not equivalent, as described in Reference [27], resulting in RVNN having a higher capacity, which may increase their performance but also may result in more overfitting. For all these reasons, the simulations must be revised. However, if the general conclusions stand, this might indicate that CVNN can outperform RVNN even for real-valued applications when using an appropriate transformation such as the Hilbert transform. Figure 14: Test set histogram metrics for binary classification on Chirp signals. Figure 15: Test set histogram metrics for multi-class classification for all 7 signals. 
## 9 Conclusion

In this paper, we described in detail each adaptation needed to go from conventional real-valued neural networks to the complex domain in order to be able to implement Complex-Valued Neural Networks. Each aspect was revised, and the appropriate mathematics was developed. With this work, it should be possible to understand and implement anything from a basic Complex-Valued MultiLayer Perceptron to a Complex-Valued Convolutional Neural Network, and even a Complex-Valued Fully Convolutional Neural Network or Complex-Valued U-Network (CV-UNET). We also detailed the implementation of the published library, with examples of how to use the code and with references to the documentation to be used if needed. We showed that the library has had great success in the community through its increasing popularity. We performed simulations that verified the adaptation of the initialization technique and showed that correctly implementing this initialization is crucial for obtaining good performance. Finally, we showed that CVNNs might be of interest even for real data by using the Hilbert transform, contrary to the work of [5]. Classification accuracy improved by around \(4\%\) when using a complex network over a real one. However, these last results should be revised, as the models were not equivalent, and the RVNN might have been overly penalized.

## Appendix A Complex Identities

For further reading and properties of complex numbers, see Reference [59].

**Definition A.1** (Hermitian transpose).: Given \(\mathbf{A}\in\mathbb{C}^{n\times m},m,n\in\mathbb{N}^{+}\), the _Hermitian transpose_ of a matrix is defined as: \[\mathbf{A}^{H}=\overline{\mathbf{A}^{T}}\,. \tag{71}\] The upper line on \(\overline{\mathbf{A}}\) refers to the conjugate value of a complex number, that is, a number with an equal real part and an imaginary part equal in magnitude but opposite in sign.

**Definition A.2** (differential rule).: \[\partial f=\frac{\partial f}{\partial z}\partial z+\frac{\partial f}{\partial\overline{z}}\partial\overline{z}\,. \tag{72}\]

**Definition A.3** (conjugation rule).: \[\begin{split}\overline{\left(\frac{\partial f}{\partial z}\right)}&=\frac{\partial\overline{f}}{\partial\overline{z}}\,,\\ \overline{\left(\frac{\partial f}{\partial\overline{z}}\right)}&=\frac{\partial\overline{f}}{\partial z}\,.\end{split} \tag{73}\]

When \(f:\mathbb{C}\longrightarrow\mathbb{R}\) the expression can be simplified as: \[\begin{split}\overline{\left(\frac{\partial f}{\partial z}\right)}&=\frac{\partial f}{\partial\overline{z}}\,,\\ \overline{\left(\frac{\partial f}{\partial\overline{z}}\right)}&=\frac{\partial f}{\partial z}\,.\end{split} \tag{74}\]

**Theorem A.1**.: Given \(f(z):\mathbb{C}\longrightarrow\mathbb{C},z=x+iy\Rightarrow\exists u,v:\mathbb{R}^{2}\longrightarrow\mathbb{R}\) such that \(f(z)=u(x,y)+i\,v(x,y)\).

### Complex differentiation rules

Given \(f,g:\mathbb{C}\longrightarrow\mathbb{C}\) and \(c\in\mathbb{C}\): \[\begin{split}(f+g)^{\prime}(z_{0})&=f^{\prime}(z_{0})+g^{\prime}(z_{0})\,,\\ (f\,g)^{\prime}(z_{0})&=f^{\prime}(z_{0})\,g(z_{0})+f(z_{0})\,g^{\prime}(z_{0})\,,\\ \left(\frac{f}{g}\right)^{\prime}(z_{0})&=\frac{f^{\prime}(z_{0})\,g(z_{0})-f(z_{0})\,g^{\prime}(z_{0})}{g^{2}(z_{0})},\ \ \ g(z_{0})\neq 0\,,\\ (c\,f)^{\prime}(z_{0})&=c\,f^{\prime}(z_{0})\,.
\end{split} \tag{75}\]

### Properties of the conjugate

Given \(z,w\in\mathbb{C}\): \[\overline{z\pm w} =\overline{z}\pm\overline{w}\,, \tag{76}\] \[\overline{z\,w} =\overline{z}\,\overline{w}\,,\] \[\overline{\left(\frac{z}{w}\right)} =\frac{\overline{z}}{\overline{w}}\,,\] \[\overline{z^{n}} =\overline{z}^{n},\forall n\in\mathbb{Z}\,,\] \[|z|^{2} =z\,\overline{z}\,,\] \[\overline{\overline{z}} =z\ \left(\text{involution}\right),\] \[\overline{z} =z\Leftrightarrow z\in\mathbb{R}\,,\] \[z^{-1} =\frac{\overline{z}}{|z|^{2}},\forall z\neq 0\,.\]

### Holomorphic Function

A _holomorphic_ function is a complex-valued function of one or more complex variables that is, at every point of its domain, complex differentiable in a neighborhood of the point. This condition implies that a _holomorphic_ function is of class \(C^{\infty}\) (analytic).

**Definition A.4**.: A complex function \(f:\mathbb{C}\longrightarrow\mathbb{C}\) is _complex-differentiable_ at a point \(z_{0}\) of an open subset \(\Omega\subset\mathbb{C}\) if the following limit exists: \[f^{\prime}(z_{0})=\lim_{z\to z_{0}}\frac{f(z)-f(z_{0})}{z-z_{0}}\,. \tag{77}\]

As stated before, if a function is complex-differentiable at all points of \(\Omega\), it is called _holomorphic_[5]. The relationship between real differentiability and complex differentiability is stated in the _Cauchy-Riemann equations_.

**Theorem A.2** (Cauchy-Riemann equations).: Given \(f(x+i\,y)=u(x,y)+i\,v(x,y)\), where \(u,v:\mathbb{R}^{2}\longrightarrow\mathbb{R}\) are real-differentiable functions, \(f\) is complex-differentiable if it satisfies: \[\begin{split}\frac{\partial u}{\partial x}&=\frac{\partial v}{\partial y}\,,\\ -\frac{\partial u}{\partial y}&=\frac{\partial v}{\partial x}\,.\end{split} \tag{78}\]

Proof.: Given \(f:\mathbb{C}\longrightarrow\mathbb{C}\): \[f(x+iy)=u(x,y)+i\,v(x,y)\,, \tag{79}\] with \(u,v:\mathbb{R}^{2}\longrightarrow\mathbb{R}\) real-differentiable functions. For \(f\) to be complex-differentiable, the limit must be the same along every direction; in particular: \[f^{\prime}(z)=\lim_{\Delta x\to 0}\frac{f(z+\Delta x)-f(z)}{\Delta x}=\lim_{i\Delta y\to 0}\frac{f(z+i\,\Delta y)-f(z)}{i\,\Delta y}\,. \tag{80}\] Substituting (79) into (80) and doing some algebra: \[f^{\prime}(z)=\frac{\partial u}{\partial x}+i\,\frac{\partial v}{\partial x}=\frac{\partial v}{\partial y}-i\,\frac{\partial u}{\partial y}\,. \tag{81}\] By comparing the real and imaginary parts of the last equation, the Cauchy-Riemann equations become evident. 

### Chain Rule

**Theorem A.3** (real multivariate chain rule with complex variable).: Given \(f:\mathbb{R}^{2}\longrightarrow\mathbb{R},z\in\mathbb{C}\) and \(x(z),y(z):\mathbb{C}\longrightarrow\mathbb{R}\): \[\frac{\partial f}{\partial z}=\frac{\partial f}{\partial x}\frac{\partial x}{\partial z}+\frac{\partial f}{\partial y}\frac{\partial y}{\partial z}\,. \tag{82}\]

This chain rule is analogous to the multivariate chain rule with real values. The need for stating this case arises from the demonstration that will be done for Equation 87.

**Theorem A.4** (complex chain rule over real and imaginary part).: Given \(f:\mathbb{C}\longrightarrow\mathbb{C},z\in\mathbb{C}\) and \(x(z),y(z):\mathbb{C}\longrightarrow\mathbb{R}\): \[\frac{\partial f}{\partial z}=\frac{\partial f}{\partial x}\frac{\partial x}{\partial z}+\frac{\partial f}{\partial y}\frac{\partial y}{\partial z}\,. \tag{83}\]

Proof.: By using theorem A.1, we can write \(f=u+i\,v\) so that: \[\frac{\partial f}{\partial z}=\frac{\partial u}{\partial z}+i\,\frac{\partial v}{\partial z}\,.
\tag{84}\] Using theorem A.3 we can apply the chain rule to \(u\) and \(v\): \[\frac{\partial f}{\partial z}=\left[\frac{\partial u}{\partial x}\frac{\partial x}{\partial z}+\frac{\partial u}{\partial y}\frac{\partial y}{\partial z}\right]+i\left[\frac{\partial v}{\partial x}\frac{\partial x}{\partial z}+\frac{\partial v}{\partial y}\frac{\partial y}{\partial z}\right]\,. \tag{85}\] Rearranging the terms leads to: \[\begin{split}\frac{\partial f}{\partial z}&=\left[\frac{\partial u}{\partial x}\frac{\partial x}{\partial z}+i\,\frac{\partial v}{\partial x}\frac{\partial x}{\partial z}\right]+\left[\frac{\partial u}{\partial y}\frac{\partial y}{\partial z}+i\,\frac{\partial v}{\partial y}\frac{\partial y}{\partial z}\right],\\ &=\left[\frac{\partial u}{\partial x}+i\,\frac{\partial v}{\partial x}\right]\frac{\partial x}{\partial z}+\left[\frac{\partial u}{\partial y}+i\,\frac{\partial v}{\partial y}\right]\frac{\partial y}{\partial z}\,,\\ &=\left[\frac{\partial(u+i\,v)}{\partial x}\right]\frac{\partial x}{\partial z}+\left[\frac{\partial(u+i\,v)}{\partial y}\right]\frac{\partial y}{\partial z}\,,\\ &=\frac{\partial f}{\partial x}\frac{\partial x}{\partial z}+\frac{\partial f}{\partial y}\frac{\partial y}{\partial z}\,.\end{split} \tag{86}\]

**Corollary A.4.1**.: Note that the proof of Theorem A.3 is the same whether \(z\in\mathbb{C}\) or \(z\in\mathbb{R}\), so that both (A.3) and (A.4) are also valid for \(z\in\mathbb{R}\).

**Definition A.5** (complex chain rule).: Given \(h,g:\mathbb{C}\longrightarrow\mathbb{C},z\in\mathbb{C}\), the chain rule in complex numbers is now given by: \[\begin{split}\frac{\partial h(g)}{\partial z}&=\frac{\partial h}{\partial g}\frac{\partial g}{\partial z}+\frac{\partial h}{\partial\overline{g}}\frac{\partial\overline{g}}{\partial z}\,,\\ \frac{\partial h(g)}{\partial\overline{z}}&=\frac{\partial h}{\partial g}\frac{\partial g}{\partial\overline{z}}+\frac{\partial h}{\partial\overline{g}}\frac{\partial\overline{g}}{\partial\overline{z}}\,.\end{split} \tag{87}\]

Proof.: As we make no assumption of \(h\) or \(g\) being _holomorphic_, we will use Wirtinger calculus (10) (keeping in mind that _holomorphic_ functions are special cases of the latter): \[\frac{\partial\left(f\circ g\right)}{\partial z}=\frac{1}{2}\left(\frac{\partial\left(f\circ g\right)}{\partial z_{\rm Re}}-i\,\frac{\partial\left(f\circ g\right)}{\partial z_{\rm Im}}\right)\,.
\tag{88}\] Using (A.4.1): \[\frac{\partial\left(f\circ g\right)}{\partial z} =\frac{1}{2}\left(\left(\frac{\partial f}{\partial g_{\mathrm{Re}}}\frac{\partial g_{\mathrm{Re}}}{\partial z_{\mathrm{Re}}}+\frac{\partial f}{\partial g_{\mathrm{Im}}}\frac{\partial g_{\mathrm{Im}}}{\partial z_{\mathrm{Re}}}\right)-i\left(\frac{\partial f}{\partial g_{\mathrm{Re}}}\frac{\partial g_{\mathrm{Re}}}{\partial z_{\mathrm{Im}}}+\frac{\partial f}{\partial g_{\mathrm{Im}}}\frac{\partial g_{\mathrm{Im}}}{\partial z_{\mathrm{Im}}}\right)\right), \tag{89}\] \[=\frac{1}{4}\left(\frac{\partial f}{\partial g_{\mathrm{Re}}}\frac{\partial(g+\overline{g})}{\partial z_{\mathrm{Re}}}-i\,\frac{\partial f}{\partial g_{\mathrm{Im}}}\frac{\partial(g-\overline{g})}{\partial z_{\mathrm{Re}}}\right) \tag{90}\] \[\qquad\qquad-i\,\frac{1}{4}\left(\frac{\partial f}{\partial g_{\mathrm{Re}}}\frac{\partial(g+\overline{g})}{\partial z_{\mathrm{Im}}}-i\,\frac{\partial f}{\partial g_{\mathrm{Im}}}\frac{\partial(g-\overline{g})}{\partial z_{\mathrm{Im}}}\right)\] \[=\frac{1}{4}\frac{\partial f}{\partial g_{\mathrm{Re}}}\left(\frac{\partial(g+\overline{g})}{\partial z_{\mathrm{Re}}}-i\,\frac{\partial(g+\overline{g})}{\partial z_{\mathrm{Im}}}\right) \tag{91}\] \[\qquad\qquad-\frac{1}{4}\frac{\partial f}{\partial g_{\mathrm{Im}}}\left(i\,\frac{\partial(g-\overline{g})}{\partial z_{\mathrm{Re}}}+\frac{\partial(g-\overline{g})}{\partial z_{\mathrm{Im}}}\right).\] Using the Wirtinger calculus definition again (Equation 10): \[\frac{\partial\left(f\circ g\right)}{\partial z}=\frac{1}{2}\left(\frac{\partial f}{\partial g_{\mathrm{Re}}}\frac{\partial(g+\overline{g})}{\partial z}-i\frac{\partial f}{\partial g_{\mathrm{Im}}}\frac{\partial(g-\overline{g})}{\partial z}\right). \tag{92}\] Using (2.4.1): \[\frac{\partial\left(f\circ g\right)}{\partial z} =\frac{1}{2}\left(\left(\frac{\partial f}{\partial g}+\frac{\partial f}{\partial\overline{g}}\right)\frac{\partial(g+\overline{g})}{\partial z}+\left(\frac{\partial f}{\partial g}-\frac{\partial f}{\partial\overline{g}}\right)\frac{\partial(g-\overline{g})}{\partial z}\right), \tag{93}\] \[=\frac{1}{2}\left(\frac{\partial f}{\partial g}\left(\frac{\partial(g+\overline{g})+\partial(g-\overline{g})}{\partial z}\right)+\frac{\partial f}{\partial\overline{g}}\left(\frac{\partial(g+\overline{g})-\partial(g-\overline{g})}{\partial z}\right)\right),\] \[=\frac{1}{2}\left(\frac{\partial f}{\partial g}\frac{\partial(2g)}{\partial z}+\frac{\partial f}{\partial\overline{g}}\frac{\partial(2\overline{g})}{\partial z}\right),\] \[=\left(\frac{\partial f}{\partial g}\frac{\partial g}{\partial z}+\frac{\partial f}{\partial\overline{g}}\frac{\partial\overline{g}}{\partial z}\right)\,.\]

## Appendix B Real Valued Backpropagation

To learn how to optimize the weight \(\omega_{ij}^{(l)}\), it is necessary to find the partial derivative of the loss function with respect to that weight. We will use the chain rule as follows: \[\frac{\partial\mathcal{L}}{\partial\omega_{ij}^{(l)}}=\sum_{n=1}^{N_{L}}\frac{\partial e_{n}}{\partial X_{n}^{(L)}}\frac{\partial X_{n}^{(L)}}{\partial V_{n}^{(L)}}\frac{\partial V_{n}^{(L)}}{\partial\omega_{ij}^{(l)}}\,. \tag{94}\] Given the three terms inside the summation, we can find the value we are looking for.
The first two partial derivatives are trivial, as \(\partial e_{n}(d_{n},y_{n})/\partial y_{n}\) exists and is nonzero, and \(\partial X_{n}^{(L)}/\partial V_{n}^{(L)}\) is the derivative of \(\sigma\). With regard to \(\partial V_{n}^{(L)}/\partial\omega_{ij}^{(l)}\), when the layer is the same for both values (\(l=L\)), the definition is trivial and the following result is obtained: \[\frac{\partial V_{i}^{(l)}}{\partial\omega_{ij}^{(l)}}=\frac{\partial\left(\sum_{\mathrm{j}}\omega_{i\mathrm{j}}^{(l)}X_{\mathrm{j}}^{(l-1)}\right)}{\partial\omega_{ij}^{(l)}}=\sum_{\mathrm{j}}\frac{\partial\left(\omega_{i\mathrm{j}}^{(l)}X_{\mathrm{j}}^{(l-1)}\right)}{\partial\omega_{ij}^{(l)}}=\begin{cases}0&\mathrm{j}\neq j\\ X_{j}^{(l-1)}&\mathrm{j}=j\end{cases}=X_{j}^{(l-1)}\,. \tag{95}\] Note the subtle difference between \(\mathrm{j}\) and \(j\). We will now define the cases where the weight and \(V_{n}\) are not from the same layer. For given \(h,l\in[0,L]\) with \(h\leq l-2\), we can define the derivative as follows: \[\begin{split}\frac{\partial V_{n}^{(l)}}{\partial\omega_{jk}^{(h)}}&=\frac{\partial\left(\sum_{i}\omega_{ni}^{(l)}X_{i}^{(l-1)}\right)}{\partial\omega_{jk}^{(h)}}\,,\\ &=\sum_{i}^{N_{l-1}}\omega_{ni}^{(l)}\frac{\partial X_{i}^{(l-1)}}{\partial\omega_{jk}^{(h)}}\,,\\ &=\sum_{i}^{N_{l-1}}\omega_{ni}^{(l)}\frac{\partial X_{i}^{(l-1)}}{\partial V_{i}^{(l-1)}}\frac{\partial V_{i}^{(l-1)}}{\partial\omega_{jk}^{(h)}}\,.\end{split} \tag{96}\] Using Equations 95 and 96, we have all cases except for \(h=l-1\), which will be: \[\begin{split}\frac{\partial V_{n}^{(l)}}{\partial\omega_{jk}^{(l-1)}}&=\frac{\partial\left(\sum_{\mathrm{j}}\omega_{n\mathrm{j}}^{(l)}X_{\mathrm{j}}^{(l-1)}\right)}{\partial\omega_{jk}^{(l-1)}}\,,\\ &=\sum_{\mathrm{j}}^{N_{l-1}}\omega_{n\mathrm{j}}^{(l)}\frac{\partial X_{\mathrm{j}}^{(l-1)}}{\partial\omega_{jk}^{(l-1)}}\,,\\ &=\omega_{nj}^{(l)}\frac{\partial X_{j}^{(l-1)}}{\partial V_{j}^{(l-1)}}\frac{\partial V_{j}^{(l-1)}}{\partial\omega_{jk}^{(l-1)}}\,.\end{split} \tag{97}\] Using the result from (95), we can get a final result for (97). To sum up, the derivative can be written as follows: \[\frac{\partial V_{n}^{(l)}}{\partial\omega_{jk}^{(h)}}=\begin{cases}X_{j}^{(l-1)}&h=l\,,\\ \omega_{nj}^{(l)}\frac{\partial X_{j}^{(l-1)}}{\partial V_{j}^{(l-1)}}\frac{\partial V_{j}^{(l-1)}}{\partial\omega_{jk}^{(l-1)}}&h=l-1\,,\\ \sum_{i}^{N_{l-1}}\omega_{ni}^{(l)}\frac{\partial X_{i}^{(l-1)}}{\partial V_{i}^{(l-1)}}\frac{\partial V_{i}^{(l-1)}}{\partial\omega_{jk}^{(h)}}&h\leq l-2\,.\end{cases} \tag{98}\] Equation 94 can then be solved by applying (98) iteratively to reduce the exponent \(L\) to the desired value \(l\). Note that \(l\leq L\) and \(L>0\).

### Benvenuto and Piazza definition

In Reference [60], another recursive definition of the backpropagation algorithm is given: \[e_{n}^{(l)}=\begin{cases}e_{n}&l=L\,,\\ \sum_{q=1}^{N_{l+1}}\omega_{qn}^{(l+1)}\delta_{q}^{(l+1)}&l<L\,,\end{cases} \tag{99}\] with \(\delta_{n}^{(l)}=e_{n}^{(l)}\sigma^{\prime}(V_{n}^{(l)})\). The derivative is then defined as: \[\frac{\partial\mathcal{L}}{\partial\omega_{nm}^{(l)}}=\delta_{n}^{(l)}\,X_{m}^{(l-1)}\,. \tag{100}\] It can be proven that (94) and (100) are equivalent.

## Appendix C Automatic Differentiation

This topic is also very well covered in Appendix D of [21]. That appendix also covers manual differentiation, symbolic differentiation, and numerical differentiation. In this Section, we will jump directly to forward-mode automatic differentiation (autodiff) and reverse-mode autodiff.
There are many approaches to explaining automatic differentiation (autodiff) [6]. Most of them assume that the derivative at a given point exists, which is a logical assumption given that the algorithm is computing the derivative itself. However, we have seen that in our case we can soften this definition and only require the _Wirtinger_ derivatives to exist. Furthermore, the requirement that the derivative exists at a point is a very strong condition that must be avoided in order to compute complex-number backpropagation. Even in the real domain, there are functions like the Rectified Linear Unit (ReLU), widely used in deep neural networks, that have no derivative at \(x=0\), and yet backpropagation is applied without a problem.

* [61] explains autodiff in a very clear and concise manner by directly defining the _dual number_ (to be explained later in this Chapter) arithmetic, but assumes that the derivative at the point exists.
* [62] is one of the few that actually talks about complex-number autodiff, but assumes that the Cauchy-Riemann equations hold.
* [63] presents the _Taylor series_ as the main idea behind forward-mode autodiff. However, a _Taylor series_ assumes that the function is infinitely differentiable at the desired point.
* [64] assumes the function is differentiable.

### Forward-mode automatic differentiation

In this Section, we will demonstrate the theory behind forward-mode autodiff in a general way: we will not ask for \(f\) to be infinitely differentiable at a point; furthermore, we will not even require it to have a first derivative. We will later extend this definition to the complex domain. To the author's knowledge, this demonstration is not present in any other Reference, although the large body of literature on this subject may suggest otherwise. After defining forward-mode autodiff, we will proceed to explain reverse-mode autodiff.

The definition of the derivative of a function \(f\) is given by: \[f^{\prime}(x)=\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}\,. \tag{101}\] We will generalize this equation to only a one-sided limit. The choice of the right- or left-sided limit is indifferent for the demonstration: \[Df(x)=\lim_{h\to 0^{\pm}}\frac{f(x+h)-f(x)}{h}\,, \tag{102}\] where \(Df\) stands for this "softened" derivative definition and \(\pm\) stands for either the left (-) or right (+) sided limit. In Equation 102, we have softened the condition needed for the derivative. In cases where the derivative of \(f\) exists, we need not worry, because (102) converges to (101) and is hence equivalent. However, in cases where the derivative does not exist because the left-sided limit does not converge to the right-sided limit (like ReLU at \(x=0\)), this definition will still render a result. From the above definition, we have: \[\begin{split} Df(x)\lim_{h\to 0^{\pm}}h&=\lim_{h\to 0^{\pm}}f(x+h)-f(x)\\ \lim_{h\to 0^{\pm}}f(x+h)&=f(x)+Df(x)\lim_{h\to 0^{\pm}}h\,. \end{split} \tag{103}\] We can now define \(\epsilon=\lim_{h\to 0}h\). In this case, \(\epsilon\) will be an infinitesimally small number. The sign of \(\epsilon\) depends on the side of the limit chosen: for example, if \(\epsilon=\lim_{h\to 0^{-}}h\), then \(\epsilon\) can be \(-0.00...01\), whereas if \(\epsilon=\lim_{h\to 0^{+}}h\), then \(\epsilon\) can be \(0.00...02\). Note that the last digit of \(\epsilon\) can be anything; it can even be more than one digit, as long as it is preceded by an "infinite" number of zeros. With this definition, the above equation can now be written as: \[f(x+\epsilon)=f(x)+Df(x)\,\epsilon\,.
\tag{104}\] Given a number \(x=a+b\,\epsilon\), forward-mode automatic differentiation writes this number as a tuple of two numbers in the form \(x=(a,b)\), called _dual numbers_. _Dual numbers_ are represented in memory as a pair of floats. The arithmetic of these newly defined _dual numbers_ is described in detail in [61]. We can think of dual numbers as a transformation \(T_{\epsilon}:\mathbb{R}\to\mathbb{R}^{2}\) with \(T_{\epsilon}[a+b\,\epsilon]=(a,b)\)[65]. The real number system maps isomorphically into this new space by the mapping \(x\mapsto(x,0),x\in\mathbb{R}\). To use the same notation as [61], we will call the _dual number_ space \(\mathbb{D}\), although the transform definition explained here is slightly different, and so the space \(\mathbb{D}\) is not equivalent to the space \(\mathbb{D}\) of [61]. Operations in this space can be easily defined, for example: \[\lambda(a,b)=(\lambda\,a,\lambda\,b)\,, \tag{105}\] \[(a,b)+(c,d)=(a+c,b+d)\,, \tag{106}\] \[(a,b)\,(c,d)=(a\,c,(a\,d+b\,c)+b\,d\,\epsilon)=(a\,c,a\,d+b\,c)\,. \tag{107}\] See that in Equation 107, we have approximated \((a\,c,(a\,d+b\,c)+b\,d\,\epsilon)\approx(a\,c,a\,d+b\,c)\). This is justified because the second term of the dual number is stored in memory as a float: since \(b\,d\) is a product of two bounded scalar values and \(\epsilon\) is, by definition, an infinitesimal number, the term \(b\,d\,\epsilon\) tends to zero and falls below the machine epsilon (machine precision). The existence of the additive inverse (or negative) and of the identity element for addition can easily be verified, and the same holds for multiplication. Multiplication and addition are commutative and associative. In short, the space \(\mathbb{D}\) is well defined [61]. The choice of \(\epsilon\) only affects how the transformation \(T_{\epsilon}\) is applied, but affects neither this demonstration nor the basic operations of the space \(\mathbb{D}\). If we rewrite Equation 104 in _dual number_ notation, we have: \[f\left((a,b)\right)=(f(a),b\,Df(a)[\epsilon])\,. \tag{108}\] Equation 108 means that if we find a way to compute the function we want to differentiate using dual number notation, the result will be a dual number whose first component is the value of the function at point \(a\), and whose second component is the derivative at that same point (provided \(b=1\)). Note that the above equation has widened the "soft" derivative definition to \(Df(x)[\epsilon]\), which means the derivative of \(f\) at point \(x\) in the \(\epsilon\) direction, in this case either the left- or right-sided limit. The strength of forward-mode automatic differentiation is that this result is exact [62, 6]. That is, the only inaccuracies that occur are those which appear due to rounding errors in floating-point arithmetic or due to imprecise evaluations of elementary functions. If we had infinite floating-point precision and we could define the exact value of \(f\) on the _dual number_ base, we would have the exact value of the derivative. Another virtue of forward-mode autodiff is that \(f\) can be any coded function whose symbolic expression is unknown. Forward-mode autodiff can compute any number of nested functions as long as the basic operations are well defined in _dual number_ form. Even if \(f\) makes many calls to these basic operations inside loops or conditionals (making it difficult to derive the symbolic expression), it is still possible for forward-mode autodiff to compute its derivative without any difficulty.
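To make the dual-number mechanics concrete before the examples, here is a minimal Python sketch implementing Equations 106 and 107; it is an illustration for this text, not a library implementation.

```
from dataclasses import dataclass

@dataclass
class Dual:
    a: float  # function value
    b: float  # coefficient of epsilon, i.e., the carried derivative

    def __add__(self, other):
        # (a, b) + (c, d) = (a + c, b + d), Equation 106
        return Dual(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a, b)(c, d) = (a c, a d + b c), Equation 107
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

# Derivative of f(x) = x * x + x at x = 3: seed the epsilon coefficient with 1.
x = Dual(3.0, 1.0)
f = x * x + x
print(f.a, f.b)  # 12.0 7.0, i.e., f(3) = 12 and Df(3) = 7
```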
#### C.1.1 Examples

With the goal of helping the reader better understand forward-mode autodiff, we will look at the example used in [21]: we define \(f(x,y)=x^{2}y+y+2\) and compute \(f(3+\epsilon,4)=42+24\,\epsilon\) following the logic of Figure 16. Therefore, we conclude that \(\frac{\partial f}{\partial x}(3,4)=24\).

Figure 16: Forward-mode autodiff block diagram example

**Rectified Linear Unit (ReLU)**

In another example, we consider the Rectified Linear Unit (ReLU), one of the most used activation functions in machine learning models. We can write ReLU as \(f(x)=\max(0,x)\), so that its derivative is defined as: \[f^{\prime}(x)=\begin{cases}1&x>0\\ 0&x<0\,.\end{cases} \tag{109}\] This is an example where: \[\lim_{h\to 0^{+}}\frac{f(x+h)-f(x)}{h}=1\neq\lim_{h\to 0^{-}}\frac{f(x+h)-f(x)}{h}=0\,. \tag{110}\] Therefore, its derivative is not defined at \(x=0\). The theory of why this case does not pose any problem for reaching an optimal point when doing backpropagation is outside the scope of this text. However, we will see what forward-mode automatic differentiation gives as a result. For this, we should first define, as usual, \(f\) in the _dual number_ space: \[f\left((a,b)\right)=\begin{cases}(a,b)&a>0\\ (0,b)&a=0\\ (0,0)&a<0\,.\end{cases} \tag{111}\] This definition is logical if we have chosen \(\epsilon=\lim_{h\to 0^{+}}h\), and it must be changed if the left-sided limit is chosen. We therefore compute the value of \(f\left((0,1)\right)\) to see the value of the derivative at the point \(x=0\): \[\begin{split} f(x+\epsilon)&=\max(0,0.00...01)=0.00...01=(0,1)=\left(f(0),f^{\prime}(0)\right),\\ f\left((0,1)\right)&=(0,1)\,.\end{split} \tag{112}\] We can see in Equation 112 that the forward-mode autodiff algorithm yields the result \(Df(0)[\epsilon]=1\), which is an acceptable result for this case.

#### C.1.2 Complex forward-mode automatic differentiation

With the generalization to the complex case, \(\epsilon\) now becomes complex. In the real case, we arbitrarily chose either the left- or right-sided limit; now, limitless directions (phases) are possible, affecting, of course, the result of the derivative. Table 3 shows the derivative that is calculated when different values of \(\epsilon\) are chosen. The Table can be generalized to the following equation: \[\nabla_{\epsilon}f=\lim_{h\to 0}\frac{f(z+h\,\epsilon)-f(z)}{h\,\epsilon}\,. \tag{113}\] It is important to note that the modulus of \(\epsilon\) is unimportant as long as it is small enough so that the analysis made in the last Section (Section C.1) stands.

\begin{table} \begin{tabular}{||c|c||} \hline Definition & One possibility for \(\epsilon\) \\ \hline \hline \(\lim\limits_{x\to 0}\frac{f(z+x)-f(z)}{x}\) & \(0.00...01\) \\ \hline \(\lim\limits_{y\to 0}\frac{f(z+i\,y)-f(z)}{y}\) & \(0.00...01\,i\) \\ \hline \(\lim\limits_{x\to 0,y\to 0}\frac{f(z+x-i\,y)-f(z)}{x+y}\) & \(0.00...01\,(1-i)\) \\ \hline \(\lim\limits_{x\to 0,y\to 0}\frac{f(z+x+i\,y)-f(z)}{x+y}\) & \(0.00...01\,(1+i)\) \\ \hline \end{tabular} \end{table} Table 3: Directional derivatives with respect to \(\epsilon\).

For functions where the complex derivative exists, all possibilities converge to the same value. The first and second rows of Table 3 are used in the computation of _Wirtinger calculus_, and they are not necessarily equal to the third row of the Table.

### Reverse-mode automatic differentiation

Forward-mode automatic differentiation has many useful properties. First of all, as we have seen, the existence of the derivative is not strictly required for the method to yield a result, which can be helpful when dealing with, in our case, non-holomorphic functions. It also allows finding a derivative value for any coded function, even one containing loops or conditionals, as long as the primitives are defined. It is also natural for the algorithm to deal with the chain rule.
However, taking a look at the example of Figure 16, if we now want to compute \(\frac{\partial f}{\partial y}(3,4)\), we will need to compute \(f(x,y+\epsilon)\), meaning that in order to know both \(\frac{\partial f}{\partial x}(3,4)\) and \(\frac{\partial f}{\partial y}(3,4)\) we need to run the algorithm twice. This can be prohibitively costly for neural networks, where there are sometimes even millions of trainable parameters for which we need to compute the partial derivative. Here is where reverse-mode automatic differentiation comes to the rescue, enabling us to compute all partial derivatives at once.

#### C.2.1 Examples

Here we will show how reverse-mode autodiff can compute both \(\frac{\partial f}{\partial x}(3,4)\) and \(\frac{\partial f}{\partial y}(3,4)\) of the previous forward-mode example while running the code only once.

Figure 17: Reverse-mode autodiff block diagram example

The first step is to compute \(f(3,4)\), whose intermediate values are shown on the bottom right of each node of Figure 17. Each node is labeled \(n_{i}\) for clarity, with \(i\in[1,7]\). The output is \(f(3,4)=n_{7}=42\), as expected. Now all the partial derivatives \(\frac{\partial f}{\partial n_{i}}\) are computed, starting with \(n_{7}\). Since \(n_{7}\) is the output node, \(\frac{\partial f}{\partial n_{7}}=1\). The chain rule is then used to compute the rest of the nodes by going down the graph. For example, to compute \(\frac{\partial f}{\partial n_{5}}\) we use \[\frac{\partial f}{\partial n_{5}}=\frac{\partial f}{\partial n_{7}}\frac{\partial n_{7}}{\partial n_{5}}\,, \tag{114}\] and as we previously calculated \(\frac{\partial f}{\partial n_{7}}\), we just need to compute the second term. This methodology is repeated until all nodes' partial derivatives are computed. Here is the full list of the partial derivatives:

* \(\frac{\partial f}{\partial n_{7}}=1\),
* \(\frac{\partial f}{\partial n_{6}}=\frac{\partial f}{\partial n_{7}}\frac{\partial n_{7}}{\partial n_{6}}=1\),
* \(\frac{\partial f}{\partial n_{5}}=\frac{\partial f}{\partial n_{7}}\frac{\partial n_{7}}{\partial n_{5}}=1\),
* \(\frac{\partial f}{\partial n_{4}}=\frac{\partial f}{\partial n_{5}}\frac{\partial n_{5}}{\partial n_{4}}=n_{2}=4\),
* \(\frac{\partial f}{\partial y}=\frac{\partial f}{\partial n_{2}}=\frac{\partial f}{\partial n_{5}}\frac{\partial n_{5}}{\partial n_{2}}+\frac{\partial f}{\partial n_{6}}\frac{\partial n_{6}}{\partial n_{2}}=n_{4}+1=10\),
* \(\frac{\partial f}{\partial x}=\frac{\partial f}{\partial n_{1}}=\frac{\partial f}{\partial n_{4}}\frac{\partial n_{4}}{\partial n_{1}}+\frac{\partial f}{\partial n_{4}}\frac{\partial n_{4}}{\partial n_{1}}=n_{2}\cdot n_{1}+n_{2}\cdot n_{1}=4\cdot 3+4\cdot 3=24\) (the input \(n_{1}\) enters the product \(n_{4}=n_{1}\,n_{1}\) through both of its incoming edges).

This approach has the advantage that all partial derivatives are computed at once, which allows computing the values for a very high number of trainable parameters at a much lower computational cost.
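As an illustration of this procedure, here is a minimal reverse-mode autodiff sketch in Python that reproduces the example above; the Node class and function names are assumptions made for this sketch, not the API of any particular library.

```
class Node:
    """A value in the computational graph; parents holds (parent, local_gradient) pairs."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = list(parents)
        self.grad = 0.0

def add(a, b):
    return Node(a.value + b.value, [(a, 1.0), (b, 1.0)])

def mul(a, b):
    return Node(a.value * b.value, [(a, b.value), (b, a.value)])

def backward(out):
    # Accumulate gradients in reverse topological order, so each node is
    # processed once, after all of its consumers have contributed.
    order, seen = [], set()
    def visit(node):
        if id(node) not in seen:
            seen.add(id(node))
            for parent, _ in node.parents:
                visit(parent)
            order.append(node)
    visit(out)
    out.grad = 1.0
    for node in reversed(order):
        for parent, local in node.parents:
            parent.grad += node.grad * local

x, y = Node(3.0), Node(4.0)
f = add(add(mul(mul(x, x), y), y), Node(2.0))  # f = x^2 y + y + 2
backward(f)
print(f.value, x.grad, y.grad)  # 42.0 24.0 10.0
```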
#### C.2.2 Complex reverse-mode automatic differentiation

In the following complex example, we will compute reverse-mode automatic differentiation on a real-valued function of a complex multiplication, \(f=\mathrm{Re}\left((a+i\,b)(c+i\,d)\right)+\mathrm{Im}\left((a+i\,b)(c+i\,d)\right)=a\,c-b\,d+a\,d+b\,c\). We know by definition that the derivatives, using _Wirtinger calculus_, are: \[\frac{\partial f}{\partial(a+i\,b)} =\frac{\partial f}{\partial a}+i\,\frac{\partial f}{\partial b}=(c+d)+i\,(c-d)\,, \tag{115}\] \[\frac{\partial f}{\partial(c+i\,d)} =\frac{\partial f}{\partial c}+i\,\frac{\partial f}{\partial d}=(a+b)+i\,(a-b)\,. \tag{116}\]

Figure 18 shows the block diagram of \(f\).

Figure 18: Complex reverse-mode autodiff block diagram example

Using the block diagram of Figure 18, we can compute the nodes as follows:

* \(n_{1}=a\),
* \(n_{2}=b\),
* \(n_{3}=c\),
* \(n_{4}=d\),
* \(n_{5}=n_{1}\cdot n_{3}=a\cdot c\),
* \(n_{6}=n_{2}\cdot n_{4}=b\cdot d\),
* \(n_{7}=n_{1}\cdot n_{4}=a\cdot d\),
* \(n_{8}=n_{2}\cdot n_{3}=b\cdot c\),
* \(n_{9}=n_{5}-n_{6}=a\,c-b\,d\),
* \(n_{10}=n_{7}+n_{8}=a\,d+b\,c\),
* \(f=n_{11}=n_{10}+n_{9}=a\,c-b\,d+a\,d+b\,c\).

We now perform reverse-mode backpropagation starting from the last node (\(n_{11}\)) and go back, using the previously computed values, to extract the partial derivative of \(f=n_{11}\) with respect to every node.

* \(\frac{\partial f}{\partial n_{11}}=\frac{\partial n_{11}}{\partial n_{11}}=1\),
* \(\frac{\partial n_{11}}{\partial n_{10}}=1\),
* \(\frac{\partial n_{11}}{\partial n_{9}}=1\),
* \(\frac{\partial n_{11}}{\partial n_{8}}=\frac{\partial n_{11}}{\partial n_{10}}\frac{\partial n_{10}}{\partial n_{8}}=1\),
* \(\frac{\partial n_{11}}{\partial n_{7}}=\frac{\partial n_{11}}{\partial n_{10}}\frac{\partial n_{10}}{\partial n_{7}}=1\),
* \(\frac{\partial n_{11}}{\partial n_{6}}=\frac{\partial n_{11}}{\partial n_{9}}\frac{\partial n_{9}}{\partial n_{6}}=-1\),
* \(\frac{\partial n_{11}}{\partial n_{5}}=\frac{\partial n_{11}}{\partial n_{9}}\frac{\partial n_{9}}{\partial n_{5}}=1\),
* \(\frac{\partial f}{\partial d}=\frac{\partial n_{11}}{\partial n_{4}}=\frac{\partial n_{11}}{\partial n_{7}}\frac{\partial n_{7}}{\partial n_{4}}+\frac{\partial n_{11}}{\partial n_{6}}\frac{\partial n_{6}}{\partial n_{4}}=n_{1}-n_{2}=a-b\),
* \(\frac{\partial f}{\partial c}=\frac{\partial n_{11}}{\partial n_{3}}=\frac{\partial n_{11}}{\partial n_{8}}\frac{\partial n_{8}}{\partial n_{3}}+\frac{\partial n_{11}}{\partial n_{5}}\frac{\partial n_{5}}{\partial n_{3}}=n_{2}+n_{1}=b+a\),
* \(\frac{\partial f}{\partial b}=\frac{\partial n_{11}}{\partial n_{2}}=\frac{\partial n_{11}}{\partial n_{6}}\frac{\partial n_{6}}{\partial n_{2}}+\frac{\partial n_{11}}{\partial n_{8}}\frac{\partial n_{8}}{\partial n_{2}}=n_{3}-n_{4}=c-d\),
* \(\frac{\partial f}{\partial a}=\frac{\partial n_{11}}{\partial n_{1}}=\frac{\partial n_{11}}{\partial n_{5}}\frac{\partial n_{5}}{\partial n_{1}}+\frac{\partial n_{11}}{\partial n_{7}}\frac{\partial n_{7}}{\partial n_{1}}=n_{3}+n_{4}=c+d\).

Now, if we consider \(\partial f/\partial(a+i\,b)=\partial f/\partial a+i\,\partial f/\partial b\) and replace the values we obtained in the previous list, we get the same result as in Equation 115.
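As a quick numerical sanity check of the hand-derived gradients above (an illustration, independent of any autodiff library), central finite differences reproduce \(\partial f/\partial a=c+d\) and \(\partial f/\partial b=c-d\):

```
def f(a, b, c, d):
    return a * c - b * d + a * d + b * c

a, b, c, d, h = 1.0, 2.0, 3.0, 4.0, 1e-6
dfda = (f(a + h, b, c, d) - f(a - h, b, c, d)) / (2 * h)
dfdb = (f(a, b + h, c, d) - f(a, b - h, c, d)) / (2 * h)
print(round(dfda, 6), c + d)  # 7.0 7.0   -> df/da = c + d
print(round(dfdb, 6), c - d)  # -1.0 -1.0 -> df/db = c - d
```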
2305.15270
Reversible Graph Neural Network-based Reaction Distribution Learning for Multiple Appropriate Facial Reactions Generation
Generating facial reactions in a human-human dyadic interaction is complex and highly dependent on the context since more than one facial reaction can be appropriate for the speaker's behaviour. This has challenged existing machine learning (ML) methods, whose training strategies enforce models to reproduce a specific (not multiple) facial reaction from each input speaker behaviour. This paper proposes the first multiple appropriate facial reaction generation framework that re-formulates the one-to-many mapping facial reaction generation problem as a one-to-one mapping problem. This means that we approach this problem by considering the generation of a distribution of the listener's appropriate facial reactions instead of multiple different appropriate facial reactions, i.e., 'many' appropriate facial reaction labels are summarised as 'one' distribution label during training. Our model consists of a perceptual processor, a cognitive processor, and a motor processor. The motor processor is implemented with a novel Reversible Multi-dimensional Edge Graph Neural Network (REGNN). This allows us to obtain a distribution of appropriate real facial reactions during the training process, enabling the cognitive processor to be trained to predict the appropriate facial reaction distribution. At the inference stage, the REGNN decodes an appropriate facial reaction by using this distribution as input. Experimental results demonstrate that our approach outperforms existing models in generating more appropriate, realistic, and synchronized facial reactions. The improved performance is largely attributed to the proposed appropriate facial reaction distribution learning strategy and the use of a REGNN. The code is available at https://github.com/TongXu-05/REGNN-Multiple-Appropriate-Facial-Reaction-Generation.
Tong Xu, Micol Spitale, Hao Tang, Lu Liu, Hatice Gunes, Siyang Song
2023-05-24T15:56:26Z
http://arxiv.org/abs/2305.15270v3
Reversible Graph Neural Network-based Reaction Distribution Learning for Multiple Appropriate Facial Reactions Generation ###### Abstract Generating facial reactions in a human-human dyadic interaction is complex and highly dependent on the context since more than one facial reaction can be appropriate for the speaker's behaviour. This has challenged existing machine learning (ML) methods, whose training strategies enforce models to reproduce a specific (not multiple) facial reaction from each input speaker behaviour. This paper proposes the first multiple appropriate facial reaction generation framework that re-formulates the _one-to-many mapping_ facial reaction generation problem as a _one-to-one mapping_ problem. This means that we approach this problem by considering the generation of a distribution of the listener's appropriate facial reactions instead of multiple different appropriate facial reactions, i.e., 'many' appropriate facial reaction labels are summarised as 'one' distribution label during training. Our model consists of a perceptual processor, a cognitive processor, and a motor processor. The motor processor is implemented with a novel Reversible Multi-dimensional Edge Graph Neural Network (REGNN). This allows us to obtain a distribution of appropriate real facial reactions during the training process, enabling the cognitive processor to be trained to predict the appropriate facial reaction distribution. At the inference stage, the REGNN decodes an appropriate facial reaction by using this distribution as input. Experimental results demonstrate that our approach outperforms existing models in generating more appropriate, realistic, and synchronized facial reactions. The improved performance is largely attributed to the proposed appropriate facial reaction distribution learning strategy and the use of a REGNN. The code is available at [https://github.com/TongXu-05/REGNN-Multiple-Appropriate-Facial-Reaction-Generation](https://github.com/TongXu-05/REGNN-Multiple-Appropriate-Facial-Reaction-Generation). Multiple appropriate facial reaction generation, Reversible Graph Neural Network, Facial reaction distribution learning ## I Introduction Non-verbal behaviour interaction plays a key role in human-human communication [1], with facial reactions providing important cues for understanding each other's emotional states. In dyadic interactions, a facial reaction refers to the **listener**'s non-verbal facial behaviours in response to the **speaker**'s verbal and non-verbal behaviours (e.g., facial muscle movements) [2, 3]. Previous studies [4, 5] have shown that the generation of a listener's facial reactions to a speaker's behaviour in dyadic interaction consists of three main stages: Firstly, the listener's perceptual system (e.g., ears and eyes) receives external signals expressed by the speaker, which are pre-processed before being transmitted to the brain for further analysis. Then, the cognitive processor processes the pre-processed signals by taking personalized perception bias into account, resulting in the generation of personalized reaction signals. Finally, the motor processor decodes these personalized signals to the facial muscles, producing corresponding facial reactions. In contrast to most machine learning tasks, the generation of a listener's facial reactions to a specific speaker behaviour is characterized by variability and uncertainty [6, 7], meaning that different facial reactions can be expressed by listeners in response to the same speaker behaviour.
Existing machine learning (ML)-based Facial Reaction Generation (FRG) models aim to reproduce the real facial reaction expressed under a specific context (called the "GT reaction" in this paper) in response to each given speaker behaviour. These models - including Generative Adversarial Networks (GAN) [8, 9], VQ-VAE [7], and person-specific FRG networks [10, 11] - are trained by minimizing an L1 or L2 loss between the generated and GT facial reactions. However, this training strategy creates an ill-posed problem for existing FRG models, where similar inputs (speaker behaviours) are paired with different labels (listener facial reactions), resulting in a "one-to-many mapping" problem in the training phase. This limitation makes it theoretically very challenging for existing approaches to learn good FRG models that can generate diverse, appropriate, and photo-realistic facial reactions in response to speaker behaviours.

Fig. 1: Our approach predicts a distribution representing multiple different but appropriate facial reactions from each input speaker behaviour, based on which multiple different but appropriate, realistic, and synchronized human listener facial reactions could be generated.

In this paper, we propose the first deep learning framework to generate multiple appropriate facial reactions in response to each speaker behaviour. Rather than simply reproducing the GT facial reaction expressed by the corresponding listener, our approach generates multiple different but appropriate, realistic, and synchronized facial reactions from the given context. Inspired by the theoretical framework of the human model processor [12], our approach is designed to consist of three modules: (i) a **perceptual processor** that encodes the input speaker audio and facial signals; (ii) a **cognitive processor** that predicts an appropriate facial reaction distribution from the encoded speaker audio-facial representation, which represents multiple different but appropriate facial reactions; and (iii) a reversible Graph Neural Network (GNN)-based **motor processor** that decodes an appropriate facial reaction from the learned distribution. To address the "one-to-many mapping" problem during the training phase, we propose a novel Reversible Multi-dimensional Edge Graph Neural Network (REGNN) as the motor processor. The REGNN is designed to summarize a distribution that represents all appropriate real facial reactions displayed by listeners in the training set in response to each input speaker behaviour. The summarised distribution is then used to supervise the training of the cognitive processor by enforcing it to learn an appropriate facial reaction distribution from the input speaker behaviour. This distribution learning strategy transforms the ill-posed training problem, where one input speaker behaviour corresponds to multiple appropriate listener facial reactions, into a well-posed problem, where one input speaker behaviour corresponds to one distribution representing all appropriate facial reactions. We illustrate our approach in Fig. 1 and Fig. 2. The main contributions and novelties of this paper are summarised as follows:

* To the best of our knowledge, we present the first deep learning framework capable of generating multiple appropriate, realistic, and synchronized facial reactions in response to a speaker behaviour.
Our framework introduces a novel appropriate facial reaction distribution learning (AFRDL) strategy that addresses the ill-posed "one-to-many mapping" problem. This strategy reformulates the problem as a "one-to-one mapping" problem, thus providing a well-defined learning objective.
* We propose a novel Reversible Multi-dimensional Edge Graph Neural Network (REGNN) that can forwardly summarise a distribution from multiple real appropriate facial reactions at the training stage, and reversely decode a facial reaction from the predicted facial reaction distribution at the inference stage.
* We show that, using the proposed approach, multiple appropriate, realistic, and synchronized facial reactions can be generated, achieving better performance compared to other existing related solutions, and we provide the first open-source code for the multiple appropriate facial reaction generation task.

## II Related Work

### _Facial reaction theory_

During dyadic interactions, the facial reactions of a listener are shaped by a combination of facial muscle movements. These movements are controlled by person-specific cognitive processes that are primarily influenced by the behaviours expressed by the corresponding speaker [13]. Research conducted by Hess et al. [14] also found that the generation of facial reactions is predominantly influenced by individual-specific cognitive processes, which are influenced not only by the speaker's behaviour but also by the listener's personality [15] and emotional states [16]. For instance, individuals who frequently experience fear possess a more sensitive and easily stimulated amygdala, rendering them more prone to displaying facial reactions indicative of fear. Similarly, experiencing pleasant emotions triggers the contraction of the zygomatic major muscle, resulting in a smiling facial reaction, while confusion enhances the activity of the corrugator muscle, leading to a furrowed brow expression. Therefore, as summarised in [6], in dyadic interactions a broad spectrum of different facial reactions might be _appropriate_ in response to a speaker behaviour, according to the internal states of the listener. This is because human behavioural responses are stimulated by the context the listener experiences [3], which leads to different but appropriate facial reactions being expressed not only by different listeners but also by the same listener under different contexts (e.g., external environments or internal states) [6, 17, 18]. A similar hypothesis has been mentioned in a recent facial reaction generation study [7].

### _Automatic facial reaction generation_

To the best of our knowledge, there have been few studies [19, 20, 21, 10, 7, 8, 9, 11] on automatic facial reaction generation. An early approach [9] proposed a two-stage conditional GAN to generate facial reaction sketches based on the speaker's facial action units (AUs). Their later works [8, 19] exploited more speaker emotion-related features (e.g., facial expression features) to better reproduce the facial reactions expressed by listeners. To consider personalized factors in expressing facial reactions, Song et al. [10, 11] used Neural Architecture Search (NAS) to adaptively explore a person-specific network for each listener. Ng et al. [22] extended and combined the cross-attention transformer with a VQ-variational auto-encoder (VQ-VAE) [23] model to generate a wide range of diverse facial reactions for each listener by leveraging multi-modal speaker behaviours.
Besides generating facial reactions, some studies aim to generate non-verbal head [21] or gesture reactions [24]. However, none of these have attempted to generate multiple appropriate facial reactions from a speaker behaviour [6]. Note that the approach proposed in this paper is different from previous facial expression/display generation methods [25, 26, 27, 28, 29, 30, 31, 32], where the facial images are generated based on manually defined conditions such as pre-defined AUs, landmarks, and audio behaviours, without considering interaction scenarios (i.e., they do not predict reactions from speaker behaviours).

## III Generation Task Definition

Given a speaker behaviour \(B_{S}^{t_{1},t_{2}}=\{A_{S}^{t_{1},t_{2}},F_{S}^{t_{1},t_{2}}\}\) in the time interval \([t_{1},t_{2}]\), the goal is to learn an ML model \(\mathcal{H}\) that can generate multiple different spatio-temporal human facial reactions \(P(F_{L}|B_{S}^{t_{1},t_{2}})=\{p(F_{L}|B_{S}^{t_{1},t_{2}})_{1},\cdots,p(F_{L}|B_{S}^{t_{1},t_{2}})_{N}\}\) that are **appropriate** for responding to \(B_{S}^{t_{1},t_{2}}\), which is formulated as: \[P(F_{L}|B_{S}^{t_{1},t_{2}})=\mathcal{H}(B_{S}^{t_{1},t_{2}}), \tag{1}\] where \(P(F_{L}|B_{S}^{t_{1},t_{2}})=\{p(F_{L}|B_{S}^{t_{1},t_{2}})_{1}\neq\cdots\neq p(F_{L}|B_{S}^{t_{1},t_{2}})_{N}\}\) and each generated facial reaction \(p(F_{L}|B_{S}^{t_{1},t_{2}})_{n}\) should be similar to at least one real facial reaction \(f_{L}(B_{S}^{t_{1},t_{2}})_{m}\) that is appropriate in response to \(B_{S}^{t_{1},t_{2}}\) in the training set: \[p(F_{L}|B_{S}^{t_{1},t_{2}})_{n}\approx f_{L}(B_{S}^{t_{1},t_{2}})_{m}\in F_{L}(B_{S}^{t_{1},t_{2}}), \tag{2}\] where \(F_{L}(B_{S}^{t_{1},t_{2}})=\{f_{L}(B_{S}^{t_{1},t_{2}})_{1},\cdots,f_{L}(B_{S}^{t_{1},t_{2}})_{M}\}\) denotes a set of real facial reactions expressed by listeners in the training set, which are appropriate in response to the speaker behaviour \(B_{S}^{t_{1},t_{2}}\). The above definition corresponds to the offline facial reaction generation task defined by [6] (see [6] for details).

## IV The proposed approach

This section presents our novel FRG approach. We first introduce the pipeline in Sec. IV-A, and then detail the proposed appropriate facial reaction distribution learning (AFRDL) strategy in Sec. IV-B. Finally, we provide the details of the novel REGNN in Sec. IV-C, which represents one of the main components of the AFRDL strategy.

### _Facial reaction generation framework_

This section develops an appropriate FRG model, \(\mathcal{H}=\{\textbf{Enc},\textbf{Cog},\textbf{Mot}\}\), which can generate multiple diverse and appropriate human facial reactions in response to each speaker audio-facial behaviour. As shown in Fig. 2, our model consists of three main modules inspired by the Human Model Processor (HMP) [12]: (i) the **Perceptual Processor** (\(\textbf{Enc}=\{\textbf{Enc}_{\text{A}},\textbf{Enc}_{\text{F}}\}\)) that encodes the raw speaker audio and facial behaviours into a pair of latent audio and facial representations; (ii) the **Cognitive Processor (Cog)** that predicts a distribution representing all appropriate listener facial reactions based on the produced speaker audio and facial representations; and (iii) the REGNN-based **Motor Processor (Mot)** that samples and generates an appropriate facial reaction from the predicted distribution. We also illustrate the model pipeline in Fig. 2.
Fig. 2: Overview of the proposed multiple appropriate facial reaction generation framework. **Step 1:** the **Perceptual Processor** first encodes facial and audio representations from the perceived audio-visual speaker behaviours. **Step 2:** the **Cognitive Processor** then predicts a distribution from the combined audio-visual representation, which represents all appropriate facial reactions in response to the input speaker behaviour. **Step 3:** the REGNN-based **Motor Processor** finally samples and reversely decodes multiple appropriate facial reactions from the learned distribution.

Specifically, the **Perceptual Processor** is a two-branch encoder consisting of a facial encoder \(\textbf{Enc}_{\text{F}}\) (Swin-Transformer [33]) and an audio encoder \(\textbf{Enc}_{\text{A}}\) (VGGish [34]). It takes a speaker's audio and facial signals \(A_{S}^{t_{1},t_{2}}\) and \(F_{S}^{t_{1},t_{2}}\), expressed in the time interval \([t_{1},t_{2}]\), as the input, and generates a pair of latent audio and facial representations \(\bar{A}_{S}^{t_{1},t_{2}}\) and \(\bar{F}_{S}^{t_{1},t_{2}}\) as: \[\begin{split}\bar{A}_{S}^{t_{1},t_{2}}&=\textbf{Enc}_{\textbf{A}}(A_{S}^{t_{1},t_{2}}),\\ \bar{F}_{S}^{t_{1},t_{2}}&=\textbf{Enc}_{\textbf{F}}(F_{S}^{t_{1},t_{2}}).\end{split} \tag{3}\]

Based on \(\bar{A}_{S}^{t_{1},t_{2}}\) and \(\bar{F}_{S}^{t_{1},t_{2}}\), the **Cognitive Processor** first aligns and combines them into a latent speaker audio-facial behaviour representation \(\bar{B}_{S}^{t_{1},t_{2}}\), based on the same attention-based strategy introduced in [35]. Rather than predicting a specific facial reaction from \(\bar{B}_{S}^{t_{1},t_{2}}\), we propose to predict an _appropriate facial reaction distribution graph representation_ \(\bar{Z}_{p}(B_{S}^{t_{1},t_{2}})\) from \(\bar{B}_{S}^{t_{1},t_{2}}\), as multiple facial reactions may be appropriate for responding to each input speaker behaviour. This means that \(\bar{Z}_{p}(B_{S}^{t_{1},t_{2}})\) represents the distribution of all appropriate facial reactions. This is achieved by \(I\) projection heads (i.e., \(I\) fully connected (FC) layers), where each head learns a \(D\)-dimensional vector that is specifically treated as a node feature for \(\bar{Z}_{p}(B_{S}^{t_{1},t_{2}})\). These processes can be formulated as: \[\bar{Z}_{p}(B_{S}^{t_{1},t_{2}})=\textbf{COG}(\bar{B}_{S}^{t_{1},t_{2}}), \tag{4}\] where \(\bar{Z}_{p}(B_{S}^{t_{1},t_{2}})\in\mathbb{R}^{I\times P}\) (\(I\) nodes). This way, _the "one-to-many mapping" problem occurring in the FRG task is addressed by re-formulating it into a "one-to-one mapping" problem at the training stage (one speaker behaviour-to-one distribution representing multiple appropriate facial reactions)_.

Then, we feed all node features to our multi-dimensional edge feature learning (MEFL) block, which consists of \(D\) attention operations, where each attention operation generates an \(I\times I\) attention map describing a specific type of mutual relationship between each pair of nodes. Consequently, \(D\) attention maps describing \(D\) types of relationship cues are produced. Thus, a pair of multi-dimensional directed edge features can be obtained to describe the relationship between each pair of nodes, i.e., each multi-dimensional edge feature \(e_{i,j}^{\bar{Z}_{p}}\) is a \(D\)-dimensional vector produced by concatenating the values at the \(i_{\text{th}}\) row and \(j_{\text{th}}\) column of the \(D\) attention maps. Here, we only keep the \(\mathcal{K}\) directed edges starting from each node that have the largest norms.
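As a rough illustration of the MEFL idea described above (not the paper's exact block), the following sketch builds \(D\) attention maps over \(I\) node features, reads each directed multi-dimensional edge feature as the \(D\)-vector gathered across the maps, and keeps only the \(\mathcal{K}\) outgoing edges with the largest norms per node; the scaled dot-product form, names, and shapes are all assumptions.

```
import torch

def mefl_edges(nodes, Wq, Wk, topk):
    # nodes: (I, P) node features; Wq, Wk: (D, P, P) projections, one pair
    # per attention operation (D attention maps in total).
    D, P = Wq.shape[0], nodes.shape[1]
    maps = torch.stack([
        torch.softmax((nodes @ Wq[d]) @ (nodes @ Wk[d]).T / P ** 0.5, dim=-1)
        for d in range(D)
    ])                                     # (D, I, I)
    edges = maps.permute(1, 2, 0)          # edge feature e_{i,j} = edges[i, j, :]
    # Keep, per node, only the K outgoing edges with the largest feature norms.
    norms = edges.norm(dim=-1)             # (I, I)
    keep = torch.zeros_like(norms, dtype=torch.bool)
    keep.scatter_(1, norms.topk(topk, dim=1).indices, True)
    return edges * keep.unsqueeze(-1), keep

# Example: I = 8 nodes, P = 16 features, D = 4 edge dimensions, K = 3 edges.
nodes = torch.randn(8, 16)
e, adj = mefl_edges(nodes, torch.randn(4, 16, 16), torch.randn(4, 16, 16), topk=3)
```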
Since human facial behaviours can be interpreted by various medium- and high-level primitives (e.g., facial Action Units (AUs) and affects), which are usually mutually correlated [36], we propose to describe each spatio-temporal facial reaction by \(I\) facial attribute time-series primitives, which are represented as a graph consisting of \(I\) nodes, where the presence of each directed edge and its multi-dimensional edge feature are also defined by a MEFL block. Consequently, the relationship between each pair of facial attribute time-series (nodes) can be explicitly described by a pair of multi-dimensional edge features. To allow such a facial reaction graph representation to be generated from the distribution \(\bar{Z}_{p}(B_{S}^{t_{1},t_{2}})\), we propose a novel Reversible Multi-dimensional Edge Graph Neural Network (REGNN) as the **Motor Processor**. It first samples an appropriate facial reaction latent graph representation from the predicted distribution \(\bar{Z}_{p}(B_{S}^{t_{1},t_{2}})\), and then reversely decodes it as a facial reaction graph representation \(\hat{g}_{p}(B_{S}^{t_{1},t_{2}})_{n}\in\mathbb{R}^{I\times D}\). Here, different facial reaction latent graph representations can be sampled from \(\bar{Z}_{p}(B_{S}^{t_{1},t_{2}})\), and thus multiple different facial reactions \(\hat{G}_{p}(B_{S}^{t_{1},t_{2}})=\{\hat{g}_{p}(B_{S}^{t_{1},t_{2}})_{1},\cdots,\hat{g}_{p}(B_{S}^{t_{1},t_{2}})_{N}\}\) can be generated. This can be formulated as: \[\hat{g}_{p}(B_{S}^{t_{1},t_{2}})_{n}=\textbf{Mot}^{-1}(Z_{p}(B_{S}^{t_{1},t_{2}})), \tag{5}\] where \(\textbf{Mot}^{-1}\) denotes the motor processor that reversely infers facial reactions from the predicted distribution \(Z_{p}(B_{S}^{t_{1},t_{2}})\). Subsequently, a 2D facial reaction sequence \(p(F_{L}|B_{S}^{t_{1},t_{2}})_{n}\) can be produced from \(\hat{g}_{p}(B_{S}^{t_{1},t_{2}})_{n}\).

To train the **Cognitive Processor** to accurately predict the distribution of all appropriate facial reactions in response to each input speaker behaviour, the REGNN-based motor processor also learns an appropriate real facial reaction distribution graph representation \(Z_{L}(B_{S}^{t_{1},t_{2}})\) for each speaker behaviour \(B_{S}^{t_{1},t_{2}}\), representing all listeners' real facial reactions that are appropriate as the corresponding facial reaction to \(B_{S}^{t_{1},t_{2}}\).

### _Appropriate facial reaction distribution learning_

Our facial reaction distribution generation strategy aims to address the "one-to-many mapping" problem occurring in FRG models' training (i.e., one input speaker behaviour corresponds to multiple appropriate facial reaction labels) by re-formulating it as a "one-to-one mapping" problem (i.e., one input speaker behaviour corresponds to one distribution representing multiple appropriate facial reactions).

Fig. 3: Illustration of the proposed AFRDL strategy (pseudo-code is provided in the supplementary material). Given a speaker behaviour, the REGNN first encodes all appropriate real facial reactions as a set of latent graph representations.
These representations are then summarised as an appropriate real facial reaction distribution to supervise the Cognitive Processor's training, where the summarised distribution is a graph representation consisting of multiple nodes, and each node is represented by a Gaussian Mixture Model (GMM) summarising multiple facial attribute time-series corresponding to multiple appropriate real facial reactions. Here, the MSE loss function is employed to enforce the distribution predicted by the Cognitive Processor to be similar to the summarised real facial reaction distribution.

As shown in Fig. 3, given an audio-visual speaker behaviour \(B_{S}^{t_{1},t_{2}}=\{A_{S}^{t_{1},t_{2}},F_{S}^{t_{1},t_{2}}\}\) and its corresponding multiple appropriate real facial reactions \(F_{L}(B_{S}^{t_{1},t_{2}})\) expressed by human listeners in the training set, we first construct a set of real facial reaction graph representations \(G_{L}(B_{S}^{t_{1},t_{2}})=\{g_{L}(B_{S}^{t_{1},t_{2}})_{1},\cdots,g_{L}(B_{S}^{t_{1},t_{2}})_{M}\}\) to represent all appropriate real facial reactions defined by \(F_{L}(B_{S}^{t_{1},t_{2}})\), where each node in a graph representation describes a facial attribute time-series and each edge explicitly describes the relationship between a pair of nodes. As before, the presence of each directed edge and its multi-dimensional edge feature are defined by a MEFL block. Since these representations share the same property, i.e., each describes an appropriate facial reaction in response to \(B_{S}^{t_{1},t_{2}}\), we hypothesize that they are drawn from the same distribution. Subsequently, we train the REGNN by enforcing it to map all appropriate real facial reaction graph representations in response to the same speaker behaviour onto a 'ground-truth' (GT) real appropriate facial reaction distribution \(Z_{L}(B_{S}^{t_{1},t_{2}})\) as: \[\begin{split}\bar{g}_{L}(B_{S}^{t_{1},t_{2}})_{m}&=\text{Mot}(g_{L}(B_{S}^{t_{1},t_{2}})_{m}),\\ \bar{g}_{L}(B_{S}^{t_{1},t_{2}})_{m}&\sim Z_{L}(B_{S}^{t_{1},t_{2}}),\quad m=1,2,\cdots M\\ \textbf{subject to}\;\;f_{L}(B_{S}^{t_{1},t_{2}})_{m}&\in F_{L}(B_{S}^{t_{1},t_{2}}),\end{split} \tag{6}\] where \(\bar{g}_{L}(B_{S}^{t_{1},t_{2}})_{m}\) denotes a latent graph representation produced from \(g_{L}(B_{S}^{t_{1},t_{2}})_{m}\), and all latent graph representations are expected to follow the same distribution \(Z_{L}(B_{S}^{t_{1},t_{2}})\). This is achieved by minimizing the sum of the L1 distances between all corresponding latent graph representation pairs in an unsupervised manner: \[\mathcal{L}_{1}=\sum_{m_{1}=1}^{M-1}\sum_{m_{2}=m_{1}+1}^{M}L1(\bar{g}_{L}(B_{S}^{t_{1},t_{2}})_{m_{2}},\bar{g}_{L}(B_{S}^{t_{1},t_{2}})_{m_{1}}). \tag{7}\]

Inspired by the fact that the Gaussian Mixture Model (GMM) is a powerful tool for describing distributed subpopulations (e.g., individual appropriate facial reactions) within an overall population (e.g., all appropriate facial reactions), we propose a novel **Gaussian Mixture Graph Distribution (GMGD)** to represent \(Z_{L}(B_{S}^{t_{1},t_{2}})=\{v_{1}^{Z},v_{2}^{Z},\cdots,v_{I}^{Z}\}\), where each node \(v_{i}^{Z}\) in \(Z_{L}(B_{S}^{t_{1},t_{2}})\) is represented by a Gaussian Mixture Model (GMM) consisting of \(M\) Gaussian distributions (defined as \(\mathcal{N}(\{\mu_{i}^{1},\cdots,\mu_{i}^{M}\},\{\sigma_{i}^{1},\cdots,\sigma_{i}^{M}\})\)).
As a result, the **Cognitive Processor** is trained under the supervision of the GT real facial reaction distribution \(Z_{L}(B_{S}^{t_{1},t_{2}})\), i.e., it is trained to predict an appropriate facial reaction distribution graph representation \(\bar{Z}_{p}(B_{S}^{t_{1},t_{2}})\) from \(B_{S}^{t_{1},t_{2}}\) (formulated in Eq. (4)). This training process is achieved by minimizing the distance between the predicted \(\bar{Z}_{p}(B_{S}^{t_{1},t_{2}})\) and \(Z_{L}(B_{S}^{t_{1},t_{2}})\) as:

\[\mathcal{L}_{2}=\text{MSE}(\bar{Z}_{p}(B_{S}^{t_{1},t_{2}}),Z_{L}(B_{S}^{t_{1},t_{2}})) \tag{9}\]

where MSE denotes the Mean Squared Error. At the inference stage, the well-trained motor processor first samples a facial reaction latent graph representation \(\bar{g}_{p}(B_{S}^{t_{1},t_{2}})_{n}\) from the distribution \(\bar{Z}_{p}(B_{S}^{t_{1},t_{2}})\) predicted by the Cognitive Processor, and then reversely decodes it into a facial reaction.

### _Reversible Multi-dimensional Edge Graph Neural Network_

At the training stage, the REGNN forwardly encodes a GT facial reaction distribution describing all appropriate real facial reactions in response to the input speaker behaviour, which plays a key role in supervising the cognitive processor's training. As shown in Fig. 4, the REGNN consists of \(N\) REGNN layers, which forwardly take a graph \(\mathcal{G}^{0}(\mathcal{V}^{0},\mathcal{E}^{0})\) as the input and generate a graph \(\mathcal{G}^{N}(\mathcal{V}^{N},\mathcal{E}^{N})\) as the output, where \(\mathcal{V}^{0}=\{v_{1}^{0},v_{2}^{0},\cdots,v_{I}^{0}\}\) and \(\mathcal{E}^{0}=\{e_{i,j}^{0}\,|\,v_{i}^{0},v_{j}^{0}\in\mathcal{V}^{0}\;\&\;\mathcal{A}_{i,j}=1\}\) denote the sets of node and edge features contained in the input graph \(\mathcal{G}^{0}(\mathcal{V}^{0},\mathcal{E}^{0})\), respectively, and \(\mathcal{A}\) is the adjacency matrix that defines the connectivity between nodes. Importantly, the REGNN can also reversely output \(\mathcal{G}^{0}(\mathcal{V}^{0},\mathcal{E}^{0})\) from \(\mathcal{G}^{N}(\mathcal{V}^{N},\mathcal{E}^{N})\). In this paper, since only node features represent the target facial attributes/distributions, the REGNN is designed for node feature reasoning. In the following, we present the details of both the forward and reverse propagation mechanisms of the REGNN.

**Forward propagation:** The \(n_{\text{th}}\) REGNN layer takes: (i) the node feature set \(\mathcal{V}^{n-1}\) generated by the \((n-1)_{\text{th}}\) layer as its input node features, which is pre-processed by a normalization layer and a Sigmoid activation; and (ii) the initial edge feature set \(\mathcal{E}^{0}\) as its input edge features, and then outputs a graph \(\mathcal{G}^{n}(\mathcal{V}^{n},\mathcal{E}^{n})\).
This setting ensures the reversibility of the REGNN (explained and derived in the Supplementary Material).

Fig. 4: The proposed Reversible Multi-dimensional Edge Graph Neural Network (REGNN) is made up of \(N\) REGNN layers, where each layer can forwardly and reversely propagate node representations. Pseudo-code of its propagation is provided in the supplementary material.

Specifically, the \(n_{\text{th}}\) REGNN layer first learns a set of edge features \(\mathcal{E}^{n}\) based on not only \(\mathcal{E}^{0}\) but also \(\mathcal{V}^{n-1}\), allowing \(\mathcal{V}^{n-1}\)-related edge features to be used for updating \(\mathcal{V}^{n-1}\) to \(\mathcal{V}^{n}\); i.e., it computes each directed edge feature \(e_{j,i}^{n}\in\mathcal{E}^{n}\) pointing from the node \(v_{j}^{n-1}\) to \(v_{i}^{n-1}\) as:

\[e_{j,i}^{n}=\frac{a_{j,i}^{n}e_{j,i}^{0}}{\sum_{v_{k}^{n-1}\in\mathcal{N}_{v_{i}^{n-1}}}a_{k,i}^{n}e_{k,i}^{0}}, \tag{10}\]

where \(e_{j,i}^{0}\in\mathcal{E}^{0}\) denotes an initial directed edge feature. The employed \(a_{j,i}^{n}\) is a learnable relationship coefficient that captures correlations between the node \(v_{i}^{n-1}\) and its high-order neighboring nodes to define the \(\mathcal{V}^{n-1}\)-related and context-aware (i.e., aware of the high-order neighbors) edge feature \(e_{j,i}^{n}\), which contributes to updating \(v_{i}^{n-1}\) to \(v_{i}^{n}\). The term \(\sum_{v_{k}^{n-1}\in\mathcal{N}_{v_{i}^{n-1}}}a_{k,i}^{n}e_{k,i}^{0}\) regularizes the obtained edge feature. Here, \(a_{j,i}^{n}\) can be computed as:

\[a_{j,i}^{n}=\frac{\exp\left((v_{i}^{n-1}\mathbf{W}_{q}^{n})(v_{j}^{n-1}\mathbf{W}_{k}^{n})^{\top}\right)}{\sum_{v_{k}^{n-1}\in\mathcal{N}_{v_{i}^{n-1}}}\exp\left((v_{i}^{n-1}\mathbf{W}_{q}^{n})(v_{k}^{n-1}\mathbf{W}_{k}^{n})^{\top}\right)}, \tag{11}\]

where \(\mathbf{W}_{q}^{n}\) and \(\mathbf{W}_{k}^{n}\) are learnable weight vectors and \(\mathcal{N}_{v_{i}^{n-1}}\) denotes the adjacent node set of \(v_{i}^{n-1}\). This way, the learned \(a_{j,i}^{n}\) captures information from not only the relationship cues between the corresponding node features \(v_{j}^{n-1}\) and \(v_{i}^{n-1}\) but also the context of the node \(v_{i}^{n-1}\) (i.e., its high-order neighboring nodes \(\mathcal{N}_{v_{i}^{n-1}}\)). Building upon these learned edge features, the REGNN layer then updates each node feature \(v_{i}^{n}\) based on: (i) the node feature \(v_{i}^{n-1}\in\mathcal{G}^{n-1}\) itself; and (ii) the message \(\hat{v}_{i}^{n-1}\) aggregated from all its adjacent nodes, which is decided by the adjacent node feature set \(\mathcal{N}_{v_{i}^{n-1}}\) and the corresponding updated directed edge features \(e_{j,i}^{n}\in\mathcal{E}^{n}\) that point to \(v_{i}^{n-1}\), as:

\[v_{i}^{n}=v_{i}^{n-1}+\hat{v}_{i}^{n-1}, \tag{12}\]

where the message \(\hat{v}_{i}^{n-1}\) is computed as:

\[\hat{v}_{i}^{n-1}=\mathbf{W}_{c}^{n}\sum_{v_{j}^{n-1}\in\mathcal{N}_{v_{i}^{n-1}}}\left(e_{j,i}^{n}\circ v_{j}^{n-1}\right), \tag{13}\]

where \(\mathbf{W}_{c}^{n}\in\mathbb{R}^{1\times D}\) is a learnable weight vector that combines the messages passed along all \(D\) dimensions of the multi-dimensional edge \(e_{j,i}^{n}\).
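Putting Eqs. (10)-(13) together, a minimal PyTorch-style sketch of one forward REGNN layer could look as follows; dense tensors are used for clarity, and the weight constraint of Eq. (15) that makes the layer reversible is not enforced in this illustration:

```python
import torch
import torch.nn as nn

class REGNNLayerSketch(nn.Module):
    """One forward REGNN layer (Eqs. (10)-(13)), sketched with dense tensors."""

    def __init__(self, dim):
        super().__init__()
        self.W_q = nn.Linear(dim, dim, bias=False)  # query projection (Eq. (11))
        self.W_k = nn.Linear(dim, dim, bias=False)  # key projection (Eq. (11))
        self.W_c = nn.Parameter(torch.ones(dim))    # per-dimension combiner (Eq. (13))

    def forward(self, V, E0, A):
        # V : (I, D) node features from the previous layer
        # E0: (I, I, D) initial multi-dimensional edge features, E0[j, i] = e^0_{j,i}
        # A : (I, I) binary adjacency, A[j, i] = 1 if node j is adjacent to node i
        V = torch.sigmoid(V)  # pre-processing applied before each layer

        # Eq. (11): relationship coefficients a_{j,i}, normalised over the
        # neighbours j of each receiving node i
        scores = (self.W_q(V) @ self.W_k(V).T).T     # scores[j, i]
        a = torch.softmax(scores.masked_fill(A == 0, -1e9), dim=0)

        # Eq. (10): context-aware edge features, normalised over neighbours
        w = a.unsqueeze(-1) * E0                     # a_{j,i} * e^0_{j,i}
        E = w / (w.sum(dim=0, keepdim=True) + 1e-8)

        # Eqs. (12)-(13): per-dimension message passing plus residual update
        messages = (E * V.unsqueeze(1)).sum(dim=0)   # sum_j e^n_{j,i} o v_j
        return V + self.W_c * messages               # v^n_i = v^{n-1}_i + v_hat_i
```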
**Reverse propagation:** The proposed REGNN layer is able to reversely infer each input node feature \(v_{i}^{n-1}\) from its output node feature \(v_{i}^{n}\), which can be formulated as:

\[\begin{split}&x_{k}=v_{i}^{n}-\varphi(x_{k-1}),\\ &v_{i}^{n-1}=x_{k},\qquad\textbf{subject to}\qquad x_{k}=x_{k-1},\end{split} \tag{14}\]

where the function \(\varphi\) is defined as the combination of the Sigmoid activation, the edge updating function (Eq. (10)) and the message passing function (Eq. (13)); \(x_{k}\) is computed iteratively until it converges, i.e., \(x_{k}=x_{k-1}\). Here, \(x_{0}\) is set to a non-zero random value. To achieve the aforementioned reversibility (i.e., convergence to \(x_{k}=x_{k-1}\)), which is the key to decoding an appropriate facial reaction from the predicted distribution, the \(\mathbf{W}_{c}^{n}\) defined in Eq. (13), as well as \(\mathbf{W}_{q}^{n}\) and \(\mathbf{W}_{k}^{n}\), are optimized such that the function \(\varphi\), used for both forward and reverse propagation, satisfies the following condition:

\[\varphi(v_{i}^{n-1})=\frac{\varphi(\text{Sig}(v_{i}^{n-1}))}{1+2\|\mathbf{W}_{q}^{n}\mathbf{W}_{k}^{n\top}\|_{2}}, \tag{15}\]

where Sig denotes the Sigmoid activation function. This setting makes \(\varphi\) a contraction mapping, i.e., it is Lipschitz continuous with a Lipschitz constant less than \(1\)[37, 38], and thus adheres to the fixed point theorem: \(\varphi\) is learned to ensure that a fixed point \(x\) satisfies the equation \(x=v_{i}^{n}-\varphi(x)\). The proof and derivation of Eq. (15) for ensuring the reversibility of the proposed REGNN are provided in the supplementary material.
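The reverse pass of Eq. (14) is a plain fixed-point iteration; a minimal sketch (function and argument names are illustrative), which converges because Eq. (15) constrains \(\varphi\) to be a contraction:

```python
import torch

def reverse_node_features(v_out, phi, num_iters=100, tol=1e-6):
    """Eq. (14): recover the layer input v^{n-1} from its output v^n by
    iterating x_k = v^n - phi(x_{k-1}) until x_k ~= x_{k-1}.  Since phi
    has a Lipschitz constant below 1, the fixed point x = v^n - phi(x)
    exists and is unique."""
    x = torch.rand_like(v_out)              # non-zero random initialisation x_0
    for _ in range(num_iters):
        x_new = v_out - phi(x)
        if (x_new - x).abs().max() < tol:   # converged
            return x_new
        x = x_new
    return x
```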
## V Experiments

This section first provides the details of the experimental settings in Sec. V-A; then, Sec. V-B compares our approach with previous facial reaction generation solutions. Finally, we conduct a series of ablation studies to investigate the contributions of different modules.

### _Experimental settings_

**Dataset.** This paper evaluates the proposed approach on video conference clips recorded under various dyadic interaction settings, which are provided by two publicly available datasets: NoXI [39] and RECOLA [40]. As there are only three valid video pairs in the RECOLA dataset, we follow a public facial reaction challenge 1 to combine these two datasets, resulting in 2962 pairs of audio-visual speaker-listener dyadic interaction clips, including 1594 pairs of training clips, 562 pairs of validation clips, and 806 pairs of test clips, where each clip is 30 seconds long. All appropriateness labels are obtained by the same strategy provided in [6]. Note that the employed dataset is different from the challenge, as the UDIVA dataset [41] is not included because it is recorded under in-person dyadic interactions (with a lateral camera view which captures both participants), where the profiles of participants' faces are frequently recorded.

Footnote 1: [https://sites.google.com/cam.ac.uk/react2023/home](https://sites.google.com/cam.ac.uk/react2023/home)

**Implementation details.** In our experiments, the Perceptual Processor takes the facial image sequence of an entire video clip and the log-mel spectrogram features of an entire audio clip as input. At the training stage, we employ the Swin-Transformer [42] pre-trained on FER2013 [43] and the pre-trained VGGish model [34] as the initial facial and audio feature extraction models. Then, the Adam optimizer [44] with a stochastic optimization strategy was employed to train the entire framework in an end-to-end manner, using an initial learning rate of \(10^{-4}\) and a weight decay of \(5\times 10^{-4}\). The maximum number of epochs was set to \(100\), with learning rate decay performed at the \(20^{\text{th}}\) and \(50^{\text{th}}\) epochs, each with a decay rate of \(0.1\). During the inference stage, we empirically set the \(\sigma\) of all GMMs in the GMGD to \(0.6\), based on which the REGNN reversely samples and decodes facial reactions. In this paper, we use the model proposed in [25] to generate facial images from all predicted AUs.

**Evaluation metrics.** We employed the four sets of metrics defined in [6] to evaluate the generated facial reactions, namely in terms of: (i) **Appropriateness**: the distance and correlation between generated facial reactions and their most similar appropriate facial reaction, using FRDist and FRCorr; in addition, we also report the Pearson Correlation Coefficient (PCC) between predictions and their most similar appropriate real facial reaction; (ii) **Diversity**: the variation among 1) frames in each generated facial reaction (**FRVar**), 2) multiple facial reactions generated from the same speaker behaviour (**FRDiv**), and 3) facial reactions generated for different speaker behaviours (**FRDvs**); (iii) **Realism**: the Frechet Inception Distance (FID) used in [6] (**FRRea**); and (iv) **Synchrony**: the Time Lagged Cross Correlation (TLCC) between the speaker facial behaviour and the corresponding generated facial reaction.

### _Comparison to related works_

Fig. 5: Visualisation of the facial reactions generated by different approaches, where early approaches [8, 9, 45] generated some very low-quality facial images, while the predictions of a recent approach [7] are quite different from the ground-truth (i.e., low appropriateness and synchrony). Our approach generated multiple diverse but appropriate, realistic, and synchronized facial reactions from the input speaker behaviour.

Fig. 6: Examples of the low-quality facial images generated from the facial reaction attributes predicted by competitors.

As this represents the first study aiming to generate multiple appropriate facial reactions, we compare our approach with reproduced baselines that have previously been used for generating facial reactions:

* **Huang et al. (S) [8]:** This method employs a two-stage conditional GAN. In the first stage, feature vectors composed of \(17\) different facial action units of the speaker's facial behaviour are used as the condition to predict the listener's facial landmarks. Then, in the second stage, the obtained facial landmarks are employed as the condition to generate the corresponding facial reaction sequence. Here, the speaker's facial behaviour from time \(t-9\) to \(t\) is used to predict the \(t_{\text{th}}\) facial reaction frame. Consequently, this method cannot predict the first ten facial reaction frames. Therefore, in our experiments, its input size is set to \((750,25)\) and the output size is set to \((740,25)\).
* **Huang et al. (C) [9]:** Building upon [8], this method utilizes a feature vector composed of \(8\) distinct facial expressions of the speaker behaviour as a conditioning input. The final facial reaction is generated using a two-stage GAN similar to [8].
As both [8] and [9] require the inputs from time \(t-9\) to \(t\) to predict the outputs at time \(t\), they cannot make predictions for the first ten frames of the listener's facial reaction. Therefore, in our experiments, their input size is set to \((750,25)\), and the output size is set to \((740,25)\).

* **UNet [45]:** UNet is a widely-used model in generative tasks, consisting of symmetric encoder and decoder components. In this experiment, both the encoder and the decoder are set to four layers. The input and output of the network are both time-series data with a shape of \((750,25)\).
* **Ng et al. [7]:** This approach trains a VQ-VAE model to predict a discrete encoding from the input speaker behaviour, representing the corresponding listener's facial reaction. At the inference stage, a transformer-based predictor module is utilized to perform a lookup on the discrete encoding to predict the listener's facial reaction. In our experiments, we customized the size of the hidden layers in the model to accommodate the input sequence size of \((750,25)\) and the output size of \((750,25)\).

The implementation details of these reproduced approaches are provided in the supplementary material. Table I summarises the results achieved in our experiments, demonstrating that our approach outperforms all competitors in generating appropriate, realistic, and synchronized facial reactions, as indicated by lower FRDist, realism, and synchrony values, and higher correlations with the most similar real facial reactions. Specifically, our approach achieves improvements of \(4.37\) and \(2.74\) in FRDist, \(0.071\) and \(0.024\) in FRCorr, and \(0.095\) and \(0.007\) in PCC over the conditional GAN-based approach [9] and the recently proposed VQ-VAE-based approach [7], respectively. We also compare our predictions with those achieved by [7] in Fig. 7. Furthermore, our approach generates diverse facial reactions in response to different speaker behaviours, as well as decent diversity among the frames of each generated facial reaction. Our approach can also generate different facial reactions in response to each speaker behaviour (visualized in Fig. 5). These results demonstrate the effectiveness of our AFRDL strategy in generating multiple different but appropriate, realistic, and synchronized facial reactions. It should be noted that while our approach did not generate facial reactions with as much diversity as the C-GAN-based [8, 9] and VQ-VAE-based approaches [7], such high diversity is partially associated with the generation of abnormal facial reactions (i.e., facial behaviours that are not appropriate or cannot be properly expressed by humans, as illustrated in Fig. 6), which is reflected by their much worse appropriateness, realism, and synchrony performances.

### _Ablation studies_

In this section, we first conduct a series of ablation studies to evaluate the effectiveness/importance of (i) each modality of the speaker behaviour; (ii) the proposed appropriate facial reaction distribution learning (AFRDL) strategy; (iii) the proposed reversible graph model (REGNN); and (iv) the multi-dimensional edge feature learning (MEFL) module. We also provide, in the Supplementary Material, a sensitivity analysis for two main variables: (i) the \(\sigma\) used in defining the Gaussian Mixture Graph Distribution (GMGD) \(Z_{L}(B_{S}^{t_{1},t_{2}})\); and (ii) the dimension \(D\) of the multi-dimensional edge features output by the MEFL module.
**Contributions of different modalities.** The experimental results presented in Table II reveal that both the audio and facial modalities of the speaker behaviour offer valuable cues for generating appropriate facial reactions. In particular, the facial behaviours of the speaker exhibit a greater impact on the performance, yielding better results in terms of appropriateness, diversity, and realism. Since feeding both audio and visual speaker behaviours results in the best performance, audio and visual cues from the speaker behaviour appear to be complementary and relevant for the generation of appropriate facial reactions. Importantly, the proposed approach outperforms several existing methods [7, 8, 9] even when either the audio or the visual modality of the speaker behaviour alone is utilized, further validating our framework's effectiveness.

**Appropriate facial reaction distribution learning (AFRDL) strategy.** Table II reports a comparative analysis of the effectiveness of the proposed AFRDL strategy, for which we developed a variant of our framework with the same architecture but a different training strategy. The variant was trained using the MSE loss, which minimizes the difference between the generated facial reaction and the GT real facial reaction of the input speaker behaviour. The results indicate that the proposed AFRDL strategy is crucial for generating high-quality facial reactions, as the variant achieved significantly worse performance in terms of appropriateness, realism, and synchrony.

**REGNN vs. reversible CNN.** Table II and Fig. 8 also compare the performance achieved by the proposed reversible GNN (REGNN) with that of a widely-used reversible CNN (i-ResNet [46]), keeping the rest of the framework and training strategy unchanged. The results demonstrate that the REGNN-based systems (i.e., the single-value edge graph-based system and the multi-dimensional edge graph-based system) outperform the i-ResNet-based system with substantial improvements across all metrics of appropriateness, realism, and synchrony. As discussed in Sec. V-B, the i-ResNet-based system sometimes generates abnormal facial reactions, which may account for its better diversity performance (illustrated in Fig. 6). In other words, the proposed REGNN predicts more appropriate facial reactions and allows the generated facial image sequences to be more realistic. We hypothesize that this is because the REGNN-based system can explicitly represent the task-specific relationship between each pair of facial attributes in the form of a graph representation, and thereby these relationships can be better modeled.

**MEFL module.** Finally, we found that the proposed MEFL module also provides a clear improvement for facial reaction generation. As seen in Table II, the additional usage of the multi-dimensional edge features generated by our MEFL module provides large improvements in terms of all appropriateness metrics (i.e., \(15\%\), \(135\%\), and \(59\%\) relative improvements in FRDist, FRCorr, and PCC, respectively) as well as two diversity metrics (i.e., FRVar and FRDvs), showing that the task-specific relationships between facial attributes are complex, and thus can be better modeled by multi-dimensional edge features than by single-value edge features.

Fig. 7: Visualisation of the facial attribute predictions. Although our approach is trained to generate appropriate facial reactions, the facial attributes predicted by our approach are still highly correlated with the GT real facial reaction.

Fig. 8: Visualisation of the learned distributions. The distributions learned by the proposed REGNN (depicted in (b)) are clearly more discriminative than those learned by i-ResNet (depicted in (a)).
In contrast, the multi-dimensional edge features only have small impacts on the realism and synchrony of the generated facial reactions.

## VI Conclusion

In this paper, we propose the first deep learning-based framework that opens up a new avenue of research for predicting multiple appropriate human facial reactions to a speaker behaviour. It is the first work that reformulates the "one-to-many mapping" problem occurring when training FRG models as a "one-to-one mapping" problem (one speaker behaviour to one distribution representing multiple appropriate facial reactions), for which a novel reversible GNN (REGNN) and a multiple appropriate facial reaction distribution learning (AFRDL) strategy are proposed. As the first specifically designed multiple appropriate FRG model, our experimental results show that: (i) our approach can generate multiple diverse but appropriate, realistic, and synchronized facial reactions in response to each speaker behaviour, and achieves better performance on appropriateness, realism, and synchrony metrics than all the reproduced existing works; (ii) the proposed REGNN-based facial reaction distribution learning contributes substantially to the promising appropriateness, realism, and synchrony performance achieved by our approach; (iii) both audio and facial speaker behaviours provide relevant and complementary information; (iv) the proposed REGNN is crucial for the success of the AFRDL strategy; and (v) the MEFL module is crucial in generating appropriate facial reactions, as multi-dimensional edge features can comprehensively model task-specific relationships among facial attributes.

**Limitations and future work:** As the first multiple appropriate FRG framework, this paper predicts reactions based only on speakers' non-verbal behaviours, which does not yet achieve very strong performance. Another limitation is that the facial reaction distributions of different speaker behaviours are sometimes similar and thus not discriminative. Our future work will focus on (i) developing more advanced generative algorithms; (ii) considering both verbal and non-verbal behaviours of speakers; and (iii) investigating better ways to represent appropriate facial reaction distributions.
2304.13266
C2PI: An Efficient Crypto-Clear Two-Party Neural Network Private Inference
Recently, private inference (PI) has addressed the rising concern over data and model privacy in machine learning inference as a service. However, existing PI frameworks suffer from high computational and communication costs due to the expensive multi-party computation (MPC) protocols. Existing literature has developed lighter MPC protocols to yield more efficient PI schemes. We, in contrast, propose to lighten them by introducing an empirically-defined privacy evaluation. To that end, we reformulate the threat model of PI and use inference data privacy attacks (IDPAs) to evaluate data privacy. We then present an enhanced IDPA, named distillation-based inverse-network attack (DINA), for improved privacy evaluation. Finally, we leverage the findings from DINA and propose C2PI, a two-party PI framework presenting an efficient partitioning of the neural network model and requiring only the initial few layers to be performed with MPC protocols. Based on our experimental evaluations, relaxing the formal data privacy guarantees C2PI can speed up existing PI frameworks, including Delphi [1] and Cheetah [2], up to 2.89x and 3.88x under LAN and WAN settings, respectively, and save up to 2.75x communication costs.
Yuke Zhang, Dake Chen, Souvik Kundu, Haomei Liu, Ruiheng Peng, Peter A. Beerel
2023-04-26T03:40:18Z
http://arxiv.org/abs/2304.13266v1
# C\({}^{2}\)PI: An Efficient Crypto-Clear Two-Party Neural Network Private Inference

###### Abstract

Recently, private inference (PI) has addressed the rising concern over data and model privacy in machine learning inference as a service. However, existing PI frameworks suffer from high computational and communication costs due to the expensive multi-party computation (MPC) protocols. Existing literature has developed lighter MPC protocols to yield more efficient PI schemes. We, in contrast, propose to lighten them by introducing an empirically-defined privacy evaluation. To that end, we reformulate the threat model of PI and use inference data privacy attacks (IDPAs) to evaluate data privacy. We then present an enhanced IDPA, named distillation-based inverse-network attack (DINA), for improved privacy evaluation. Finally, we leverage the findings from DINA and propose \(\mathbb{C}^{2}\)PI, a two-party PI framework presenting an efficient partitioning of the neural network model and requiring only the initial few layers to be performed with MPC protocols. Based on our experimental evaluations, by relaxing the formal data privacy guarantees, \(\mathbb{C}^{2}\)PI can speed up existing PI frameworks, including Delphi [1] and Cheetah [2], up to \(2.89\times\) and \(3.88\times\) under LAN and WAN settings, respectively, and save up to \(2.75\times\) communication costs.

## I Introduction

With the increasing complexity of deep neural network (DNN) models and their incredible training cost, machine learning inference as a service (MLaaS) has become an inevitable solution, saving significant time, cost, and effort in democratizing ML services, even for non-experts [3]. However, increasing privacy concerns challenge the inference procedure when a client and a server hold the inference input and the network separately and do not want to reveal their private properties to each other. The client could be a patient with sensitive medical data or a homeowner with private images, and the server could be a hospital system or a commercial company holding proprietary trained DNN models. Private inference (PI) has emerged to address the privacy issue in MLaaS. Existing PI frameworks [1, 2, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13] leverage secret sharing (SS) and multi-party computation (MPC) protocols to enable the participants to jointly perform inference without revealing their input and model parameters to each other.

Different from prior PI frameworks [1, 2, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13], where data privacy is formally preserved through cryptographic guarantees, we adopt an empirically-defined client data privacy model [14, 15, 16] to relax PI. In particular, the client's data privacy is defined based on the potential success of inference data privacy attacks (IDPAs) [14, 15]. Specifically, if IDPAs **cannot** recover the client's input data, the client's data privacy is deemed preserved. IDPAs are typically evaluated through the structural similarity index (SSIM) [17], which measures the human-perceptual similarity of two images by considering their luminance, contrast, and structure. Users can set an SSIM value (usually \(0.3\)[15]) as the IDPA's failure threshold, i.e., an SSIM below the threshold indicates a failed recovery. The introduction of IDPA-based privacy enables a finer-grained means of quantifying the client's data privacy than the Boolean characterization associated with cryptographic protocols.
The inability to recover the client's input from accessible layer outputs implies that the neural network unintentionally preserves the client's input privacy, intuitively due to the irreversibility of the network. To investigate the potential for revealing the input from only later layers' activations, we first propose an improved IDPA, i.e., a distillation-based inverse-network attack (DINA), providing a stronger privacy evaluation than the baseline IDPAs [14, 15]. Our attack results indicate that the server indeed often cannot disclose the client's input even if it obtains later layers' outputs. Based on this observation, we propose a novel two-party PI framework, namely, crypto-clear private inference (\(\mathbb{C}^{2}\)PI), which reduces the computational burden of existing PI methods from a new perspective. Specifically, \(\mathbb{C}^{2}\)PI searches for a _boundary layer_ in a model, after which the two parties no longer need the cryptographic primitives to preserve the client's SSIM-based data privacy. This allows the server to independently operate on the remaining layers with significantly lower computation and latency. We name the layers before and after the boundary layer as _crypto layers_ and _clear layers_, respectively, with the boundary layer as the last crypto layer. Furthermore, we leverage a noise-adding mechanism to further thwart IDPAs and enhance the client's data privacy.

The benefit of our \(\mathbb{C}^{2}\)PI is three-fold: _(a) It helps to reduce the computational complexity of existing PI schemes. (b) It protects the architecture of the clear layers, while existing PI frameworks leak the whole network architecture to the client [1, 2, 5, 8]._ It is worth mentioning that carefully designed network architectures are typically considered the intellectual property of network owners [18]. _(c) The introduced fine-grained privacy quantification enables users to trade off PI complexity against the guaranteed level of client data privacy by tuning the IDPA's failure threshold._
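The boundary-layer search described above can be sketched as follows; the attack is treated as a black box returning the best SSIM that a DINA-style inversion achieves from a given depth, and the 0.3 default follows the failure threshold discussed earlier (the interface is hypothetical):

```python
def find_boundary_layer(layers, attack_ssim, client_input, threshold=0.3):
    """Return the index of the earliest boundary layer: the first layer
    whose activations no longer let the IDPA reconstruct the client's
    input, i.e., the attack's SSIM drops below the failure threshold.
    Layers [0..idx] then run under MPC (crypto layers); the rest run in
    plaintext on the server (clear layers)."""
    activations = client_input
    for idx, layer in enumerate(layers):
        activations = layer(activations)        # output visible after this depth
        if attack_ssim(activations) < threshold:
            return idx                          # privacy preserved from here on
    return len(layers) - 1                      # fall back: whole net stays under MPC
```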
2310.04685
Automatic and Efficient Customization of Neural Networks for ML Applications
ML APIs have greatly relieved application developers of the burden to design and train their own neural network models -- classifying objects in an image can now be as simple as one line of Python code to call an API. However, these APIs offer the same pre-trained models regardless of how their output is used by different applications. This can be suboptimal as not all ML inference errors can cause application failures, and the distinction between inference errors that can or cannot cause failures varies greatly across applications. To tackle this problem, we first study 77 real-world applications, which collectively use six ML APIs from two providers, to reveal common patterns of how ML API output affects applications' decision processes. Inspired by the findings, we propose ChameleonAPI, an optimization framework for ML APIs, which takes effect without changing the application source code. ChameleonAPI provides application developers with a parser that automatically analyzes the application to produce an abstract of its decision process, which is then used to devise an application-specific loss function that only penalizes API output errors critical to the application. ChameleonAPI uses the loss function to efficiently train a neural network model customized for each application and deploys it to serve API invocations from the respective application via existing interface. Compared to a baseline that selects the best-of-all commercial ML API, we show that ChameleonAPI reduces incorrect application decisions by 43%.
Yuhan Liu, Chengcheng Wan, Kuntai Du, Henry Hoffmann, Junchen Jiang, Shan Lu, Michael Maire
2023-10-07T04:13:29Z
http://arxiv.org/abs/2310.04685v1
# Automatic and Efficient Customization of Neural Networks for ML Applications

###### Abstract

ML APIs have greatly relieved application developers of the burden to design and train their own neural network models--classifying objects in an image can now be as simple as one line of Python code to call an API. However, these APIs offer the same pre-trained models _regardless_ of how their output is used by different applications. This can be suboptimal as not all ML inference errors can cause application failures, and the distinction between inference errors that can or cannot cause failures varies greatly across applications. To tackle this problem, we first study 77 real-world applications, which collectively use six ML APIs from two providers, to reveal common patterns of _how ML API output affects applications' decision processes_. Inspired by the findings, we propose _ChameleonAPI_, an optimization framework for ML APIs, which takes effect without changing the application source code. ChameleonAPI provides application developers with a parser that automatically analyzes the application to produce an abstract of its decision process, which is then used to devise an application-specific loss function that only penalizes API output errors critical to the application. ChameleonAPI uses the loss function to efficiently train a neural network model customized for each application and deploys it to serve API invocations from the respective application via the existing interface. Compared to a baseline that selects the best-of-all commercial ML API, we show that ChameleonAPI reduces incorrect application decisions by 43%.

## 1 Introduction

The landscape of ML applications has greatly changed, with the rise of ML APIs significantly lowering the barrier for ML application developers. Instead of designing and managing neural network models by themselves via frameworks like TensorFlow and PyTorch, application developers can now simply invoke ML APIs, provided by open-source libraries or commercial cloud service providers, to accomplish common ML tasks like object detection, facial emotion analysis, etc. This convenience thus gives rise to a variety of ML applications on smartphones, tablets, sensors, and personal assistants [9, 29, 64, 49].

Although ML APIs have eased the integration of ML tasks with applications, they are suboptimal in serving different applications with the same neural network models. This issue is particularly striking when applications use the ML API results to make control-flow decisions (also referred to as _application decisions_ in this paper). Different applications may check the result of the same ML API using different control-flow code structures and different condition predicates, a process that we refer to as the application's _decision process_ (see §2 for the formal definition). Due to the heterogeneity across applications' decision processes, we make two observations.

* First, some incorrect ML API outputs may still lead to correct application decisions, with only certain _critical errors_ of the API output affecting the application's decision.
* Second, among all possible output errors of an ML API, which ones are critical _varies_ significantly across applications that use this API. That is, the same API output error may have a much _greater_ effect on one application than on another.

Figure 1 illustrates the decision process of a garbage-classification application HeapsortCypher [48]. It first invokes Google's classification API upon a garbage image. Then, based on the returned labels, a simple logic is used to make the _application decision_ about which one of the pre-defined categories (Recycle, Compost, and Donate) or others the image belongs to. For example, for an input image whose ground-truth label is "Shirt", the correct application decision is Donate, as shown in Figure 1 (b). For this application, when the classification API fails to return "Shirt", the application decision may or may not be wrong. For example, Figure 1 (c) and (d) show two possible wrong API outputs: if the output is "Paper", the application will make a wrong decision of Recycle; however, if the output is "Jacket", the application will make the correct decision of Donate despite not matching the ground-truth label. More subtly, if the API returns a list of two labels, "Shirt" and "Paper", the application would make a correct decision if "Shirt" is ordered before "Paper" by the API, but would make a wrong decision if "Paper" is ordered before "Shirt". The reason is that the application logic, the for loop in Figure 1 (a), checks one API-output label at a time. As we will see later, there are also other ways that applications check the API-output list, which affect application decisions differently.
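For illustration, the decision logic of Figure 1(a) described above behaves like the following sketch; the Recycle and Donate label lists use labels mentioned in this paper, while the Compost labels are made up for illustration:

```python
RECYCLE = {"Plastic", "Wood", "Glass", "Paper", "Cardboard"}
COMPOST = {"Food", "Leaves"}                  # illustrative labels only
DONATE = {"Shirt", "Jacket"}                  # per the examples in the text

def classify_garbage(api_labels):
    """Walk the API's ranked label list and return the first matching
    category (API-order matching); 'Others' if nothing matches."""
    for label in api_labels:                  # labels ranked by confidence
        if label in RECYCLE:
            return "Recycle"
        if label in COMPOST:
            return "Compost"
        if label in DONATE:
            return "Donate"
    return "Others"

# e.g., ["Shirt", "Paper"] -> "Donate", but ["Paper", "Shirt"] -> "Recycle"
```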
As we can see, for a specific application, some errors of an ML API may be critical, like mis-classifying the shirt as "Paper" in the example above, and yet some errors may be non-critical, like mis-classifying the shirt image as "Jacket" or classifying the shirt image as both "Shirt" and "Paper" in the examples above. Which errors are critical varies, depending on the application's decision process. These observations regarding the critical errors specific to each application suggest substantial room for improvement by customizing the ML API, essentially the neural network model underneath the API, for an individual application's decision process. In particular, for a given application, the customized model can afford more of the less critical errors in exchange for fewer of the critical errors that cause wrong application decisions.

Thus, our goal is to allow ML APIs and their underlying neural network models to be _automatically_ customized for a given application, so as to _minimize_ incorrect application decisions _without_ changing the application's source code or the interface between the ML API and software exposed to developers. This way, application developers who do not have the expertise to design and train customized ML models can still enjoy the accessibility of generic ML APIs while getting closer to the accuracy of ML models customized for the application.

No prior work shares the same goal as ours. The closest line of prior work specializes DNN models for given queries [7, 8, 36, 37, 42], but those approaches require application developers to use a domain-specific language (_e.g.,_ SQL [36]) instead of general programming languages, like Java and Python, and mostly focus on reducing the DNN's size. In contrast, we keep both the ML API interface and the application source code intact while avoiding incorrect decisions for ML applications.

With the aforementioned goal, this paper makes two contributions. _First,_ we run an empirical study over 77 real-world applications that collectively use six ML APIs to reveal several common patterns of how the outputs of ML APIs affect application decisions (§2).
Our study identifies two types of ML API output that are used by applications to make control-flow decisions (categorical labels and sentiment scores), and three decision types (True-False, Multi-Choice, and Multi-Select) with different implications regarding which ML API output errors are critical to the application. Our study also quantitatively reveals opportunities for model customization. (1) Although popular image-classification models are trained to recognize as many as 19.8K different labels, the largest number used by any one application for decision making is only 54. Consequently, mis-classifications among the remaining tens of thousands of labels are completely irrelevant to an application. (2) More importantly, applications tend to treat multiple labels (4.7 on average) as one equivalence class in their decision making, such as the labels Plastic, Wood, Glass, Paper, and Cardboard in Figure 1(a). Mis-classifications among the labels inside one equivalence class do not matter. (3) Which labels are relevant to an application's decision making varies greatly across applications, with only 12% of application pairs sharing any labels used for their decision making.

_Second,_ inspired by the empirical study, we propose ChameleonAPI, which customizes and serves the ML models behind the ML API for each given application's decision process, without any change to the existing ML API or the application source code (§3). ChameleonAPI works in three steps. First, it provides a parser that analyzes application source code to extract information about how ML inference results are used in the application's decision process. Based on the analysis result, ChameleonAPI then constructs a loss function that reflects which ML model outputs are more relevant to the given application, as well as the different severity of ML inference errors on the application decisions. The ML model is then retrained accordingly using the new loss function. Finally, when the ML API is invoked by the application at runtime, the customized ML model is used to serve the query.

We evaluate ChameleonAPI on 57 real-world open-source applications that use Google's and Amazon's vision and language APIs. We show that ChameleonAPI's re-trained models reduce 48% of incorrect decisions compared to the off-the-shelf ML models and 50% compared to the commercial ML APIs. Even compared with a baseline that selects the best-of-all commercial ML API, ChameleonAPI reduces 43% of incorrect decisions. ChameleonAPI only takes up to 24 minutes on a GeForce RTX 3080 GPU to re-train the ML model.

## 2 Understanding Application Decision Process

We conduct an empirical study to understand how applications make decisions based on ML APIs (§2.3), and how this decision-making logic implies different severity of ML inference errors (§2.4). This study will reveal why and how to customize the ML API backend for each application.

Figure 1: _An example ML application whose decision depends on the output of an ML API (multi-label classification), but not all errors of the ML API output have the same effect._

As a representative sample of ML APIs, this study focuses on cloud AI services due to their popularity.

### Definitions

#### 2.1.1 Preliminaries:

We begin with basic definitions.

* _Application decision_: the collective control-flow decisions (i.e., which branch(es) are taken) made by the application under the influence of a particular ML API output.
* _Incorrect ML API output_: a situation where the API output differs from the API input's human-labeled ground truth. We refer to such ML API outputs as _API output errors_.
* _Correct decision_: the application decision made if the API output is the same as the human-labeled ground truth of the input.
* _Application decision failure_: a situation where the application decision is different from the correct decision, also referred to as _application failure_ for short in this paper.

#### 2.1.2 Software decision process:

Given these definitions, an application's _software decision process_ (or decision process for short) is the logic that maps an ML API output to an application decision. The code snippet in Figure 1 shows an example decision process, which maps the output of a classification ML API on an image to the image's recycling categorization specific to this application.

#### 2.1.3 Critical and non-critical errors:

For a given decision process, some API output errors will still lead to a correct decision, whereas others will lead to an incorrect decision and hence an application failure. We refer to the former as _non-critical errors_, and the latter as _critical errors_.

### Methodology

Our work focuses on applications that use ML API output to make control-flow decisions. To this end, we look at 77 open-source applications which collectively use six widely used vision and language APIs [10, 64] offered by two popular cloud AI service providers, as summarized in Table 1. These applications come from two sources. First, we study all 50 applications that use vision and language APIs from a recently published benchmark suite of open-source ML applications [65]. Second, given the popularity of image classification APIs [11, 12], we additionally sample 27 applications from GitHub that use Google and Amazon image classification APIs (16 for the former and 11 for the latter). We obtain these 27 by checking close to 100 applications that use image classification APIs and filtering out those that directly print out or store the API output. Every application in our benchmark suite uses exactly one ML API for decision making.

**Threats to validity:** While many applications use the APIs listed in Table 1, a few other APIs are not covered in our study. Some vision and language-related ML tasks are less popular and hence not covered (_e.g.,_ face recognition and syntax analysis). Speech APIs are not covered because their outputs are rarely used to affect application control flow, based on our checking of open-source applications. Finally, our study does not cover applications that use ML APIs offered by other cloud or local providers.

### Understanding the decision mechanism

_Q1: What types of ML API outputs are typically used by applications to make decisions?_ ML APIs produce output of a variety of types. The sentiment analysis API outputs a list of floating-point value pairs (score and magnitude), describing the sentiment of the whole document and every individual sentence; the other five APIs in Table 1 each produce a list of categorical labels ranked in descending order of their confidence scores, which are also part of the output. Some APIs' output also contains other information, like coordinates of bounding boxes, entity names, links to Wikipedia URLs, and so on.
Among all these, only two types have been used in the decision processes of our studied applications: the floating-point pair (score and magnitude) and the categorical labels. For the 63 applications that use categorical-label output from the five APIs (all except analyze_sentiment in Table 1), they each define one or more _label lists_ and check which label list(s) an API output label belongs to. The code snippet of a landmark classification application in Figure 2(a) is an example of this. It calls the label_detection API with a sight-seeing image and checks the output labels to see if the image might contain a Landmark, just an ordinary Building, or a Person. For the 14 applications that use the analyze_sentiment API, they each define several _value ranges_ and check which range the sentiment score and/or magnitude falls in. The code snippet of FoodDelivery [47] in Figure 2(b) is an example. This application calls analyze_sentiment with a restaurant review text, and then checks the returned sentiment score to judge whether the review is negative, positive, or neutral.

_Q2: What types of decisions do applications make?_ We observe three categories of ML-based decision making, which we name following common question types in exams:

(1) True-False decision, where a single label list or value range is defined and one selection is allowed: either the ML API output belongs to this list/range or not. This type occurs in about one third of the applications in our study. For example, the plant management application Plant-watcher [56] (Figure 2(c)) checks to see if the image contains plants or not.

\begin{table} \begin{tabular}{l l r r} ML API name & ML task & Provider & \# of apps \\ \hline label\_detection & Vision: Image classification & Google & 29 \\ detect\_labels & Vision: Image classification & Amazon & 11 \\ object\_localization & Vision: Object detection & Google & 8 \\ analyze\_sentiment & Language: Sentiment analysis & Google & 14 \\ analyze\_entities & Language: Entity recognition & Google & 6 \\ classify\_text & Language: Text classification & Google & 9 \\ \end{tabular} \end{table} Table 1: _Summary of applications used in our empirical study._

(2) Multi-Choice decision, where multiple lists of labels or value ranges are defined, and one selection is allowed. The ML API output will be assigned to _at most one_ list or range; the application's decision-making logic determines which of these lists/ranges the output belongs to, or determines that the output belongs to none of them. This type of decision is the most common, occurring in about 45% of the benchmark applications. The garbage classification application discussed in §1 makes such a Multi-Choice decision. It decides which one of the following classes the input image belongs to: Recycle, Compost, Donate, or none of them.

(3) Multi-Select decision, where multiple label lists or value ranges are defined, and _multiple_ selections are allowed about which label lists or value ranges the ML API output belongs to. This type of decision occurs in close to a quarter of the applications. Figure 2(d) illustrates such an example from the nutrition advisor application The-Coding-Kid [61]. This application defines three label lists to represent nutrition types: Protein, Grain, and Fruit, and it checks to find all the nutrition types present in the input image.

In the remainder of the paper, we will use **target class** to refer to a label list (or a value range) that is used to match against a categorical label (or a value).
For instance, the code snippet in Figure 2(a) has three label lists as its target classes ([Landmark, Sculpture], [Building, Estate, Mansion], and [Person, Lady]), and the code snippet in Figure 2(b) has three value ranges as its target classes (<0.3, >0.6, and in between).

_Q3: How do applications reach Multi-Choice decisions?_ When the ML API outputs multiple labels, the outcome of a Multi-Choice decision varies depending on which _matching order_ is used. First, the matching order can be determined by the API output. For example, the garbage classification application (Figure 1) first checks whether the first label in the API output matches any target class. If so, later API output labels will be skipped, even if they might match a different class. If there is no match for the first label, the second output label is checked, and so on. These labels are ranked by the API in descending order of their associated confidence scores, so we refer to such a matching order as API-order. It is used by 80% of the applications that make Multi-Choice decisions. The matching order can also be specified by the application, referred to as App-order. For instance, regardless of the API output, the application Aander-ETL [1] (Figure 2(a)) always first checks whether the Landmark class matches _any_ output label. If there is a match, the decision is made. Only when it fails to match Landmark will it move on to check the next choice, Building, and so on. This matching order is used by 20% of the applications that make Multi-Choice decisions.

### Understanding the decision implication

_Q4: Does an application need ML APIs that can accurately identify thousands of labels?_ ML models behind popular ML APIs are well trained to support a wide range of applications. For example, Google's and Microsoft's image-classification APIs are capable of identifying more than 10000 labels [43], while Amazon's image-classification API can identify 2580 labels [3]. However, each individual application's decision making only requires classifying the input image into a handful of target classes: 7 at most in our benchmark applications. The largest number of image-classification labels checked by an application is 54, a tiny portion of all the labels an image-classification API could output. Clearly, for any application, a customized ML model that focuses on the target classes used by the application's decision process has the potential to outperform the big and generic ML model behind ML APIs. How to accomplish this customization without damaging the accessibility of ML APIs is the goal of ChameleonAPI.

_Q5: Are there equivalence classes among ML API outputs in the context of application decision making?_ For the 63 applications that make decisions based on API output of categorical labels, they present 121 target classes in total, each containing 4.7 labels on average (3 being the median). Only 35 target classes in 22 applications contain a single label. For the 14 applications that make decisions based on floating-point sentiment score and magnitude, their target classes _all_ contain an infinite number of score or magnitude values. In other words, no class contains just a single value.

Figure 2: _Code snippets from five example applications where ML API output affects control flow decisions in different ways._

Clearly, the wide presence of multi-value target classes creates equivalence classes among outputs returned by the API--errors within one equivalence class are _not_ critical to the corresponding application.
This offers another opportunity for ML customization.

_Q6: How much difference is there between different applications' target classes?_ Overall, the difference is significant. We have conducted pair-wise comparisons between any two applications in our benchmark suite, and found that 88% of application pairs share _no_ common labels in any of their target classes. Similarly, among the 381 labels that appear in at least one application's target classes, 88% of them appear in only one application (_i.e.,_ 335 out of 381 labels). Clearly, there is _little overlap_ among the target classes of different applications, again making a case for _per-application_ customization of the ML models used by the ML APIs.

_Q7: Do different decision mechanisms imply different sensitivity to output errors of ML APIs?_ Even for two applications that have the same target classes, if they make different types of decisions, they will have different sensitivity to ML API output errors--some API errors might be critical to one application, but not to the other. For example, errors that affect the selection of different target classes are equally critical to Multi-Select decisions. However, this is not true for Multi-Choice, where only the first matched target class matters. Furthermore, the matching order of a Multi-Choice decision affects which errors are critical. When API-order is used (_e.g.,_ HeapsortCypher in Figure 1), an error on the first label in the API output is more likely to be critical than an error on other labels in the output. However, when App-order is used (_e.g.,_ Aander-ETL in Figure 2(a)), errors related to labels in the first target class (_e.g.,_ Landmark) are more likely to be critical than those related to labels in later target classes (_e.g.,_ Person). Clearly, to customize ML models for each application, we need to take into account the decision type and, for Multi-Choice decisions, the matching order.

## 3 Design of ChameleonAPI

Inspired by the study in §2, we now present ChameleonAPI, which automatically customizes ML models for applications.

### Problem formulation

**Goal:** For an application that uses ML APIs, our goal is to **minimize critical errors** in the API outputs for this application by efficiently re-training the original generic neural network models underneath these APIs into customized models; our approach stands in contrast to typical approaches that minimize _all_ inference errors. In other words, the new ML model should return outputs that lead the application's decision process to the same decision as if the ground truth of the input were returned by the ML API. To formally state this objective, we denote how an application makes a decision by \(App(API(\mathbf{x}))\), where \(\mathbf{x}\) is the input to the ML API and \(API(\mathbf{x})\) is the API output. Then, for a given application decision process \(App(\cdot)\) and an input set \(\mathbf{X}\), our goal is to train an ML model \(DNN(\cdot)\) such that

\[\min_{\mathbf{x}_{i}\in\mathbf{X}}\left|\{\mathbf{x}_{i}\,|\,App(API(\mathbf{x}_{i}))\neq App(\widehat{API}(\mathbf{x}_{i}))\}\right|,\]
\[\text{where }API(\mathbf{x}_{i})=F(DNN(\mathbf{x}_{i})) \tag{1}\]

Here, \(\widehat{API}(\mathbf{x}_{i})\) is a hypothetical API function that always returns the ground truth of input \(\mathbf{x}_{i}\), and \(F(\cdot)\) represents the post-processing used by the API to translate a DNN output to an API output. For instance, an image classification model's output is a vector of confidence scores between 0 and 1 (one for each label), but the ML API will use a threshold \(\theta\) to filter and return only labels with scores higher than \(\theta\), or return the top \(k\) labels with the highest confidence scores.
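For example, the post-processing \(F(\cdot)\) just described can be sketched as follows (the interface and default values are illustrative):

```python
def api_postprocess(scores, theta=0.5, top_k=None):
    """F(.): turn a DNN confidence vector into the API's ranked label
    list, either by thresholding at theta or by taking the top-k labels.
    scores: dict mapping each label to its confidence in [0, 1]."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    if top_k is not None:
        return [label for label, _ in ranked[:top_k]]
    return [label for label, score in ranked if score >= theta]
```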
Our goal in Eq. 1 differs from the traditional goal of an ML model, which minimizes any error in the API output, _i.e.,_

\[\min_{\mathbf{x}_{i}\in\mathbf{X}}\left|\{\mathbf{x}_{i}\,|\,API(\mathbf{x}_{i})\neq\widehat{API}(\mathbf{x}_{i})\}\right|. \tag{2}\]

Given that it is hard to obtain a DNN with 100% accuracy, the difference between the two formulations is crucial, since not all API output errors in Eq. 2 will cause incorrect application decisions in Eq. 1. Thus, compared to optimizing Eq. 2, optimizing Eq. 1 is more likely to focus the DNN training on reducing the critical errors for the application.

To train a DNN that optimizes Eq. 1, we need to decide whether a DNN inference output \(DNN(\mathbf{x})\) is a critical error or not (_i.e.,_ whether \(App(DNN(\mathbf{x}))\neq App(\widehat{API}(\mathbf{x}))\)) at the end of _every_ training iteration. This decision needs to be made automatically and efficiently. For example, repeatedly running the entire ML application after every training iteration would not work, as it may significantly slow down the training procedure.

**Logical steps of ChameleonAPI:** To customize and deploy the DNN for an application, ChameleonAPI takes three logical steps (Figure 3).

Figure 3: The logical steps of how ChameleonAPI customizes ML APIs for individual applications.

First, ChameleonAPI extracts from an application's source code a _decision-process summary_ (explained shortly), a succinct representation of the application's decision process, which will be used to determine whether a DNN inference error is critical (details in §3.3). Second, ChameleonAPI converts a decision-process summary to a _loss function_, which can be directly used to train a DNN (details in §3.2). This loss function only penalizes DNN outputs that lead to critical errors with respect to a given application. Finally, the loss function will be used to train a customized DNN for this particular application's ML API invocations (§3.4).

**A decision-process summary** is a succinct abstraction of the application that contains enough information to determine whether a DNN inference output causes a critical error or not. Specifically, it includes three pieces of information (defined in §2.3):

* _Composition of target classes:_ the label list or value range of each target class;
* _Decision type:_ True-False, Multi-Choice, or Multi-Select;
* _Matching order_ over the target classes: API-order or App-order, if the application makes a Multi-Choice decision.

For a concrete example, the decision-process summary of the garbage classification application in Figure 1 contains (1) three label lists representing three target classes: Recycle, Compost, and Donate; (2) the Multi-Choice type of decision; and (3) the matching order of API-order.

**What is changed, what is not:** ChameleonAPI does _not_ change the ML API or the application source code. Unlike recent work that aims to shrink the size of DNNs or speed them up [36, 53, 37], we do not change the DNN architecture (shape and input/output interface); instead, we train the DNN to minimize critical errors. That said, deploying ChameleonAPI has two requirements.
First, the application developers need to run ChameleonAPI's parser script to automatically extract the decision-process summary. Second, an ML model needs to be retrained for each application, instead of serving the same model to all applications. The remainder of this section begins with the design of the application-specific loss function based on the decision-process summary, followed by how to extract the decision-process summary from the application, and finally, how the customized ML models are used to serve ML API queries.

### Application-specific loss function

Given Eq. 1, ChameleonAPI trains a DNN model with a _new loss function_, which only penalizes critical errors of an application, rather than all DNN inference errors. Since decision processes vary greatly across applications (§2.4), we first explain how to conceptually capture different decision processes in a generic description, which allows us to derive the mathematical form of ChameleonAPI's loss function later.

**Generalization of decision processes:** For each application in our study (§2.2), our insight is that its decision process can always be viewed as traversing a sequence of conditional _checks_ until an _end-of-decision_ (_EOD_) occurs:

\[1^{\text{st}}\text{ check: }\langle\mathbf{y},c^{(1)}\rangle\to M^{(1)}\]
\[\ldots\]
\[j^{\text{th}}\text{ check: }\langle\mathbf{y},c^{(j)}\rangle\to M^{(j)},\ \mathbf{EOD}\]
\[\ldots\]

where the \(j\)-th check takes as input the DNN output \(\mathbf{y}\) and one of the target classes \(c^{(j)}\), and returns a binary \(M^{(j)}\) indicating whether \(\mathbf{y}\) _matches_ the condition of \(c^{(j)}\), as well as a binary decision of whether this check happens before the EOD. The set of target classes successfully matched before the EOD will be those selected by the application. Figure 4 shows (a) an example application, (b) the decision-process summary, and (c) the generic description for this application's decision process and a DNN output. This generic description (_e.g._, the traversal order of the target classes, how a match is determined in a check, and when the EOD occurs) will depend on the information in the decision-process summary and the DNN output \(\mathbf{y}\). We stress that this generic description may _not_ apply to all applications, but it does apply to all applications in our study (§2.2).

**Categorization of critical errors:** Importantly, this generic description helps to categorize critical errors:

* _Type-1 Critical Errors:_ A correct target class \(c\) is not matched before EOD, but would be if EOD occurred later.
* _Type-2 Critical Errors:_ A correct target class \(c\) is never matched, before or after the EOD.
* _Type-3 Critical Errors:_ An incorrect target class \(c\) is matched before EOD.

Figure 4: The generic description (shown in (c)) of an application (whose source code is shown in (a) and decision-process summary in (b)) on a DNN inference output \(\mathbf{y}\).

A useful property of this categorization is that any wrong decision (a correct target class not being picked, or an incorrect target class being picked) falls in a unique category, and non-critical errors do not belong to any category. In other words, as long as the loss function penalizes the occurrences of each category, it will only capture critical errors.
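To illustrate, the sketch below interprets a decision-process summary in this check-until-EOD form for a Multi-Choice, App-order application and assigns a wrong outcome to one of the three categories; the summary encoding and helper names are illustrative assumptions of this sketch, not ChameleonAPI's actual representation.

```python
def matches(y, target_class, theta=0.5):
    """One check <y, c>: true iff any label of class c scores >= theta."""
    return max(y.get(label, 0.0) for label in target_class["labels"]) >= theta

def multi_choice_decision(y, target_classes, theta=0.5):
    """Traverse target classes in App-order; EOD occurs at the first match."""
    for c in target_classes:
        if matches(y, c, theta):
            return c["name"]    # EOD: the first matched class decides
    return None                 # no target class selected

def critical_error_type(y, target_classes, correct, theta=0.5):
    """None for a correct (or non-critical) outcome, else the error category."""
    if multi_choice_decision(y, target_classes, theta) == correct:
        return None             # remaining label errors are non-critical
    by_name = {c["name"]: c for c in target_classes}
    if correct is None:
        return "Type-3"         # an incorrect class matched before EOD
    if not matches(y, by_name[correct], theta):
        return "Type-2"         # the correct class is never matched
    return "Type-1"             # correct class matched, but only after EOD
```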
**ChameleonAPI's first attempt at a new loss function:** To understand why it is difficult to penalize critical errors and critical errors _only_, we first consider the common practice of assigning a higher weight to the loss of a DNN output if the ground truth of the input will lead to a selection of some target classes (_e.g.,_ [26, 35, 63]). Henceforth, we refer to this basic design of loss function as \(\text{ChameleonAPI}_{basic}\). At best, \(\text{ChameleonAPI}_{basic}\) might improve the DNN's _label-wise_ accuracy on inputs whose ground-truth decision selects some target classes. However, as elaborated in §2.3, we also need to consider which labels belong to the same target class, the decision type, and the matching order of an application's decision process in order to capture the three types of critical errors. For instance, in the garbage-classification application (Figure 1), without knowing the label lists of each target class, \(\text{ChameleonAPI}_{basic}\) will give an equal penalty to a critical error of mis-classifying a \(\mathtt{Paper}\) image as \(\mathtt{Wood}\) and a non-critical error of mis-classifying a \(\mathtt{Paper}\) image as \(\mathtt{Shirt}\). Similarly, without knowing the matching order, \(\text{ChameleonAPI}_{basic}\) will equally penalize the outputs [Plastic, Jacket] and [Jacket, Plastic], but only the latter leads to the correct output because Jacket is matched first.

**ChameleonAPI's loss function:** ChameleonAPI leverages the categorization of critical errors to systematically derive a loss function that penalizes each type of critical error. To make it concrete, we explain ChameleonAPI's loss function for "label-based API, Multi-Choice type of decision, and App-order" (_e.g.,_ Figure 4). Appendix §B details the loss functions of other decision processes. The loss function of such applications has three terms, each penalizing one type of critical error:

\[L(\mathbf{y})=\overbrace{\text{Sigmoid}\left(\min\left(\max_{l\in\cup_{c<\hat{c}}G_{c}}\mathbf{y}[l],\ \max_{l\in G_{\hat{c}}}\mathbf{y}[l]\right)-\theta\right)}^{\textbf{Type-1 Critical Errors}}+\overbrace{\text{Sigmoid}\left(\theta-\max_{l\in G_{\hat{c}}}\mathbf{y}[l]\right)}^{\textbf{Type-2 Critical Errors}}+\overbrace{\text{Sigmoid}\left(\max_{l\in\cup_{c<\hat{c}}G_{c}}\mathbf{y}[l]-\theta\right)}^{\textbf{Type-3 Critical Errors}} \tag{3}\]

Here, \(\mathbf{y}[l]\) denotes the score of the label \(l\), \(G_{c}\) denotes the set of labels of target class \(c\), \(\hat{c}\) denotes the correct (_i.e.,_ ground-truth) target class, and the sigmoid function \(\text{Sigmoid}(x)=\frac{1}{1+e^{-x}}\) incurs a higher penalty on a greater positive value.

_Why does it capture the critical errors?_ Given that this application is Multi-Choice, the EOD will occur right after the first match of a target class, _i.e.,_ the first check with a \(c\) such that \(\max_{l\in G_{c}}\mathbf{y}[l]\geq\theta\).

* A Type-1 critical error occurs if (1) the correct target class \(\hat{c}\) is matched _and_ (2) it is matched after the EOD. First, the correct target class \(\hat{c}\) is matched if and only if at least one of its labels has a score above the confidence threshold, so \(\max_{l\in G_{\hat{c}}}\mathbf{y}[l]\geq\theta\).
Second, this match happens after the EOD if and only if some target class \(c\) before \(\hat{c}\) (_i.e.,_ \(c<\hat{c}\)) is matched, so \(\max_{l\in\cup_{c<\hat{c}}G_{c}}\mathbf{y}[l]\geq\theta\). Put together, the first term of Eq. 3 penalizes any occurrence of these conditions.
* A Type-2 critical error occurs if no label in the correct target class \(\hat{c}\) has a score high enough for \(\hat{c}\) to be matched, _i.e.,_ \(\max_{l\in G_{\hat{c}}}\mathbf{y}[l]<\theta\), so the second term of Eq. 3 penalizes any occurrence of this condition.
* A Type-3 critical error occurs if any incorrect target class \(c\) before \(\hat{c}\) (_i.e.,_ \(c<\hat{c}\)) has a label with a score high enough for \(c\) to be matched, _i.e.,_ \(\max_{l\in G_{c}}\mathbf{y}[l]\geq\theta\), so the third term of Eq. 3 penalizes any occurrence of this condition.

To train a DNN, the loss function must be differentiable with respect to the DNN output \(\mathbf{y}\). Eq. 3 uses the max function several times. Though max is not naturally differentiable everywhere, it can be closely approximated by well-known differentiable forms provided by PyTorch's differentiable operators [55]; a minimal sketch of this loss is given below.

### Extracting applications' decision process

The current prototype of ChameleonAPI's program analysis supports Python applications that make decisions based on the categorical label output or the floating-point output of ML APIs. We first discuss how it works for ML APIs with categorical label output, like all the APIs in Table 1 except for analyze_sentiment. We will then discuss a variant that works for most use cases of analyze_sentiment.

Given application source code, ChameleonAPI first identifies all the invocations of ML APIs. For every invocation \(I\) in a function \(f\), ChameleonAPI then identifies all the branches whose conditions have a data dependency upon the ML API's label output. We refer to these branches as \(I\)-branches. If there is no such branch in \(f\), ChameleonAPI checks the call graph, and analyzes up to 2 levels of callers and up to 5 levels of callees of \(f\) until such a branch is identified. If no such branch is identified after this, ChameleonAPI considers the ML API invocation \(I\) to not affect application decisions and hence does not consider any optimization for it. If some \(I\)-branches are identified, ChameleonAPI records the top-level function analyzed, \(F\), and moves on to extract the decision-process summary in the following steps.

**What are the target classes?** ChameleonAPI figures out all the target classes and their composition in two steps. The first step leverages symbolic execution and constraint solving to identify all the labels that belong to _any_ target class. Specifically, ChameleonAPI applies symbolic execution to function \(F\), treating the parameters of \(F\) and the label output of \(I\) as symbolic (_i.e.,_ the symbolic execution skips the ML API invocation \(I\) and directly uses \(I\)'s symbolic output in the remaining execution of \(F\)). Since applications typically match only one label in the API output at a time (as observed in §2.3), we set the label array returned by \(I\) to contain one element (label) and use a symbolic string to represent it. Through symbolic execution, ChameleonAPI obtains constraints for every path that involves an \(I\)-branch; solving them tells ChameleonAPI which labels need to be in the output of the ML API in order to execute each unique path--essentially all the labels that belong to any target class.
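As promised above, here is a minimal PyTorch sketch of the Eq. 3 loss for a single training example; the encoding of target classes as label-index tensors (in matching order) and the fixed \(\theta\) are illustrative assumptions of this sketch, not ChameleonAPI's actual code.

```python
import torch

def multi_choice_app_order_loss(y, class_label_indices, c_hat, theta=0.5):
    """Eq. 3: penalize the three types of critical errors.

    y: 1-D tensor of per-label confidence scores (the DNN output);
    class_label_indices: one LongTensor of label indices per target
    class, listed in the application's matching order;
    c_hat: position of the correct target class in that order.
    """
    score_correct = y[class_label_indices[c_hat]].max()
    earlier = class_label_indices[:c_hat]      # classes checked before c_hat
    if earlier:
        score_earlier = torch.cat([y[idx] for idx in earlier]).max()
        type1 = torch.sigmoid(torch.min(score_earlier, score_correct) - theta)
        type3 = torch.sigmoid(score_earlier - theta)
    else:
        # c_hat is the first class: no earlier class can block the match
        type1 = type3 = y.new_zeros(())
    type2 = torch.sigmoid(theta - score_correct)
    return type1 + type2 + type3
```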
One potential concern with this first step is that a solver may output only one, instead of all, values that satisfy a constraint. Fortunately, the symbolic execution engine used by ChameleonAPI, NICE [31], turns Python code into an intermediate representation where each branch is in its simplest form. Take Figure 2(d) as an example: the source-code branch `if obj.name in Protein` is transformed into three branches where `obj.name` is compared with "Hamburger", "Meat", and "Patty" separately, allowing us to capture all three labels by solving three separate path constraints.

The second step groups these labels into target classes by comparing their respective paths: if two API outputs, each with one label, lead the program to follow the same execution path at the source-code level, these two labels belong to the same target class. For example, in Figure 2(d), the execution path is exactly the same when the label_detection API returns ["Hamburger"] as when it returns ["Meat"], with all function parameters and other API output fields being the same. Consequently, we know that the labels Hamburger and Meat belong to the same target class. To figure out the path, ChameleonAPI simply executes function \(F\) using each input produced by the constraint solver and traces the source-code execution path using the Python trace module.

One final challenge is that ChameleonAPI needs to identify and exclude the path where none of the target classes is matched (_e.g.,_ the "It is others." path in Figure 2(a)). We achieve this by carefully setting the default solution in the constraint solver to be an empty string, which no ML API in this paper can output. This way, whenever this default solution is output, ChameleonAPI knows that the corresponding path matches no target class.

**What is the type of decision?** When only one target class is identified, ChameleonAPI reports a True-False decision type. Otherwise, ChameleonAPI decides whether the decision type is Multi-Choice or Multi-Select by checking the source-code execution path associated with every target-class label obtained above. If any execution evaluates an \(I\)-branch _after_ another \(I\)-branch has already been evaluated to be true, ChameleonAPI reports a Multi-Select decision type; otherwise, ChameleonAPI reports a Multi-Choice decision type.

**What is the matching order over the target classes?** To tell whether a Multi-Choice decision is made through API-order as in Figure 1 or App-order as in Figure 2(a), ChameleonAPI first identifies all the for loops that iterate through the label array output by the ML API and have a control dependency with \(I\)-branches, _e.g.,_ the `for l in labels` in Figure 2(a) and the `for obj in response.label_annotations` in Figure 1. ChameleonAPI then checks how many such output-iterating loops there are. If there is only one and this loop is not inside another loop, as in Figure 1, ChameleonAPI considers the matching order to be API-order, as the application only iterates through each output label once, with the matching order determined by the output array arranged by the ML API. Otherwise, ChameleonAPI considers the matching order to be App-order. This is the case for the example shown in Figure 2(a), where three output-iterating loops are identified, each of which matches one target class in an order determined by the application: the Landmark target class, followed by Building, and finally Person.
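A compact sketch of the label-grouping step just described; `trace_path` stands in for the Python trace-module instrumentation and `run_F` for executing function \(F\) with the API output stubbed, both assumptions of this sketch.

```python
from collections import defaultdict

def group_labels_by_path(candidate_labels, run_F, trace_path):
    """Labels whose single-label API outputs drive F down the same
    source-code execution path belong to the same target class."""
    groups = defaultdict(list)
    for label in candidate_labels:
        # Execute F with the API output fixed to [label], all other
        # parameters held constant, and record the executed path.
        path = trace_path(lambda: run_F(api_output=[label]))
        groups[tuple(path)].append(label)
    return list(groups.values())   # each group forms one target class
```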
**How to handle floating-point output of ML APIs?** Recall from §2.3 that some ML APIs, _e.g.,_ analyze_sentiment, have floating-point output, and the application defines several value ranges to put each floating-point output into one category. To handle this type of API, ChameleonAPI needs to identify the value range of each target class, which is not supported by NICE and other popular constraint solvers. Fortunately, many applications directly compare API output with constant values in \(I\)-branches, giving ChameleonAPI a chance to infer the value ranges. For these applications, ChameleonAPI first extracts the constant values that are compared with API output in \(I\)-branches, _e.g.,_ 0.3 and 0.6 in Figure 2(b). ChameleonAPI then forms tentative value ranges using these numbers, like -1 to 0.3, 0.3 to 0.6, and 0.6 to 1 for Figure 2(b) (-1 and 1 are the smallest and largest possible score outputs of analyze_sentiment based on the API manual). To confirm these value ranges and figure out the boundary situations, ChameleonAPI then executes function \(F\) with all the boundary values, as well as some values in the middle of each range. By comparing which values lead to the same execution path, ChameleonAPI finalizes the value ranges. For the example in Figure 2(b), after executing with score set to -0.35, 0.3, 0.45, 0.6, and 0.8, ChameleonAPI settles on the final value ranges: (-1, 0.3), [0.3, 0.6), and [0.6, 1).

**Limitations:** The static analysis in ChameleonAPI does not handle the iterated object of while loops, unfolded loops, and recursive functions. For complexity reasons, ChameleonAPI only checks caller and callee functions up to limited levels, and hence may miss some \(I\)-branches far away from the API invocation. ChameleonAPI's ability to identify target classes is limited by the constraint solver. ChameleonAPI assumes that different source-code paths correspond to different target classes, which in theory could be wrong if the application behaves exactly the same under different execution paths.

### Putting them together

We put these components together into an ML-as-a-Service workflow shown in Figure 5. First, when an application (\(A\)) is developed or updated, the developers run a parser (described in §3.3) provided by ChameleonAPI on \(A\)'s source code to extract the decision-process summary for \(A\). The developers can then upload the decision-process summary to ChameleonAPI's backend together with a unique application ID (which will later be used to identify queries from the same application).

Footnote 3: In many MLaaS offerings [2, 20], a connection between the application and the MLaaS backend is commonly created before the application issues any queries. Existing MLaaS already allows applications to specify the application ID via the connection between the application and the backend.

ChameleonAPI's backend then uses the received decision-process summary to construct a new application-specific loss function (described in §3.2). When a DNN is trained using the new loss function, its inference results will lead to fewer critical errors (_i.e._, incorrect application decisions) for application \(A\). In our prototype, ChameleonAPI uses the new loss function to re-train an off-the-shelf pre-trained DNN, a common practice to save training time (see §5 for quantification).
The DNN re-training uses an application-specific dataset sampled from the dataset used by the pre-trained generic DNN (see Table 2 and §4), so that each target/non-target class is selected by the ground-truth decisions of the same number of inputs. Finally, the ChameleonAPI backend maintains a set of DNN models, each customized for an application and keyed by the application ID. When application \(A\) invokes an ML API at run time, the ChameleonAPI backend will use the application ID associated with the API query to identify the DNN model customized for \(A\), run the DNN on the input, and return the inference result of the selected model to the application.

Note that ChameleonAPI can also be used to customize ML models that run locally behind the ML APIs, instead of in the cloud through ML service providers. In this case, developers run the ChameleonAPI parser on their application and save the parser's result into a local file. This local file is then consumed to help re-train an off-the-shelf DNN into a customized DNN that serves the application.

Figure 5: Workflow of ChameleonAPI.

## 4 Implementation

**Extractor of decision-process summary:** The current prototype of ChameleonAPI is implemented for Python applications that use Google or Amazon ML APIs. It takes as input the application source code and returns as output the decision-process summary in JSON format. It uses the NICE symbolic execution engine [31] and the CVC5 constraint solver [5] to identify target classes, and uses the Python static analysis frameworks Pyan [46] and Jedi [24] to identify the decision type and the matching order. In particular, it identifies the object that is iterated through by a for loop via the iter expression in each for-loop header, which is used to distinguish Multi-Choice from Multi-Select decisions and to determine the matching order.

**ML re-training:** The re-training module is implemented in PyTorch v1.10 and CUDA 11.1. It uses a decision-process summary to construct a new loss function (see §3.2), replaces the built-in loss function in PyTorch with the new loss function, and uses the common forward and backward propagation procedure to re-train an off-the-shelf pre-trained DNN model (explained next).

**Generic models:** Without access to the models and the training data used by commercial ML services, we use open-sourced pre-trained DNNs and their training datasets as a proxy, which are summarized in Table 2. These DNNs are trained on the "training" portion of their respective datasets. They are trained to achieve good accuracy over a wide range of labels, and we have confirmed that their accuracies in terms of application decisions are similar to the real ML APIs (§5.2).

**Training data:** We make sure that the labels included in these datasets cover the labels used in the decision processes of the applications in our study. An exception is text classification: to the best of our knowledge, there is no open-source dataset that covers the classes in Google's text classification API. Instead, we use the Yahoo Question topic classification dataset [30], whose classes are similar to those used in the applications. Instead of training DNNs on all the training data, most of which does not match any target class of an application, we create a downsampled training set for ChameleonAPI and \(\text{ChameleonAPI}_{basic}\).
For each application, we randomly sample (without replacement) its training data such that each target class and the non-target class (not matching any target class) is the correct decision for the same number of training inputs, which, depending on the application, ranges from 12K to 40K. With such a training set, \(\text{ChameleonAPI}_{basic}\) is equivalently implemented by training on the downsampled training set using the conventional loss function (_i.e.,_ cross-entropy loss for classification tasks). Moreover, the downsampled training set significantly speeds up DNN re-training (§5.2).

\begin{table} \begin{tabular}{l l l} \hline \hline & Dataset & Generic model \\ \hline Image Classification & OpenImages [43] & TResNet-L [6] \\ \hline Object Detection & COCO [14] & Faster-RCNN [57] \\ \hline Sentiment Analysis & Amazon review [38] & BERT [18] \\ \hline Text Classification & Yahoo [30] & BERT [18] \\ \hline Entity Recognition & conll2003 [62] & BERT [18] \\ \hline \hline \end{tabular} \end{table} Table 2: The ML APIs and datasets in evaluation.

## 5 Evaluation

Our evaluation aims to answer the following questions: How much can ChameleonAPI reduce incorrect application decisions? How long does it take ChameleonAPI to customize DNN models for applications? And why does ChameleonAPI reduce incorrect application decisions where \(\text{ChameleonAPI}_{basic}\) falls short?

### Setup

**Applications:** We have applied ChameleonAPI to all 77 applications summarized in Table 1. Due to space constraints, our discussion below focuses on the 57 applications that involve three popular ML tasks--image classification, object detection, and text classification--and omits the remaining 20 applications that involve sentiment analysis and entity recognition. The results of the latter show similar trends of advantage from ChameleonAPI and are available in Appendix §D.

**Metrics:** For each scheme (explained shortly) and each application, we calculate the _incorrect decision rate_ (_IDR_): the fraction of testing inputs whose application decisions do not match the correct application decisions (_i.e.,_ decisions based on human-annotated ground truth).

**Schemes:** We compare the results of these schemes:

* _Various commercial ML APIs_: the results returned by ML APIs of three service providers (Google [20], Amazon [2], and Microsoft [50]).
* _Best-of-all API*_: a hypothetical method that queries ML APIs from those three service providers on each input and picks the best output based on the classic definitions of accuracy: label-wise recall for classification tasks and mean-square error of floating-point output for sentiment analysis. This serves as an idealized reference for recent work [11, 12], which tries to select the best API output with high label-wise accuracy.
* _Generic models_: the open-sourced generic models based on which the next three schemes are re-trained. They serve as a reference without customization and achieve similar accuracy to the commercial APIs. Their details are explained in Section 4.
* _Categorized models_: This scheme pre-trains a number of specialized models. Each specialized model replaces the last layer of the generic model so that it outputs the confidence scores for a smaller number of labels representing a common category (e.g., "dog", "animal", "person" and a few other labels represent the "natural object" category), and is fine-tuned from the generic model accordingly. A simple parser checks which labels are used by an application.
If all the labels belong to one category, the corresponding model specialized for this category is used to serve API calls from this application. If the labels belong to multiple categories, multiple specialized models will be used, as we explain later. We set up 35 categories for image classification and 7 categories for object detection based on the Wikidata knowledge graph [67], as well as 15 categories for text classification based on the inherent hierarchy in Google's text-classification output. More details of how we designed these categories are available in Appendix §C. Note that we have designed this scheme to represent a middle point in the design space between the generic model and the ChameleonAPI approach: on one hand, this scheme offers some application customization, but not as much as ChameleonAPI (e.g., it ignores which labels belong to the same target class, the decision type, and the matching order used by the application); on the other hand, this scheme requires a simpler parser compared to ChameleonAPI.
* _\(\text{ChameleonAPI}_{basic}\)_: the model re-trained with ChameleonAPI's training data, which concentrates on labels used by the application, but with the conventional loss function. Like Categorized models, this scheme only needs a simple parser that extracts which labels are used by the application, and does not make use of the other application information that ChameleonAPI uses. Unlike Categorized models, this scheme prepares a customized model for each application, instead of relying on a small number of categorized models.
* _ChameleonAPI_ (our solution): the model re-trained with our training data and loss function.

**Testing data:** For the same application, all schemes are tested against the same testing input set. The testing set of an application is randomly sampled from the "testing" portion of the dataset associated with the application's generic model (Table 2). We make sure that _no_ testing input appears in the training data. Like the creation of ChameleonAPI's training data (§4), by default we randomly sample the testing data such that each target class and the non-target class (not matching any target class) appear as the correct decision for the same number of testing inputs, which ranges from 1.2K to 4K. This is similar to the testing sets used in related work on ML APIs (_e.g.,_ [11, 36, 37, 65]). Such data downsampling is commonly used in ML [19, 45]. Other than in Figure 9, we use this as the default testing dataset.

**Hardware setting:** We evaluate ChameleonAPI and other approaches on a GeForce RTX 3080 GPU and an Intel(R) Xeon(R) E5-2667 v4 CPU with 62GB memory.

### Results

**Overall gains:** Measured by the average incorrect decision rate (IDR) across all applications, the most accurate scheme is ChameleonAPI, with an IDR of 0.16, and the least accurate scheme is Generic models, with an IDR of 0.31. In other words, ChameleonAPI successfully reduces the number of incorrect decisions of its baseline model by almost 50%. \(\text{ChameleonAPI}_{basic}\) (0.22), Categorized models (0.28), and Best-of-all API* (0.28) have IDRs in between. The advantage of ChameleonAPI, and even \(\text{ChameleonAPI}_{basic}\), over the other schemes is consistent across all three types of applications that make different types of decisions, as shown in Table 3. In fact, ChameleonAPI offers the highest accuracy by a clear margin for every single application in our evaluation, as shown in Figure 6.
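For reference, the IDR metric defined in the setup reduces to the following computation (a sketch, with `app_decision` again standing in for the application's decision logic):

```python
def incorrect_decision_rate(inputs, api_output, ground_truth, app_decision):
    """IDR: fraction of testing inputs whose application decision
    disagrees with the decision on human-annotated ground truth."""
    wrong = sum(app_decision(api_output[x]) != app_decision(ground_truth[x])
                for x in inputs)
    return wrong / len(inputs)
```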
To better compare the ChameleonAPI approach with Categorized models, we divide the 57 applications into two types: (1) 39 single-category applications--each uses labels that belong to one category and hence can benefit from one specialized model in the Categorized models scheme; (2) 18 multi-category applications--each uses labels that belong to multiple categories. For these applications, the Categorized models scheme feeds the API input to multiple specialized models and combines these models' outputs to form the API output. As shown in Table 4, the Categorized models scheme does offer an improvement over Generic models by considering which labels belong to an application's target classes, particularly for single-category applications. However, both ChameleonAPI and \(\text{ChameleonAPI}_{basic}\) perform better than Categorized models for both single-category and multi-category applications--the per-application customization in ChameleonAPI and \(\text{ChameleonAPI}_{basic}\) has paid off.

\begin{table} \begin{tabular}{c c c c} \hline \hline & True-False & Multi-Choice & Multi-Select \\ \hline Google API & 0.29 & 0.32 & 0.35 \\ Microsoft API & 0.30 & 0.33 & 0.32 \\ Amazon API & 0.31 & 0.33 & 0.36 \\ Best-of-all API* & 0.26 & 0.27 & 0.31 \\ \hline Generic models & 0.29 & 0.30 & 0.34 \\ Categorized models & 0.24 & 0.27 & 0.31 \\ ChameleonAPI\({}_{basic}\) & 0.19 & 0.22 & 0.27 \\ **ChameleonAPI** & **0.13** & **0.16** & **0.21** \\ \hline \hline \end{tabular} \end{table} Table 3: Average incorrect decision rate (IDR) among applications that make different types of decisions. The lower the better. The top half represents commercial APIs and their idealistic combinations; the bottom half represents open-source models.

\begin{table} \begin{tabular}{c c c} \hline \hline & Single-category & Multi-category \\ \hline Generic models & 0.32 & 0.28 \\ Categorized models & 0.28 & 0.27 \\ ChameleonAPI\({}_{basic}\) & 0.24 & 0.18 \\ ChameleonAPI & 0.17 & 0.14 \\ \hline \hline \end{tabular} \end{table} Table 4: Average IDR among single-category and multi-category applications. The lower the better.

Figure 6: ChameleonAPI reduces the incorrect decision rate (IDR) on the 57 applications that use Google's or Amazon's image-classification, text-classification, and object-detection APIs.

The above advantage of ChameleonAPI over \(\text{ChameleonAPI}_{basic}\) and Categorized models shows that the static analysis used in ChameleonAPI to extract not only what labels are used by the application, but also which labels belong to the same target class, the decision type, and the matching order, as described in Section 3.3, is worthwhile.

**Cost of obtaining customized models:** The customization effort of ChameleonAPI includes two parts: (1) extracting the decision-process summary from application source code, and (2) re-training the ML model. The first part takes a few seconds: on an Intel(R) Xeon(R) E5-2667 v4 CPU machine, our parser extracts the decision-process summary from every benchmark application within 10 seconds. The second part takes a few minutes, much faster than training a neural network from scratch. As shown in Figure 7, re-training the DNNs for the 21 applications in Figure 6(a) on a single RTX 3080 GPU takes 8 to 24 minutes. Focusing on a small portion of all possible labels (§2.4), ChameleonAPI fine-tunes pre-trained models using much less training data than the generic models and thus needs fewer iterations to converge.
Considering that a V100 GPU with similar processing GFLOPS to our RTX 3080 GPU costs only $2.38 per hour on Google Cloud [21], re-training an ML model for one application costs less than $1.

**Cost of hosting customized models:** For cloud providers, ChameleonAPI would incur a higher hosting cost than traditional ML APIs by serving a customized DNN for every application instead of a generic DNN for all applications. The extra cost includes more disk space to store customized neural network models. For example, each image-classification model in ChameleonAPI uses 115 MB of disk space. So, for \(n\) applications, \(115\cdot n\) MB of disk space may be needed to store ChameleonAPI's customized models. The extra cost also involves more GPU resources. A naive design of using one GPU to exclusively serve requests to one customized neural network model will likely lead to under-utilization of GPU resources. To serve different applications' customized models on one GPU, we need to pay attention to memory working-set and performance-isolation issues. In our experiments on an RTX 3080 GPU, loading an image-classification model from CPU to GPU RAM takes 18 to 40 ms (inference itself takes 10 to 35 ms with a batch size of 1). Fortunately, modern GPUs have sufficiently large RAM to host several requests to different customized models simultaneously: in our experiments, the peak memory consumption of one inference request is less than 2GB. Furthermore, the majority of the model-inference memory consumption comes from intermediate states, instead of the model itself. Consequently, the memory consumption of multiple inference requests on different models is similar to that on the same model. Of course, ChameleonAPI can take advantage of recent proposals to improve GPU sharing [70, 15, 54, 72] as well as to reduce the footprint of serving multiple DNNs [33]. These techniques could be advantageously employed by ChameleonAPI to determine the optimal degree of sharing among customized DNNs, and we leave them to future work. Finally, there is also the extra cost of needing more complex software to manage the DNN serving. For instance, ChameleonAPI needs to dynamically route each request to a GPU that serves the DNN of the application (see §3.4).

**Precision-recall tradeoffs:** For a trained ML model, it is common to vary the confidence-score thresholds in order to find the best precision-recall tradeoff. Thus, it is important that ChameleonAPI also achieves better precision-recall tradeoffs. Figure 8 shows the precision-recall results in each target class of a particular application, obtained by varying the detection threshold \(\theta\) (defined in §3.2) of two baselines (real APIs are excluded, because we cannot change their thresholds and their IDR is not as low as that of \(\text{ChameleonAPI}_{basic}\)). ChameleonAPI's tradeoffs are better than both baselines (and we observe similar results in other applications). Note that since ChameleonAPI's loss function uses an assumed \(\theta\), we do not vary \(\theta\) when testing it; instead, we re-train five DNNs of ChameleonAPI, each with a different \(\theta\), and test them with their respective thresholds.

**Understanding the improvement:** ChameleonAPI's unique advantage is that it factors in the decision process of an application, including not only the target classes but also the decision type and the matching order.
Next, we use two case studies to further reveal the underlying tradeoffs made by ChameleonAPI to achieve its improvement in application-decision accuracy.

Figure 7: Re-training time for applications in Figure 6(a).

Figure 8: Precision-recall trade-off for HeapsortCypher.

First, ChameleonAPI reduces errors related to different target classes differently, depending on their different roles in the application's decision process. This effect is particularly striking in Multi-Choice applications with the App-order matching order, where the first target class is always matched against the API output. Thus, when the correct target class is not the first one, falsely including a label that belongs to the first target class is more likely to be a critical error than other mis-classifications, because it will block the match of other target classes. To illustrate this, we consider the Multi-Choice application Aander-ETL. We increase the percentage of testing inputs whose correct action is the first target class or the last target class. Figure 9 shows that increasing the portion of inputs of the last target class (Person) generally leads to bigger gains for ChameleonAPI, whereas increasing the portion of the first target class (Landmark) does the opposite. This shows that the application itself already has good tolerance to mis-classifications on inputs that belong to the first target class, but not to mis-classifications on inputs that belong to later target classes, which is exactly where ChameleonAPI can help.

Second, recall from §3.2 that our loss function helps to minimize critical errors, even at the cost of missing labels that do not affect application decisions (_i.e._, non-critical errors). To show this, we define the _label error rate_ of an image as the fraction of the image's ground-truth labels that are missed by the DNN output (a label list). We consider IoTWor (explained in Table 5), which, similar to Aander-ETL, makes Multi-Choice decisions with the App-order matching order. The average label error rate of ChameleonAPI on our testing images is 0.21, which is slightly higher than \(\text{ChameleonAPI}_{basic}\)'s 0.18. This means ChameleonAPI makes more label-level mistakes than \(\text{ChameleonAPI}_{basic}\). However, our IDR (0.17) is 44% lower than that of \(\text{ChameleonAPI}_{basic}\), which means ChameleonAPI makes far fewer critical errors.

## 6 Related Work

Due to space constraints, we discuss related papers that have not been discussed earlier in the paper.

**Optimizing storage and throughput of DNN serving:** Various techniques have been proposed to optimize the delay, throughput, and storage of ML models via model distillation [39, 60, 53], pruning [26], or cascading [4, 7]. This line of work explores a different design space than ChameleonAPI: they design ML models with higher inference speed or smaller model size with minimum loss in accuracy. ChameleonAPI focuses on re-training existing ML models such that the rate of incorrect decisions of a given application is reduced.

**Application-side optimization:** Recent work also proposes to change the applications to better leverage existing ML APIs. One line of work [11, 12, 68] invokes ML APIs from different service providers to achieve high accuracy within a query cost budget. Another line of work aims to eliminate misuse of ML APIs in applications [64, 65]. They require changes to the application source code (_e.g._, changing the API input preparation, switching from an image-classification API to an object-detection API, etc.).
They are complementary to our work, because we customize the ML-API backend DNN and do not require changes to the application's source code.

**Measurement work on MLaaS:** Owing to their rising popularity, ML-as-a-Service platforms have attracted many measurement studies that seek to understand their accuracy [10], performance [69], robustness [28], and fairness [40]. However, these studies have so far not taken into account the ML applications that use ML APIs, and are thus different from our empirical study of ML applications in §2. Previous work that studies ML applications [64] did not look into the decision-making process or how ML API errors might affect different applications differently. Finally, a myriad of techniques have been studied to better manage and schedule GPU resources in ML training/serving systems (_e.g._, [13, 16, 17, 22, 25, 27, 32, 41, 44, 52, 58, 59, 66, 71, 73]). They aim for different goals than ChameleonAPI, but these techniques can be used to help ChameleonAPI train and serve the application-specific ML models.

## 7 Conclusion

ML APIs are popular for their accessibility to application developers who do not have the expertise to design and train their own ML models. In this paper, we study how the generic ML models behind ML APIs may affect different applications' control-flow decisions in different ways, and how some ML API output errors may or may not be critical due to the application's decision-making logic. Guided by this study, we have designed ChameleonAPI, which offers both the accuracy advantage of a custom ML model and the accessibility of the traditional ML API.
2310.16295
Instance-wise Linearization of Neural Network for Model Interpretation
Neural networks have achieved remarkable successes in many scientific fields. However, the interpretability of neural network models is still a major bottleneck to deploying such techniques in our daily lives. The challenge stems from the non-linear behavior of the neural network, which raises a critical question: how does a model use input features to make a decision? The classical approach to address this challenge is feature attribution, which assigns an importance score to each input feature and reveals its importance for the current prediction. However, current feature attribution approaches often indicate the importance of each input feature without detailing how the features are actually processed by the model internally. These attribution approaches thus raise the concern of whether they highlight the correct features for a model prediction. For a neural network model, the non-linear behavior is often caused by the non-linear activation units of the model. However, the computation behavior of a single prediction from a neural network model is locally linear, because one prediction has only one activation pattern. Based on this observation, we propose an instance-wise linearization approach that reformulates the forward computation process of a neural network prediction. This approach reformulates different layers of convolution neural networks into linear matrix multiplications. Aggregating all layers' computations, the complex operations of a convolution neural network prediction can be described as a linear matrix multiplication $F(x) = W \cdot x + b$. This equation not only provides a feature attribution map that highlights the importance of the input features but also tells exactly how each input feature contributes to a prediction. Furthermore, we discuss the application of this technique in both supervised classification and unsupervised learning of parametric t-SNE dimension reduction.
Zhimin Li, Shusen Liu, Kailkhura Bhavya, Timo Bremer, Valerio Pascucci
2023-10-25T02:07:39Z
http://arxiv.org/abs/2310.16295v1
# Instance-wise Linearization of Neural Network for Model Interpretation

###### Abstract

Neural networks have achieved remarkable successes in many scientific fields. However, the interpretability of neural network models is still a major bottleneck to deploying such techniques in our daily lives. The challenge stems from the non-linear behavior of the neural network, which raises a critical question: how does a model use input features to make a decision? The classical approach to address this challenge is feature attribution, which assigns an importance score to each input feature and reveals its importance for the current prediction. However, current feature attribution approaches often indicate the importance of each input feature without detailing how the features are actually processed by the model internally. These attribution approaches thus raise the concern of whether they highlight the correct features for a model prediction. For a neural network model, the non-linear behavior is often caused by the non-linear activation units of the model. However, the computation behavior of a single prediction from a neural network model is locally linear, because one prediction has only one activation pattern. Based on this observation, we propose an instance-wise linearization approach that reformulates the forward computation process of a neural network prediction. This approach reformulates different layers of convolution neural networks into linear matrix multiplications. Aggregating all layers' computations, the complex operations of a convolution neural network prediction can be described as a linear matrix multiplication \(F(x)=W\cdot x+b\). This equation not only provides a feature attribution map that highlights the importance of the input features but also tells exactly how each input feature contributes to a prediction. Furthermore, we discuss the application of this technique in both supervised classification and unsupervised learning of parametric t-SNE dimension reduction.

## 1 Introduction

Neural network techniques have achieved remarkable success across many scientific fields [13, 26, 30, 6]. Furthermore, many applications (e.g., autopilot, loan approval, cancer detection, criminal justice, ...) that utilize this technology are starting to enter our daily lives, and even more are expected in the future. However, the uninterpretable behavior of neural network predictions causes many concerns. For example, a few countries' governments have recently published AI acts [1, 38] to regulate AI systems, requiring that automated systems provide an explanation for why they make a particular decision. Meanwhile, improving the interpretability of neural networks can also benefit scientific fields through knowledge discovery, such as drug discovery [36, 2].

A key challenge of neural network interpretability is the non-linear behavior of the model. This non-linear behavior is caused by the different activation patterns triggered by different inputs, and these diverse activation patterns lead to behavior that is hard to interpret. A classical approach to explain a neural network prediction is the feature attribution method [16, 22], which assigns an importance score to each input feature and highlights the most important features. However, different feature attribution approaches may end up with different feature maps, and how input features are processed by the neural network model internally remains mysterious [41]. For a single neural network prediction, the neural network model has only a single activation pattern.
The processing of the input features by the neural network, layer by layer during the prediction, is therefore linear. This raises the question of whether the computation process of a neural network can itself generate an answer to how each input feature contributes to the prediction. In this study, we propose an approach that reformulates the forward computation process of a neural network prediction. Our approach is based on the observation that a neural network prediction has only one activation pattern. As Figure 1 shows, the non-linear ReLU activation components of a neural network prediction can be considered linear components. Therefore, the computation a neural network performs on a single prediction is a linear process. The whole computation process can be reformulated and aggregated into a linear equation \(F(x)=W\cdot x+b\). We find that this linear equation can be used to explain how the input features are used by the neural network model to make a prediction.

In this paper, we discuss how to reformulate the forward propagation of a neural network layer by layer and discuss the potential challenges of this computation. We propose an efficient approach to calculate \(W^{*}\) and \(b^{*}\), and discuss the role of \(W^{*}\) and \(b^{*}\) during model prediction. Our approach is flexible and can be applied to supervised or unsupervised learning tasks. At the end, we demonstrate a use case of how neural network models capture input features in different layers of the model and how researchers can use our technique to understand the behavior of neural network models.

## 2 Related Work

Because of the black-box nature of the neural network model, understanding how it makes predictions is an important but challenging topic. In this section, we discuss previous techniques that have been proposed in the literature to address this challenge.

### Feature Attribution

Feature attribution is a classical approach that generates a heat map to highlight the importance of input features for a model prediction. Different methodologies have been designed to calculate the heat map. Gradient-based approaches [31, 34, 25, 33] calculate the model gradient of an input with respect to the target label and use or accumulate the gradient values to highlight the importance of each input feature. Perturbation-based approaches [20, 7, 8, 39, 22] ablate or modify a part of the input and observe the output variation to understand the contribution of each input feature to the prediction. Other approaches, such as SHAP [16], DeepLIFT [29], and LRP [3], provide a feature attribution map from different angles. However, because of the lack of ground truth, whether a feature attribution approach highlights the truly important regions of an input is still under exploration.

Understanding the features captured by a neuron of a neural network can also improve the interpretability of the network. Feature visualization [19] optimizes an input image by maximizing a neuron's activation value; the output of the optimization provides information about what features are captured by the neuron. Previous researchers [4, 40] have also measured the alignment between individual neuron units and semantic concepts. Fong et al. [7] apply input perturbations to measure the reaction of a neuron to these perturbations and capture the input regions that contribute most to these activations.
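As context for the gradient-based family above, here is a minimal vanilla-gradient saliency sketch in PyTorch (a generic illustration, not a verbatim re-implementation of any cited method):

```python
import torch

def gradient_saliency(model, x, target_label):
    """Gradient of the target logit w.r.t. the input pixels, whose
    magnitude is used as a feature-attribution heat map."""
    x = x.clone().detach().requires_grad_(True)
    logit = model(x.unsqueeze(0))[0, target_label]
    logit.backward()
    return x.grad.abs()   # large values mark influential input features
```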
### Local Linearity of Neural Networks

Researchers have investigated the local linearity of ReLU networks, mainly focusing on the complexity of the model, such as approximating the number of linear regions [23, 28, 9, 17, 21, 27]. Previous research has also aligned local linearity with input perturbations to understand a model's prediction robustness and generalization. Novak et al. [18] perform large-scale experiments on different neural network models to show that the input-output Jacobian norm of samples is correlated with model generalization. Lee et al. [15] design an algorithm to expand the local linear regions of a neural network model.

## 3 Instance-wise Linearization

In this section, we discuss how to transform the different layers of a neural network into linear operations for a single prediction. After transforming all layers, we demonstrate how to aggregate all linear operations into a single linear matrix multiplication (1). In this study, we mainly focus our discussion on convolution neural networks, which are the de-facto setting for many computer vision tasks.

\[F(x)=W\cdot x+b \tag{1}\]

A series of operations of a neural network can be represented as a nested function \(F(x)=f_{n}\cdot f_{n-1}\cdots f_{2}\cdot f_{1}\), where n is the number of layers in the neural network. The detailed operation of each layer can be generalized as \(f_{i}=\sigma(W_{i}\cdot x_{i}+b_{i})\), where \(W_{i}\), \(x_{i}\), and \(b_{i}\) are the weight, feature representation, and bias of the i-th layer, and \(\sigma\) is the activation function of the neural network model.

Figure 1: A prediction of a neural network model has only one activation pattern, and the complex prediction operation can be simplified as a linear matrix multiplication.

### Fully Connected Layer

To explain the above statement, we use a fully connected layer as an example. Our discussion mainly focuses on activation functions such as ReLU, GELU, SELU, and ELU.

\[y=\sigma(W_{i}\cdot x_{i}+b_{i}) \tag{2}\]

\(W_{i}\cdot x_{i}\) is the dot product operation, \(x_{i}\) is the feature vector, and \(\sigma\) is the activation function that changes the computation output. For each activation function, the computation can be transformed into a linear matrix multiplication case by case, and equation (2) is rephrased as \(y=\lambda(W_{i}\cdot x_{i}+b_{i})\). Here, the variable \(\lambda\) is a re-scale factor that represents the impact of the activation function on the computation result.

### Convolution Layer

The convolution layer can be considered a sparse fully connected layer and transformed into a linear matrix multiplication. Assume the kernel size is (c, k, k) and the stride size is s; the input tensor with size \(channel\times width\times height\) can be flattened into a (channel \(\times\) width \(\times\) height)-dimensional vector \(x\). Each output element of a convolution operation can be considered a dot product \(w_{i}\cdot x+b_{i}\). Here, \(w_{i}\) has the same size as the vector \(x\), and most of its elements are zero except for the elements operated on by the kernel. The overall convolution operation can be converted into \(W\cdot x+b\), where \(W\) is the Toeplitz matrix consisting of the \(w_{i}\), and \(b\) consists of the \(b_{i}\). The convolution operation also comes with the activation function, so the overall equation is \(ReLU(W\cdot x+b)\). Similar to equations (1) and (2), we can rephrase the above equation as \(W^{{}^{\prime}}\cdot x+b^{{}^{\prime}}\).
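To illustrate the construction (Figure 2 below walks through the same 2x2, stride-2 example by hand), here is a small PyTorch verification sketch of our own: because the convolution is linear, its matrix \(W\) can be read off by feeding the standard basis vectors of the flattened input.

```python
import torch
import torch.nn as nn

# A 2x2 convolution with stride 2 on a 1-channel 4x4 input, bias set to
# zero, mirroring the example of Figure 2.
conv = nn.Conv2d(1, 1, kernel_size=2, stride=2, bias=False)

# Feed the 16 basis vectors of the flattened input; the responses are
# the columns of the Toeplitz-style matrix W.
basis = torch.eye(16).reshape(16, 1, 4, 4)
with torch.no_grad():
    W = conv(basis).reshape(16, -1).t()   # shape (4, 16)

    x = torch.randn(1, 1, 4, 4)
    out_conv = conv(x).flatten()          # the convolution itself
    out_mat = W @ x.flatten()             # the rephrased form W . x
print(torch.allclose(out_conv, out_mat, atol=1e-6))   # True
```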
In Figure 2, we demonstrate how to transform a 2x2 convolution kernel with stride 2 on a 4x4 input matrix into a dot product between a matrix and a vector; the bias term is set to zero. The original 4x4 matrix can be flattened into a 16-dimensional vector \(x\). The first convolution kernel operation on the matrix can be converted into a vector \(w_{1}\), and the overall convolution operation is converted into a matrix \(W\).

Figure 2: A convolution operation can be rephrased as a matrix-vector multiplication. Performing a 2x2 convolution operation with stride size 2 on a 4x4 input matrix can be rephrased as a dot product between a matrix and the input vector, \(W\cdot x+b\).

### Pooling Layer

The pooling operation in a neural network can be described as a matrix multiplication \(W\cdot x\). Assume the pooling kernel size is (c, k, k) and the stride size is s; the input tensor with size \(channel\times height\times width\) can be flattened into a (channel \(\times\) height \(\times\) width)-dimensional vector \(x\). We use maximum pooling as an example to discuss the transformation process; other pooling operations (e.g., average pooling) are similar. An element of the output tensor is a dot product with an indicator vector \(w_{(i,j,k)}\) (where i, j, k is the index of the output tensor) that indicates the elements of the input \(x\) selected by the maximum pooling operation. Therefore, the pooling operation can be rephrased as \(W\cdot x\). Figure 3 demonstrates a case that transforms a 2x2 maximum pooling operation with stride 2 on a 4x4 matrix into a dot product between a matrix and a vector. The original 4x4 matrix can be flattened into a 16-dimensional vector \(x\). The first maximum pooling operation on the matrix can be converted into a vector \(w_{1}\), and the overall maximum pooling operation is converted into a matrix \(W\).

Figure 3: A maximum pooling operation can be rephrased as a matrix-vector multiplication. Performing a 2x2 maximum pooling operation with stride size 2 on a 4x4 input matrix can be rephrased as \(W\cdot x\).

### Skip Connection

The skip connection is a critical component of the ResNet [10] architecture. Figure 4 shows an example of the skip connection, whose equation can be represented as \(F(x)+x\). In the equation, \(F(x)\) often consists of convolution layers, batch normalization, ReLU activations, and fully connected layers. As discussed in the previous sections, these layers can be simplified into a linear matrix multiplication \(F(x)=W_{skip}\cdot x+b_{skip}\). The overall equation can be rephrased as

\[(W_{skip}+I)\cdot x+b_{skip}\]

Merging the identity matrix \(I\) into \(W_{skip}\), the equation can be simplified to \(W^{{}^{\prime}}_{skip}\cdot x+b_{skip}\).

Figure 4: A skip connection component in the ResNet architecture.

### Batch Normalization Layer

Batch normalization [12] is a critical component of neural network models that makes training easy and efficient. The inference process of the batch normalization layer is an element-wise linear transformation, as in equation (3). The equation used to perform batch normalization on a single value is:

\[y=\frac{x_{i}-\mu}{\sqrt{\sigma^{2}+\epsilon}}*\gamma+\beta=\frac{\gamma}{\sqrt{\sigma^{2}+\epsilon}}*x_{i}+\frac{-\mu\gamma}{\sqrt{\sigma^{2}+\epsilon}}+\beta \tag{3}\]

In the equation, \(\mu\), \(\sigma^{2}\), \(\gamma\), and \(\beta\) are calculated during the training process. These variables are available during network inference.
This equation can be rephrased as \(y=w_{i}^{norm}*x_{i}+b_{i}^{norm}\), with \(w_{i}^{norm}=\frac{\gamma}{\sqrt{\sigma^{2}+\epsilon}}\) and \(b_{i}^{norm}=\frac{-\mu\gamma}{\sqrt{\sigma^{2}+\epsilon}}+\beta\). The batch normalization layer operation can then be represented as:

\[W^{norm}\odot x+b^{norm}\]

### Layer Aggregation

We have mentioned that a neural network can be defined as a nested function \(F(x)=f_{n}\cdot f_{n-1}\cdots f_{2}\cdot f_{1}\), and each layer's function can be generalized as:

\[f_{i}=\sigma(W_{i}\cdot x_{i}+b_{i})\]

With our previous description, for a single prediction, each layer can be replaced by a linear matrix multiplication

\[f_{i}=W^{{}^{\prime}}_{i}\cdot x_{i}+b^{{}^{\prime}}_{i}\]

and the overall equation can be rephrased as

\[F(x)=W^{{}^{\prime}}_{n}\cdot W^{{}^{\prime}}_{n-1}\cdots W^{{}^{\prime}}_{2}\cdot W^{{}^{\prime}}_{1}\cdot x+\sum_{i=1}^{n-1}(W^{{}^{\prime}}_{n}\cdots W^{{}^{\prime}}_{i+1}\cdot b^{{}^{\prime}}_{i})+b^{{}^{\prime}}_{n}\]

After linearization of a neural network prediction, the prediction is represented as \(F(x)=W\cdot x+b\), where \(W=W^{{}^{\prime}}_{n}\cdot W^{{}^{\prime}}_{n-1}\cdots W^{{}^{\prime}}_{2}\cdot W^{{}^{\prime}}_{1}\) and \(b=\sum_{i=1}^{n-1}(W^{{}^{\prime}}_{n}\cdots W^{{}^{\prime}}_{i+1}\cdot b^{{}^{\prime}}_{i})+b^{{}^{\prime}}_{n}\). For piece-wise linear activation functions, the inference behavior of a neural network model is locally linear. In a local region, the behavior of the neural network on input x is equal to \(F(x)=W\cdot x+b\).

**Lemma 1**: _For a network with piece-wise linear activation functions and \(x\in R^{m}\), there is a maximum \(|\delta|\in R^{m}\) such that for all \(\theta\) with \(|\theta|<|\delta|\), \(x+\theta\) has the same activation pattern as \(x\). The matrix \(W\) can be used to explain the predictions of both \(x+\theta\) and \(x\)._

Under this condition, the feature attribution \(W\) is equal to the input-output Jacobian matrix \(J_{F}(x)\), and \(W\) not only tells the sensitivity of the input with respect to the model prediction but also represents exactly how the neural network uses the input to make the prediction. ReLU networks split the input space into linear regions [17]; within each linear region, all samples share the same explanation matrix \(W\).

\[J_{F}(x)=[\frac{\partial F(x)_{1}}{\partial x},\frac{\partial F(x)_{2}}{\partial x},\ldots,\frac{\partial F(x)_{n}}{\partial x}]^{T}=W\]

For a piece-wise linear activation function, a prediction of the neural network has an equivalent input-output linear mapping \(y=W\cdot x+b\). However, for activation functions such as GELU, SELU, and ELU, the Jacobian matrix cannot be used to calculate \(W\); instead, the equation \(y=W\cdot x+b\) needs to be calculated layer by layer.

### Ensemble Model

A prediction of an ensemble neural network model can be represented as

\[F(x)=\sum_{i=1}^{n}a_{i}F_{i}(x)\]

where \(F_{i}\) is an individual neural network and \(a_{i}\) is the share assigned to the prediction of the \(i\)-th neural network. As discussed in the previous section, the equation can be rephrased as

\[F(x)=\sum_{i=1}^{n}a_{i}(W^{*}_{i}\cdot x+b^{*}_{i})=(\sum_{i=1}^{n}a_{i}W^{*}_{i})\cdot x+\sum_{i=1}^{n}a_{i}b^{*}_{i}\]

## 4 Experiment

A neural network prediction can be represented as a linear matrix multiplication \(F(x)=W\cdot x+b\). How \(W\cdot x\) and \(b\) impact the decision of a neural network prediction is important for interpreting the decision of the network model.
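A minimal sketch of the layer aggregation just described, for a small ReLU network: the ReLU of each layer is folded into a 0/1 rescale \(\lambda\) of the running linear map, and the resulting \(W\) can be cross-checked against both \(F(x)\) and the Jacobian \(J_{F}(x)\); this is our own illustrative re-implementation, not the authors' code.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(8, 16), nn.ReLU(),
                    nn.Linear(16, 16), nn.ReLU(),
                    nn.Linear(16, 3))
x = torch.randn(8)

# Propagate (W, b) through the layers; each ReLU contributes the 0/1
# rescale factor lambda determined by this input's activation pattern.
with torch.no_grad():
    W, b, h = torch.eye(8), torch.zeros(8), x
    for layer in net:
        if isinstance(layer, nn.Linear):
            W = layer.weight @ W
            b = layer.weight @ b + layer.bias
        else:                              # nn.ReLU
            lam = (h > 0).float()          # activation pattern at this layer
            W = lam.unsqueeze(1) * W       # lambda rescales the rows of W
            b = lam * b
        h = layer(h)
    assert torch.allclose(net(x), W @ x + b, atol=1e-5)

# For this piece-wise linear network, W also equals the Jacobian J_F(x).
J = torch.autograd.functional.jacobian(net, x)
assert torch.allclose(J, W, atol=1e-5)
```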
## 4 Experiment

A neural network prediction can be represented as a linear matrix multiplication \(F(x)=W\cdot x+b\). How \(W\cdot x\) and \(b\) impact the decision is therefore important for interpreting the model's prediction. To evaluate the impact of these two terms on a network prediction, we perform experiments on multiple neural network architectures trained on the MNIST, CIFAR-10, CIFAR-100, and ImageNet datasets.

### Decomposing the Model Prediction

The previous section showed that a prediction can be described as \(F(x)=W\cdot x+b\). To evaluate the impact of the two terms, we compare the original prediction accuracy of the neural network \(F(x)\) with the predictions made by \(W\cdot x\) and by \(b\) alone. In the experiments, we use the label flip rate (LFR) to track the prediction difference between the selected term and the original prediction. The label flip rate is defined as the fraction of predictions that change during the decision process; a smaller LFR indicates a prediction closer to that of the original model.

We use stochastic gradient descent (SGD) to train the neural network models. For the MNIST dataset, each model is trained for 50 epochs; the models used for CIFAR-10 and CIFAR-100 are trained for 200 epochs. We use the pre-trained models from PyTorch to evaluate the ImageNet dataset.

LeNet300 and LeNet5 are the classical models used for the MNIST dataset; both are trained without batch normalization layers. In Table 1, we compare the two models' prediction results using \(b\) and \(W\cdot x\). Here, \(W\cdot x\) is the model accuracy without the bias term and \(W\cdot x_{LFR}\) is its label flip rate relative to _accuracy_, the original accuracy of the model. The evaluation shows that after removing the bias term, the label flip rates \(W\cdot x_{LFR}\) are 0.0082 and 0.0012, so the \(W\cdot x\) accuracy is similar to the original accuracy. On the other hand, the contribution of \(b\) to the prediction is insignificant and \(b_{LFR}\) is large.

However, this observation on MNIST does not generalize to larger models and more complex datasets. For CIFAR-10 and CIFAR-100, we use popular convolutional architectures (VGG [32], ResNet [10], and DenseNet [11]) to compare the performance of \(W\cdot x\) and \(b\). The results in Table 2 and Table 4 show that the bias term \(b\) dominates the prediction behavior of these models. Except for networks such as LeNet5 and AlexNet, where both \(b\) and \(W\cdot x\) have a relatively large impact, the remaining models show that \(b\) has the dominant impact on the model prediction: the information in \(b\) alone can determine the majority of the model's predictions, whereas \(W\cdot x\) alone is not sufficient to do so.

The dominant contribution of \(b\) largely comes from the batch normalization layers. Each value of \(b_{i}\) is constant and determined once the model is trained; the quantity that varies is \(W^{\prime}_{i}\), which is triggered by the activations of different inputs. Thus, \(W=W^{{}^{\prime}}_{n}\cdot W^{{}^{\prime}}_{n-1}\cdots W^{{}^{\prime}}_{2}\cdot W^{{}^{\prime}}_{1}\) contains the unique footprint of a model's reaction to a prediction. An alternative approach to improving the explanatory power of \(W\) is to train a neural network without batch normalization layers. Since the predictions of such networks can be represented as \(F(x)=W\cdot x+b_{n}\), where \(b_{n}\) is the bias term of the last layer, we can use \(W\) to explain the network behavior directly. A potential approach to improving the performance of such models includes techniques such as weight normalization [24], careful initialization, and other batch-normalization-free techniques. Previous research has demonstrated that networks trained without batch normalization can still achieve state-of-the-art performance [5].
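For piecewise-linear networks, the decomposition used in these experiments can be reproduced with automatic differentiation. The following PyTorch sketch assumes a model that maps flat input vectors to logits (the function names are ours):

```python
import torch

def decompose(model, x):
    """Recover W = J_F(x) and b = F(x) - W x for a piecewise-linear model,
    so that F(x) = W @ x + b holds exactly in x's linear region."""
    x = x.detach().requires_grad_(True)
    y = model(x.unsqueeze(0)).squeeze(0)
    W = torch.stack([torch.autograd.grad(y[c], x, retain_graph=True)[0]
                     for c in range(y.numel())])
    return W, y.detach() - W @ x.detach()

def label_flip_rate(model, xs):
    """Fraction of samples whose argmax changes when only W @ x is scored."""
    flips = 0
    for x in xs:
        W, b = decompose(model, x)
        full = (W @ x + b).argmax()         # equals model(x).argmax() exactly
        flips += int(full != (W @ x).argmax())
    return flips / len(xs)
```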
## 5 Applications

In this section, we discuss the application of our proposed approach to a supervised task, an unsupervised task, and ensemble neural network model predictions.

### Supervised Learning - Image Prediction

For the MNIST dataset, we use our method to understand the decisions of LeNet5, whose encoder layers do not include bias terms. The accuracy of the model is 0.9904. For MNIST, the \(W\) for the final prediction is a \((10,784)\) matrix, and for each digit outcome \(W\) gives an explanation: each row of \(W\) tells how the neural network uses the input features to compute the prediction score for that digit. Our feature attribution has a natural semantic meaning. The value assigned to an input pixel, multiplied by the pixel value, is that pixel's contribution to the prediction: a positive value contributes positively to the prediction and a negative value contributes negatively.

Figure 6 demonstrates how LeNet5 processes the input digit images 1 and 7 for different label decisions. In the heat map visualization, red indicates a positive contribution and blue a negative contribution. The results show that both digits are recognized by the neural network in the correct shape. A further property of our approach is that computing the feature attribution requires first computing the contribution of the input to each neuron in the network; the final feature attribution is a summary of all the preceding neurons' contributions. This property makes it convenient not only to understand how the network uses input features for its decision, but also to see how each individual neuron uses the input features to produce its activation output. In Figure 7, we display the feature attribution of digit 7 and how the top 5 most activated neurons in different layers of LeNet5 use the input features.

### Unsupervised Learning - Parametric Dimension Reduction

Dimension reduction methods such as t-SNE are a popular way to understand the structure of a neural network model. However, the classical t-SNE algorithm is non-parametric, so how an input is mapped into the two-dimensional space is unknown; this uninterpretable process can cause confusion and misleading conclusions during analysis. In contrast, parametric t-SNE performs a similar operation by training a neural network model to carry out the dimension reduction. In this work, we mainly discuss the neural network architecture in Figure 5 for performing the dimension reduction. Many works [37, 35, 14] have implemented parametric t-SNE. The loss function (equation (4)) used to train parametric t-SNE minimizes the difference between the probability distributions of the original high-dimensional data and the projected low-dimensional data.
\[L(\theta)=\sum_{i\neq j}p_{ij}\log\frac{p_{ij}}{q_{ij}} \tag{4}\]

\[p_{j|i}=\frac{\exp(-||x_{i}-x_{j}||^{2}/2\sigma_{i}^{2})}{\sum_{k\neq i}\exp(-||x_{i}-x_{k}||^{2}/2\sigma_{i}^{2})} \tag{5}\]

\[p_{ij}=\frac{p_{j|i}+p_{i|j}}{2N} \tag{6}\]

\[q_{ij}=\frac{\exp(-||f_{\theta}(x_{i})-f_{\theta}(x_{j})||^{2}/2\sigma_{i}^{2})}{\sum_{k\neq i}\exp(-||f_{\theta}(x_{i})-f_{\theta}(x_{k})||^{2}/2\sigma_{i}^{2})} \tag{7}\]

Because of the non-linear properties of neural network models, understanding the relationship between the input and output of networks trained with such loss functions is difficult, and limited work in the literature focuses on how the input contributes to the final two-dimensional output. Gradients are the common approach to performing feature attribution for neural network models; however, it is difficult to generate a gradient for a single sample under the loss function used to train a network for parametric t-SNE. Furthermore, parametric t-SNE [37] may first pretrain an encoder in an unsupervised manner and then use that network to perform the dimension reduction. This overall process involves two neural network models, which makes interpreting parametric t-SNE difficult. Our approach is flexible enough to fill this gap by concatenating the two networks' matrix multiplications to understand the parametric dimension reduction process, so that the final projection can still be phrased as \(y=W\cdot x+b\) for each sample.

Figure 5: The neural network architecture used to perform autoencoder training and parametric t-SNE dimension reduction.

Which features are used during the dimension reduction process is important for interpreting the result. In Fig. 8, we apply parametric t-SNE to the iris dataset and use our proposed method to generate the feature attribution for the dimension reduction result. t-SNE often projects samples that are similar to each other to nearby locations. In the visualization, samples (d), (e), and (f) are nearby and their feature attribution results are similar; samples (a), (b), and (c) are also nearby but have very different feature attribution results.

## 6 Conclusion

In this work, we propose an approach that reformulates the forward propagation computation to understand a neural network prediction. Our study finds that a neural network prediction can be rephrased as a series of matrix multiplications: for each input instance, the output has a straightforward mapping that can be phrased as \(y=W\cdot x+b\). Finally, we demonstrate the flexibility of our approach by showing how it helps us understand a supervised classification task and an unsupervised dimension reduction task.

Figure 6: How the neural network uses input features to make a prediction on the MNIST dataset with LeNet5.

Figure 7: The features captured by the top 5 positive neurons and 5 negative neurons, with their contributions to the model's final prediction, in different layers of the LeNet5 model.

Figure 8: The iris flower dataset projected into two-dimensional space with parametric t-SNE. Our feature attribution highlights the critical features used for the dimension reduction.

## Acknowledgement

This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
The work is partially supported by the Laboratory Directed Research and Development Program under tracking code 23-ERD-029.
2308.02914
Anomaly Detection in Global Financial Markets with Graph Neural Networks and Nonextensive Entropy
Anomaly detection is a challenging task, particularly in systems with many variables. Anomalies are outliers that statistically differ from the analyzed data and can arise from rare events, malfunctions, or system misuse. This study investigated the ability to detect anomalies in global financial markets through Graph Neural Networks (GNN) considering an uncertainty scenario measured by a nonextensive entropy. The main findings show that the complex structure of highly correlated assets decreases in a crisis, and the number of anomalies is statistically different across nonextensive entropy parameters before, during, and after the crisis.
Kleyton da Costa
2023-08-05T16:12:26Z
http://arxiv.org/abs/2308.02914v2
# Anomaly Detection in Global Financial Markets with Graph Neural Networks and Nonextensive Entropy

###### Abstract

Anomaly detection is a challenging task, particularly in systems with many variables. Anomalies are outliers that statistically differ from the analyzed data and can arise from rare events, malfunctions, or system misuse. This study investigated the ability to detect anomalies in global financial markets through Graph Neural Networks (GNN) considering an uncertainty scenario measured by a nonextensive entropy. The main findings show that the complex structure of highly correlated assets decreases in a crisis, and the number of anomalies is statistically different across nonextensive entropy parameters before, during, and after the crisis.

## 1 Introduction

One of the main challenges in different fields is anomaly detection Chandola et al. (2009), especially in systems influenced by high-dimensional data Deng and Hooi (2021). When these variables are observed in multivariate (or high-dimensional) time series data, the task becomes even more difficult. Anomaly detection in high-dimensional data can be applied to the detection of financial fraud in bank transfers, the monitoring of patients in intensive care units, the monitoring of industrial processes, the identification of outbreaks of infectious diseases, and traffic control on highways and railways.

Anomaly detection is related to identifying abnormal behaviors in a dataset. We consider outliers the data points that statistically differ from the data. These outliers can arise from at least three causes: (i) the occurrence of rare events (such as an economic crisis or a pandemic), (ii) malfunction (such as in a system for manufacturing products), or (iii) misuse of a system (such as an attempted invasion or a deviation of functionality). Anomalies can be represented through various types of data, and in this study our research interest is focused on networks. The use of networks has gained prominence over the past decades for their ability to relate social, physical, biological, and economic phenomena in a relatively simple way. Networks can be mathematically represented through graphs, in which a set of nodes is related through edges, encoding the information in the network. The nodes represent elements that are part of a certain set \(V\), and the edges are the representation of the connections between the elements of \(V\), defined as a set of edges \(E\). Combining the elements of \(V\) and \(E\) yields a graph \(G\). Thus, a given graph \(G\) is defined by \(G=(V_{i},E_{j}),\;\forall\,i=(1,2,\ldots,n)\) and \(j=(1,2,\ldots,m)\).

This study aims to investigate the ability to detect anomalies in global financial markets through a Graph Autoencoder (GAE), that is, an artificial neural network in which an encoder builds a compact latent representation of the graph \(G\) and a decoder reconstructs the original graph from that compact latent model. To achieve this goal, we used non-extensive entropy to measure the level of uncertainty in global financial markets before, during, and after a crisis period.
The paper is organized as follows: Section 2 presents the literature review conducted to identify gaps, state-of-the-art methods, and potentialities, and then describes the methodology, data, and models selected for evaluation; Section 3 presents the main results and discussion; and, finally, Section 4 presents the conclusion and ideas for future research.

## 2 Background and Related Works

Applications of GNNs for anomaly detection in time series can be based on telemetry data, that is, data for monitoring the status of a particular system. Xie et al. (2021) analyze anomaly detection in satellite systems with an accuracy of 98%, demonstrating a high level of efficiency and robustness. Additionally, works such as Li and Jung (2021) present graph-based strategies for detecting anomalies based on spurious relationships. Ahmed et al. (2017) apply a set of anomaly detection methods to the Australian Securities Exchange (ASX), and their experimental results suggest that LOF (Local Outlier Factor) and CMGOS (Clustering-based Multivariate Gaussian Outlier Score) are the best-performing anomaly detection techniques.

A graph neural network (GNN) refers to any artificial neural network that receives graph data as input Wu et al. (2022), and this approach has been effectively applied to anomaly detection tasks Wang and Yu (2022). According to Wang et al. (2022), the challenges for applying GNNs to financial data are the need for more transparent results (explainability); diversification in the types of tasks to which GNNs are applied, considering that node-level tasks are more prominent than edge-level and graph-level tasks; the construction of benchmark datasets that enable the reproduction of generated results; and, finally, the need for GNN-based systems for financial applications to have a scalable infrastructure (enabling applications to real-world problems).

Tsallis entropy is used in studies that evaluate financial markets. Wang and Shang (2018) present an approach called multiscale cross-distribution entropy based on Tsallis entropy, and their results indicate better performance in providing information about the relationship between two assets than the multiscale cross-sample entropy method Yin and Shang (2014). The aim of this study is to contribute to the literature on the use of GNNs for anomaly detection in global financial markets, using Tsallis entropy as an anomaly score.

The remainder of this section presents the methodology applied in this work.

### Network Structure of Global Financial Markets

#### 2.1.1 Global Financial Market Graph

The global financial market is formed by the relationships between assets that belong to each local financial market. In turn, each local financial market is a set of assets traded between companies seeking to raise capital and investors. We assume that any asset belonging to a particular local financial market is also part of the global financial market. The graph notation for the global financial market is

\[\mathcal{G}=(\mathcal{K}_{i},\mathcal{W}_{j}),\;\forall i=(1,\dots,k)\ \wedge\ j=(1,\dots,w) \tag{1}\]

where \(\mathcal{G}\) is the global financial market, \(\mathcal{K}\) is the set of all assets with \(k\in\mathcal{K}\) representing the total number of assets in \(\mathcal{G}\), and \(\mathcal{W}_{j}\) is the set of all asset edges with \(w\in\mathcal{W}\) representing the total number of edges in \(\mathcal{G}\).
#### 2.1.2 Adjacency Matrix from Correlation Coefficients

A data structure commonly used in graph theory is the adjacency matrix. The adjacency matrix (Eq. 2) is an \(n\times n\) matrix, where \(n\) is the number of nodes in the graph and each entry \(a_{ij}\) indicates whether there is an edge between two nodes:

\[A_{nn}=\left|\begin{array}{ccc}a_{11}&\dots&a_{1n}\\ \vdots&\ddots&\vdots\\ a_{n1}&\dots&a_{nn}\end{array}\right|\qquad a_{ij}=\left\{\begin{array}{ll}0&if\ \nexists\left(u,v\right)\\ 1&if\ \exists\left(u,v\right)\end{array}\right. \tag{2}\]

The correlation matrix between the assets in \(\mathcal{K}\) defines a complete graph \(\mathcal{G}\) in which each element of \(\mathcal{W}\) is a correlation. The correlation between two assets \(u\) and \(v\) is the average of the products of their standardized values:

\[cor(u,v)=\frac{cov(u,v)}{std(u)\cdot std(v)} \tag{3}\]

where the covariance of \((u,v)\) is the mean product of the centered values, that is,

\[cov(u,v)=\frac{\sum_{i=1}^{p}(u_{i}-\bar{u})(v_{i}-\bar{v})}{p} \tag{4}\]

The correlation matrix of the assets belonging to the global financial market was used to construct \(\mathcal{G}\). The matrix \(S_{kk}\) is a \(k\times k\) matrix, for all \(k\in\mathcal{K}\), where each entry \(s_{kk}\) is defined by a threshold value (\(\tau\)):

\[S_{kk}=\left|\begin{array}{ccc}s_{11}&\dots&s_{1k}\\ \vdots&\ddots&\vdots\\ s_{k1}&\dots&s_{kk}\end{array}\right|\qquad s_{kk}=\left\{\begin{array}{ll}0&if\ corr(u,v)<\tau\\ 1&if\ corr(u,v)\geq\tau\end{array}\right. \tag{5}\]

#### 2.1.3 Estimating \(\tau\) and the Minimum Spanning Tree

The approach to estimating \(\tau\) was a _winner-take-all_ strategy in which only correlation coefficients above a limit are considered edges. In this study, the 99th percentile was adopted as the limit, meaning that the final graph considers only the top 1% of correlations, resulting in the graph \(\mathcal{G}^{{}^{\prime}}\). Finally, a minimum spanning tree (MST) algorithm was applied to \(\mathcal{G}^{{}^{\prime}}\) to verify its degree of stability, as proposed in Micciche et al. (2003).

### Anomaly Detection with a Graph Autoencoder

#### 2.2.1 Graph Autoencoder

An autoencoder is a type of neural network that uses a compressed latent feature representation to learn and reconstruct its input data. By minimizing the difference between the original input and the reconstructed output, an autoencoder effectively extracts and encodes meaningful features from complex datasets. Figure 1 shows the architecture of a typical autoencoder, which consists of an encoder network that maps the adjacency matrix to a compressed latent feature representation and a decoder network that maps the latent representation back to the original input space (the reconstructed input). To detect anomalies in global financial markets, the autoencoder maps the adjacency matrix to a low-dimensional latent space, where anomalies correspond to data points that deviate significantly from the learned patterns of normal data.

Consider a training set \(\mathcal{D}=\{x_{i}\,|\,i=1,\ldots,T\},\ \forall x_{i}\in\mathbb{R},\ i\in\mathbb{N}\). The encoder can be written as

\[h_{i}=g(x_{i}) \tag{6}\]

where the latent feature representation \(h_{i}\) is the output of the encoder function \(g\). The decoder can be represented as

\[\tilde{x}_{i}=f(g(x_{i})) \tag{7}\]

where \(\tilde{x}_{i}\) is the output of the decoder function \(f\). In turn, training an autoencoder amounts to finding the functions \(f\) and \(g\) such that

\[\arg\min_{f,g}<\Delta(x_{i},\tilde{x}_{i})> \tag{8}\]

where \(\Delta\) is a loss function that penalizes the difference between \(x_{i}\) and \(\tilde{x}_{i}\), and the operator \(<\cdot>\) denotes the average over all observations. The reconstruction error (RE) is a metric for evaluating the ability of the autoencoder to reconstruct \(x_{i}\); in this study, the mean squared error (MSE) is used:

\[RE\equiv MSE=\frac{1}{T}\sum_{i=1}^{T}|x_{i}-\tilde{x}_{i}|^{2} \tag{9}\]
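A minimal Keras sketch of the encoder \(g\) and decoder \(f\) of equations (6)-(7) is given below; the layer widths, activations, and latent dimension are our assumptions rather than the paper's exact configuration. Compiling with an MSE loss corresponds to minimizing the reconstruction error of equation (9):

```python
import tensorflow as tf

def build_graph_autoencoder(n_assets, latent_dim=32):
    """Encoder g and decoder f of eqs. (6)-(7); widths are assumptions."""
    x = tf.keras.Input(shape=(n_assets,))           # one row of the adjacency matrix
    h = tf.keras.layers.Dense(128, activation="relu")(x)
    h = tf.keras.layers.Dense(latent_dim, activation="relu")(h)      # h_i = g(x_i)
    d = tf.keras.layers.Dense(128, activation="relu")(h)
    x_rec = tf.keras.layers.Dense(n_assets, activation="sigmoid")(d) # x_tilde = f(g(x_i))
    model = tf.keras.Model(x, x_rec)
    model.compile(optimizer="adam", loss="mse")     # MSE reconstruction error, eq. (9)
    return model
```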
#### 2.2.2 Anomaly Detection Scores

As described by Renyi (1961), let \(\mathcal{P}=(p_{1},p_{2},\ldots,p_{n})\) be a finite discrete probability distribution; the amount of uncertainty (concerning the outcome of an experiment) in \(\mathcal{P}\) is called the entropy of the distribution \(\mathcal{P}\). This uncertainty (or randomness) is usually measured by the Shannon entropy Shannon (1948), that is,

\[H(\mathcal{P})=\sum_{k=1}^{n}p_{k}\log_{2}\frac{1}{p_{k}} \tag{10}\]

When applied to issues related to financial markets, entropy-based tools can measure the level of randomness in the markets Delgado-Bonal (2019), analyze and predict the behavior of stocks Maasoumi and Racine (2002), Gu (2017), and point to a contagion effect of financial market uncertainty on other economic variables Ahn et al. (2019). In this study, Tsallis entropy Tsallis (1988) was applied to measure uncertainty in global financial markets. Tsallis entropy is defined as

\[S_{q}=\frac{1-\sum_{i=1}^{W}p_{i}^{q}}{q-1} \tag{11}\]

where \(p_{i}\) is the probability of finding the system in the \(i\)-th state, \(q\) is a parameter that determines the degree of non-extensivity of the entropy, and \(W\) is the number of states in the system. When \(q\to 1\), the Tsallis entropy reduces to \(H[\mathcal{P}]\).

Algorithm 1 performs a set of basic operations on an input matrix \(P\) containing \(k\) vectors of size \(n\). Assuming \(k\leq n\), the complexity of calculating the means and standard deviations is \(O(kn)\). Computing the covariance matrix involves three steps: (i) calculating the mean of each variable, \(O(kn)\); (ii) centering the data by subtracting the mean of each variable, \(O(kn)\); and (iii) multiplying the centered matrix by its transpose, \(O(k^{2}n)\). The correlation matrix can then be computed in \(O(k^{2})\), and creating a graph from a list of edges likewise takes \(O(k^{2})\) time. Thus, the algorithm for constructing an undirected graph of asset returns is \(O(k^{2}n)\): the execution time grows linearly with the input size \(n\) and quadratically with the number of variables \(k\).

```
1: means = mean of each vector of P
2: m = size of means
3: stds = standard deviation of each vector of P
4: P_cov = covariance matrix of P
5: P_corr = empty m x m matrix
6: for i = 1 to m do
7:   for j = 1 to m do
8:     P_corr[i, j] = P_cov[i, j] / (stds[i] * stds[j])
9:   end for
10: end for
11: edges = stack the correlation matrix P_corr
12: edges_wo_self = remove the self-correlations
13: asset_graph = create a graph with edges_wo_self
14: return asset_graph
```
**Algorithm 1** Undirected graph for asset returns

Algorithm 2 is a simplified representation of the process of detecting anomalies in global financial markets using the autoencoder architecture and Tsallis entropy as an anomaly score. Disregarding the running time for training the autoencoder, the algorithm performs a set of basic operations for each node in the global financial market; thus, we can estimate that the anomaly detection algorithm is \(O(n)\).
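A runnable Python version of Algorithm 1, together with the Tsallis entropy used as the anomaly score, is sketched below; the function names are ours, and the 99th-percentile threshold follows the winner-take-all strategy of Section 2.1.3:

```python
import numpy as np
import networkx as nx

def asset_graph(P):
    """Algorithm 1: undirected graph from a (k x n) matrix of asset returns."""
    C = np.corrcoef(P)                              # k x k correlation matrix
    np.fill_diagonal(C, 0.0)                        # remove self-correlations
    tau = np.percentile(C[np.triu_indices_from(C, 1)], 99)   # top 1% of correlations
    k = C.shape[0]
    G = nx.Graph()
    G.add_nodes_from(range(k))
    G.add_edges_from((i, j) for i in range(k)
                     for j in range(i + 1, k) if C[i, j] >= tau)
    return G, nx.minimum_spanning_tree(G)           # MST step of Section 2.1.3

def tsallis_entropy(p, q):
    """Eq. (11); reduces to the Shannon entropy of eq. (10) as q -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    if np.isclose(q, 1.0):
        return float(-(p * np.log2(p)).sum())
    return float((1.0 - (p ** q).sum()) / (q - 1.0))
```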
Figure 1: Autoencoder architecture

## 3 Results and Discussion

This study uses the financial asset return dataset developed in Koshiyama et al. (2022). After a preliminary data cleaning to remove observations with differing periods, the analysis considered a dataset with 1847 observations for 802 stocks. The data cover the period from 2004-10-27 to 2019-03-15. The results were analyzed considering the global financial crisis (GFC) between 2007 and 2010. Thus, we consider three periods: before the crisis, from 2004-10-27 to 2007-06-27; during the crisis, from 2007-06-29 to 2010-08-24; and after the crisis, from 2010-08-25 to 2019-03-15.

Figure 2 shows the network for the global financial market before the crisis. It can be observed from Table 1 that in this period, 455 edges were observed in the top 1% of correlations. In this scenario, approximately 40% of the stocks had no edges. Analyzing the degree centrality of each stock, the maximum degree is 60, with a distribution showing that most of the stocks are concentrated in the range between 0 and 40.

Figure 4 presents the network for the global financial market during the crisis. In this period, 294 edges were observed in the top 1% of correlations. In this scenario, approximately 60% of the stocks had no edges (Table 1). Analyzing the degree centrality of each stock, the maximum degree is approximately 168, with a distribution showing that most stocks are concentrated close to zero.

Figure 2: _Before Crisis_ - Association network and community structure in the highly correlated global financial market. The figure shows the global financial market correlation-based network with a threshold that selects only the strongest links weighted by correlations, run through an MST algorithm.

Figure 3: The rank of degree and distribution of degree centrality for the before-crisis network.

Figure 4: _During Crisis_ - Association network and community structure in the highly correlated global financial market. The figure shows the global financial market correlation-based network with a threshold that selects only the strongest links weighted by correlations, run through an MST algorithm.

Figure 5: The rank of degree and distribution of degree centrality for the during-crisis network.

Figure 6 presents the network for the global financial market after the crisis. In this period, 488 edges were observed in the top 1% of correlations. In this scenario, approximately 38% of the stocks had no edges (Table 1). Analyzing the degree centrality of each stock, the maximum degree is 95, with a distribution showing that most stocks are concentrated in the range between 0 and 25. Following the autoencoder approach described in the methodology of this work, a convolutional neural network was trained on the task of label classification from a latent feature representation of the correlation matrix for the set of countries that make up the selected financial markets.
As observed in Table 1, the clustering coefficient shows that the tendency of the data to cluster decreases during the crisis period and increases after the crisis, following the same pattern observed in the number of edges for the top 1% of correlations between assets. In other words, the results suggest that the financial markets become more interconnected and clustered after the crisis, which is consistent with the idea of increased global financial integration. Additionally, the analysis of the degree centrality of each stock reveals that a few stocks with high centrality play an important role in the network, and that the distribution of centrality values is skewed towards lower values, indicating that most stocks are not highly connected.

Figure 8 presents the evolution of the loss function during the training process of the GNN. The results indicate that the model successfully reduced its error on the training set in all three periods analyzed, suggesting that the GNN was capable of learning and improving its predictions over time. Furthermore, it is observed that the GNN had greater difficulty reconstructing the adjacency matrix before and after the crisis than during the crisis. This finding suggests that the model may have struggled to capture the underlying patterns and dynamics of the network during the pre- and post-crisis periods, which could have led to less accurate predictions of the network's behavior.

Figure 9 shows that, in the before-crisis period, the number of detected anomalies decreases as the value of \(q\) increases; the same behavior is observed in the after-crisis period. During the crisis period, however, the number of detected anomalies is stable and higher than in the other periods, suggesting that the parameter \(q\) does not influence anomaly detection during the crisis. To test the null hypothesis that the anomaly detection differs among the periods, a t-test was applied to compare the means of two groups.

\begin{table} \begin{tabular}{l c c c} \hline & **Before** & **During** & **After** \\ \hline Number of Edges & 455 & 294 & 488 \\ Nodes Without Edges & 39.52\% & 59.97\% & 37.53\% \\ Max Degree & 60 & 168 & 95 \\ Mean Degree & 6.14 & 18.38 & 8.49 \\ Std Degree & 9.43 & 38.36 & 14.70 \\ Clustering Coeff. & 0.30 & 0.26 & 0.37 \\ \hline \end{tabular} \end{table} Table 1: Summary of results for the before-, during-, and after-crisis networks

Figure 6: _After Crisis_ - Association network and community structure in the highly correlated global financial market. The figure shows the global financial market correlation-based network with a threshold that selects only the strongest links weighted by correlations, run through an MST algorithm.

Figure 7: The rank of degree and distribution of degree centrality for the after-crisis network.

Figure 8: Loss function and reconstruction error of the graph neural network training.

Table 2 shows the results of the t-test applied to the number of anomalies detected by Tsallis entropy for positive and negative values of \(q\) (ranging from -0.5 to 0.5). Based on the p-values, it is possible to reject the null hypothesis of equality between the means; therefore, the results generated by the anomaly detection in the three periods are statistically different.
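The two-sample t-test used here can be sketched as follows; the anomaly counts below are synthetic stand-ins, not the paper's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Illustrative anomaly counts per q value for two periods (synthetic placeholders)
counts_before = rng.poisson(5, size=11)
counts_during = rng.poisson(12, size=11)

# Welch's t-test for the null hypothesis of equal means
t_stat, p_value = stats.ttest_ind(counts_before, counts_during, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")   # reject equal means if p < 0.05
```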
## 4 Conclusion

The detection of anomalies in financial markets is an important task for different economic agents. Equity fund managers can use this information to improve their decision-making process and, eventually, their results. Central banks can use anomaly detection to identify dysfunctions in markets and act to mitigate the harmful effects of an economic crisis originating in the financial system, thus fulfilling their role as regulators and stabilizers of price levels, employment, and output.

The objective of this article was to investigate the use of graph neural networks for anomaly detection in a graph constructed from the correlation matrix of the stocks that make up different local financial markets, forming a representative graph of the global financial market. The results indicate a trade-off in the correlations observed during the crisis period: the correlation between all assets increases, but when we consider only the most connected stocks (top 1%), the correlation decreases. Thus, during a crisis period, the graph of the global financial market becomes sparser.

The main limitation of this paper is that the anomalies selected by the scores cannot be validated against a ground-truth dataset. Therefore, in future studies, it is important to use a supervised learning strategy to compute metrics that establish the accuracy of the model based on labeled anomalies.

## 5 Acknowledgments

The author thanks Simone D.J. Barbosa (DI/PUC-Rio) and Bernando Modenesi (University of Michigan) for their comments and suggestions. Any errors and inaccuracies are the responsibility of the author.
2307.00986
Designing impact-resistant bio-inspired low-porosity structures using neural networks
Biological structural designs in nature, like hoof walls, horns, and antlers, can be used as inspiration for generating structures with excellent mechanical properties. A common theme in these designs is the small percent porosity in the structure ranging from 1 - 5\%. In this work, the sheep horn was used as an inspiration due to its higher toughness when loaded in the radial direction compared to the longitudinal direction. Under dynamic transverse compression, we investigated the structure-property relations in low porosity structures characterized by their two-dimensional (2D) cross-sections. A diverse design space was created by combining polygonal tubules with different numbers of sides placed on a grid with varying numbers of rows and columns. The volume fraction and the orientation angle of the tubules were also varied. The finite element (FE) method was used with a rate-dependent elastoplastic material model to generate the stress-strain curves under plane strain conditions. A gated recurrent unit (GRU) model was trained to predict the structures' stress-strain response and energy absorption under different strain rates and applied strains. The parameter-based model uses eight discrete parameters to characterize the design space and as inputs to the model. The trained GRU model can efficiently predict the response of a new design in as little as 0.16 ms and allows rapid performance evaluation of 128000 designs in the design space. The GRU predictions identified high-performance structures, and four design trends that affect the specific energy absorption were extracted and discussed.
Shashank Kushwaha, Junyan He, Diab Abueidda, Iwona Jasiuk
2023-07-03T13:05:46Z
http://arxiv.org/abs/2307.00986v2
# Designing impact-resistant bio-inspired low-porosity structures using neural networks

###### Abstract

Biological structural designs in nature, like hoof walls, horns, and antlers, can be used as inspiration for generating structures with excellent mechanical properties. A common theme in these designs is the small percent porosity in the structure ranging from 1 - 5%. In this work, the sheep horn was used as an inspiration due to its higher toughness when loaded in the radial direction compared to the longitudinal direction. Under dynamic transverse compression, we investigated the structure-property relations in low porosity structures characterized by their two-dimensional (2D) cross-sections. A diverse design space was created by combining polygonal tubules with different numbers of sides placed on a grid with varying numbers of rows and columns. The volume fraction and the orientation angle of the tubules were also varied. The finite element (FE) method was used with a rate-dependent elastoplastic material model to generate the stress-strain curves under plane strain conditions. A gated recurrent unit (GRU) model was trained to predict the structures' stress-strain response and energy absorption under different strain rates and applied strains. The parameter-based model uses eight discrete parameters to characterize the design space and as inputs to the model. The trained GRU model can efficiently predict the response of a new design in as little as 0.16 ms and allows rapid performance evaluation of 128000 designs in the design space. The GRU predictions identified high-performance structures, and four design trends that affect the specific energy absorption were extracted and discussed.

keywords: Bio-Inspired, Structure-property relations, Neural networks, Specific energy absorption

## 1 Introduction

Lightweight structures with high energy absorption capacity are of great interest for multiple engineering applications. Various structural elements found in animals and plants can serve as inspiration for designing novel structures that sustain the impacts generated during a collision [1; 2; 3]. The process of evolution has created complex architectures in nature capable of handling low-to-medium velocity impacts (up to 50 m/s); an example is the trabecular-honeycomb biomimetic structure inspired by beetle elytra [4]. Rams experience impact velocities of around 5.5 m/s when fighting, and during collisions sheep horns can withstand a maximum impact force of 3400 N [5]. The sheep horn microstructure has evolved to sustain large dynamic forces without catastrophic failure [6]. Similarly, the hoof sustains high impact forces, close to 9000 N, while galloping [7]. The tubular structure is a common feature in equine hoofs and horns [8; 9]: such structures contain arrays of long, aligned tubules within the bulk material, promoting energy absorption.

Biological materials and structures often exhibit excellent energy absorption capabilities and inspire the design of new energy absorbers. Bio-inspired structures have been used in countless applications, including automobiles [10], protective armors [11], and aircraft wings [12]. Further, a variety of materials have been used to manufacture bio-inspired structures, including polymers [13], aluminum alloys [10], fiber-reinforced composites [14], and concrete [15]. Hence, studying the structure-property relations of bio-inspired designs is of great research and industrial interest.
The exploration of structure-property relations involves surveying many different structural features at a given loading condition. Various studies have utilized optimization-based methods to generate new designs for energy absorption and to study structure-property relations [16; 17; 18; 19; 20]. However, a systematic compilation of the mechanical response and energy absorption characteristics of bio-inspired designs is lacking. In previous studies, the response of hoof- and horn-inspired structures was studied under quasi-static loading [21]. Various types of designs, including but not limited to composite laminates [21; 22] and tubular honeycomb structures [23], have been tested using experiments and finite element analyses (FEA) [24]. The primary objective of these evaluations was to obtain greater energy absorption or damage tolerance through crack deflection. Further, Sabet et al. [25] showed that the geometrical arrangement of stiff and soft phases can significantly influence the overall properties of a composite structure.

Within the solid mechanics domain, neural network (NN) models have been extensively used to predict the stress-strain response of composites [26; 27; 28], metals [29; 30; 31], and lattices [32; 33; 34]. However, the use of NN models for studying bio-inspired structures remains scarce. Existing studies have utilized generative adversarial networks (GANs) to design porous structures using X-ray microtomography images as input [35]; bio-inspired structures have also been designed using a conditional variational autoencoder [36]. In most cases, either a specific property is predicted [37], or, in an unsupervised deep learning setting, images or parameters of the structure are predicted [38]. Previous works did not focus on predicting the full-field temporal distribution of the stress field during the impact; thus, the prediction of stress fields as a function of time is the first objective of this study.

Further, this paper aims to develop a systematic framework to generate structures that combine different design elements found in low-porosity structures in nature, i.e., to study structures with aligned tubules whose porosity is in the range of 1% - 5% under transverse dynamic compression. The framework generates low-porosity structures with constant cross-sections along the thickness direction by randomly combining various design features such as tubule shape, orientation, and in-plane arrangement. Once trained, the NN can efficiently predict the mechanical performance of new designs at a rate much faster than classical numerical simulations, thus allowing rapid preliminary design selection and trend identification. Therefore, the second objective of this work is to develop a neural network (NN) model to approximate the structure-property relations, linking the input design parameters and loading conditions with the mechanical performance of the structure. Structure-property maps of the design space at different loading rates are identified, and design trends are discussed.

This paper is organized as follows. Section 2 presents an overview of the numerical simulations, the input data preprocessing, and the NN model's architecture. Section 3 includes the results obtained from the study and explores the quality of the NN predictions and the validity of the results. Section 4 summarizes the outcomes and lists some possible future directions for bio-inspired structures.
## 2 Methods

### Geometry generation and Finite element analysis

The designs considered in this work are 3D structures containing tubules with a constant cross-section. Hence, the designs can be uniquely characterized by their 2D, in-plane cross-sections, assuming the plane strain condition. A Python script was developed to generate cross-sectional sketches in the finite element (FE) analysis package Abaqus [39] for a given volume fraction, tubule shape, tubule orientation, and arrangement of the tubules within the structure. The cross-section of the bio-inspired structures studied in this work is an 11-by-11 mm\({}^{2}\) square, and all the tubules are confined within a concentric square area of 10-by-10 mm\({}^{2}\). The tubule volume fraction was uniformly sampled from the range [1%, 10%]. The tubule cross-sections were approximated by polygons whose number of sides was uniformly sampled from the range [3, 6], i.e., triangles, squares, pentagons, and hexagons. Additionally, a rotation was applied to the cross-sections, with the rotation angle uniformly sampled from the range [0, 360] degrees. Multiple tubules can be present in the structure; they were placed on an \(n_{y}\times n_{x}\) grid, where \(n_{y}\) and \(n_{x}\) denote the number of rows and columns, respectively, each sampled in the range [1, 8]. All the tubules in a given configuration have the same shape, and only designs with non-intersecting tubules were considered valid; other designs were excluded from the analysis. Some selected structures in the design space are shown in Fig. 1. All the structures were discretized with 4-node bilinear plane strain quadrilateral elements with reduced integration, using a nominal element edge length of 0.24 mm.

The relationship between different structural designs and the energy absorption mechanisms seen in bones, teeth, and horns is discussed by McKittrick et al. [40], who note that when rams butt heads, the horns are loaded in the transverse direction, which provides more energy absorption than loading in the longitudinal direction. The Abaqus/Explicit dynamic simulation used a rate-dependent elastic-plastic material model to capture the structures' response at varying strain rates. The strain rate decomposition is given by [39]:

\[d\mathbf{\epsilon}=d\mathbf{\epsilon}^{el}+d\mathbf{\epsilon}^{pl} \tag{1}\]

Using the definition of corotational measures, the integrated form is given by [39]:

\[\mathbf{\epsilon}=\mathbf{\epsilon}^{el}+\mathbf{\epsilon}^{pl} \tag{2}\]

The elasticity is linear and isotropic, defined using Young's modulus, \(E\), and Poisson's ratio, \(\nu\). The flow rule is [39]:

\[d\mathbf{e}=d\bar{e}^{\mathbf{pl}}\mathbf{n} \tag{3}\]

where

\[\mathbf{n}=\frac{3}{2}\frac{\mathbf{S}}{q} \tag{4}\]

\[q=\sqrt{\frac{3}{2}\mathbf{S}:\mathbf{S}} \tag{5}\]

and \(d\bar{e}^{\mathbf{pl}}\) is the (scalar) equivalent plastic strain rate. The plasticity model requires that the material satisfy a uniaxial-stress plastic-strain-rate relationship. In the rate-dependent case, the uniaxial flow rate is defined as follows [39]:

\[\dot{\mathbf{\bar{e}}}^{\mathbf{pl}}=\mathbf{h}(q,\bar{e}^{\mathbf{pl}},\theta) \tag{6}\]

where \(\bar{e}^{\mathbf{pl}}\) is the equivalent plastic strain, \(\theta\) is the temperature, and \(\mathbf{h}\) is a known function.
The overstress power law model in the rate-dependent material model is defined as follows [39]:

\[\dot{\mathbf{\bar{e}}}^{\mathbf{pl}}=D\left(\frac{q}{\sigma_{0}}-1\right)^{n} \tag{7}\]

where \(D(\theta)\) and \(n(\theta)\) are user-defined temperature-dependent material parameters and \(\sigma_{0}(\bar{e}^{pl},\theta)\) is the static yield stress. Integrating Eq. (7) by the backward Euler method gives:

\[\Delta\bar{\mathbf{\bar{e}}}^{pl}=\Delta t\,\mathbf{h}(q,\bar{e}^{\mathbf{pl}},\theta) \tag{8}\]

Eq. (8) can be inverted to obtain \(q\) as a function of \(\bar{e}^{pl}\) at the end of the increment. Hence, the uniaxial form is given by [39]:

\[q=\bar{\sigma}(\bar{e}^{pl}) \tag{9}\]

where \(\bar{\sigma}\) is obtained by inverting Eq. (8). Equations (1) to (9) are used to define the material behavior. At every increment in which plastic flow occurs, these equations are integrated and solved for the state at the end of the increment.

Figure 1: Sample structures in the design space

The material properties of the base material chosen for this study are similar to polycarbonate-acrylonitrile butadiene styrene (PC-ABS). Young's modulus and Poisson's ratio are 2.5 GPa and 0.35, respectively. The strain-rate-dependent yield stress versus plastic strain curves used to define the plastic region are shown in Fig. 2. The strains to failure in horns are tremendous, as much as 80% [40; 41]. However, the structures considered in this study have low porosity, and at large nominal strains most of the porosity would already be compressed; consequently, the stress response primarily arises from the material's densification. This perspective is further reinforced by the absence of damage modeling in our study. Additionally, conducting FE simulations up to high nominal strains would demand considerably more time for generating input data for the neural network. Hence, the maximum nominal strain considered is 25%.

In this study, the boundary conditions for impact loading were approximated by sandwiching the structure between two rigid plates, and the structures were subjected to dynamic transverse compression. The bottom plate was held fixed, and the top plate traveled downward with a constant velocity determined by the user-defined strain rate. The nominal strain rate was uniformly sampled from the range [0.45, 90.9] s\({}^{-1}\), corresponding to indenter velocities in the range [5, 1000] m/s. The reaction force and displacement were measured at the top rigid plate. All sidewalls were traction-free and free to deform. All simulations had a constant final displacement of 2.25 mm, corresponding to 25% nominal compressive strain along the y-axis. The reaction force and displacement at the top plate, the plastic dissipation, and the elastic strain energy of the porous structures were outputs of the FE simulations. Fig. 3 depicts the FE model assembly and a typical deformed structure at the end of dynamic compression. A total of 7196 simulations were conducted on an AMD Ryzen 7 5800H processor with 8 cores; depending on the applied impact velocity, each simulation took about 5-30 minutes to complete.

Figure 2: Yield stress versus plastic strain at different strain rates
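For reference, a short Python sketch of the design/loading sampling of Section 2.1 and of the algebraic inversion of the overstress law, Eq. (7), is given below; all names are ours, and the validity (non-intersection) check of the geometry is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_design():
    """One draw from the design and loading space of Section 2.1."""
    return {
        "sides":       int(rng.integers(3, 7)),        # polygon sides: 3..6
        "n_x":         int(rng.integers(1, 9)),        # tubule columns: 1..8
        "n_y":         int(rng.integers(1, 9)),        # tubule rows: 1..8
        "angle_deg":   float(rng.uniform(0.0, 360.0)), # tubule rotation
        "vol_frac":    float(rng.uniform(0.01, 0.10)), # tubule volume fraction
        "strain_rate": float(rng.uniform(0.45, 90.9)), # nominal strain rate, s^-1
    }

def dynamic_yield(sigma0, eps_dot_pl, D, n):
    """Invert the overstress power law, eq. (7):
    eps_dot = D (q / sigma0 - 1)^n  =>  q = sigma0 (1 + (eps_dot / D)^(1/n))."""
    return sigma0 * (1.0 + (eps_dot_pl / D) ** (1.0 / n))
```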
The output arrays were downsampled to 50-time steps for the efficiency of neural network training. The inputs used in the model consist of eight temporal information arrays. The first five arrays are constant in time and correspond to the parameters used to define the structure's geometry. The parameters include \(n\): the topology of the tubule (i.e., number of sides in a polygon), \(n_{x}\): number of tubules evenly distributed in the x-direction, \(n_{y}\): number of tubules evenly distributed in the y-direction, \(a_{o}\): rotation angle for all the tubules in the structure, and \(v_{f}\): volume fraction of the individual tubule in each element created by \(n_{x}\) times \(n_{y}\) elements in a 10-by-10 mm\({}^{2}\) grid. The remaining three inputs are physics-informed temporal arrays described as follows: 1. Current time value at each output time point. 2. Nominal compression strain at each output time point. 3. Nominal compression strain rate. A standard scaler in Scikit-Learn normalized all the inputs [42] before training. The scaler was fitted only to the training data points to avoid information leakage [26]. The available training data was increased using data augmentation. Corresponding to each simulation conducted in Abaqus with 25% final nominal strain, twenty final nominal strains in the range [10%,25%] were randomly sampled, and all inputs and outputs were linearly interpolated to the selected final strain level. Figure 3: FE model setup and results: (a) Typical structure with two rigid plates for dynamic transverse compression. (b) A typical deformed structure showing von Mises stress. This method generated training data points at the same strain rate but different final nominal strain, and increased the total number of input data points from 7196 to 719600. These data points were divided into training (65%), validation (15%), and testing datasets (20%). The mean absolute error (MAE) has been employed as the loss function in this study [43]. The loss function is defined as: \[\mathrm{MAE}=\frac{\sum_{i=1}^{N}|\mathbf{Y_{i}}-\hat{\mathbf{Y}_{i}}|}{N}, \tag{10}\] where \(N,\mathbf{Y_{i}},\hat{\mathbf{Y_{i}}}\) denote the number of training data points, ground-truth outputs, and the NN predictions, respectively. The mean squared error (MSE) is chosen as a metric, which is defined as: \[\mathrm{MSE}=\frac{\sum_{i=1}^{N}(\mathbf{Y_{i}}-\hat{\mathbf{Y}_{i}})^{2}}{N}. \tag{11}\] #### 2.2.2 Neural network model This study uses a recurrent neural network (RNN) model to train the forward model for output prediction. Specifically, the gated recurrent unit (GRU) model is used. This model has been widely used to predict sequences [44; 45; 46; 47]. Further, Abueidda et al. [45] compared the performance of different RNN models to predict the response of elastoplastic material undergoing deformation under variable strain rates. Although the GRU model is more computationally expensive than the long short-term memory (LSTM) model and the temporal convolutional network (TCN) model, it predicts the output with lower error. Based on the GRU model's demonstrated capabilities to predict the structures' response under complex deformation histories, this study used the model to predict stress-strain curves for the structures under dynamic transverse compression. The GRU-based model was implemented and tested in Keras [48] with a TensorFlow [49] backend. 
The loss function was minimized using an Adam optimizer [50] with an initial learning rate of 1\(\times\)10\({}^{-3}\). The model was trained for 150 epochs with a batch size of 600, and training was repeated 10 times to obtain the average training time and model accuracy. The data set was shuffled and partitioned in each training repetition, as described in Section 2.2.1. All training was conducted on Google Colab Pro+ using GPU acceleration on a Tesla V100 GPU.

Figure 4: Neural network architecture

### Global optimization

Using the trained neural network, a Python script was developed to traverse the input design space and evaluate the energy absorption performance. The input design space was divided into grid points based on the first five input parameters described in Section 2.2.1, with each grid point representing a unique structure. The specific energy absorption (SEA) was computed for each grid point by calculating the area under the load-displacement curve obtained from the GRU model predictions. Three design parameters (the number of sides of the polygon, \(n_{x}\), and \(n_{y}\)) take discrete integer values within their respective input ranges, whereas the volume fraction and the angle offset were divided into 40 and 20 equally spaced intervals, respectively. This method was used to analyze the SEA for 128000 structures within the input design space. The process was repeated for five different values of the indenter velocity (\(v_{y}\)) within the range described in Section 2. A similar process can be repeated at different equally spaced intervals to obtain the performance of all the structures in the input design space for a given final strain and indenter velocity.

## 3 Results and discussion

### Validation of the neural network predictions

The best and the worst designs (as predicted by the trained GRU model) at two different impact velocities (10 and 100 m/s) were validated by FE simulations to check the accuracy of the GRU model predictions. Fig. 5 shows the best and the worst designs at the two indenter velocities. FE simulations were conducted to obtain the ground-truth values of SEA under an applied plate velocity of 10 m/s (cases (a) and (b)) and 100 m/s (cases (c) and (d)) and a final axial strain of 0.25. The comparison of the FE-simulated and GRU-predicted SEA values is shown in Fig. 6. As can be seen from the results, the trained GRU is highly accurate for the two impact velocities tested, and the predicted SEA values fall within 5% of their respective ground-truth values. This result provides confidence in applying the trained model to further inference tasks.

### Predicting stress-strain curves and energy outputs

The number of input data points used in training was decided based on the prediction accuracy measured by the value of the loss function: the percentage of total input data used to train the neural network was incremented until similar prediction accuracy was observed. Further, the average response of the GRU model was measured by training the model 10 times, shuffling the data before each training iteration.
The loss function value corresponding to the increasing amount of training data is shown in Fig. 7(a), and a typical training history is presented in Fig. 7(b). The average training and inference times for the GRU model and the average FE simulation time are reported in Table 1.

\begin{table} \begin{tabular}{c|c c c} & GRU training & GRU inference & FE simulation \\ \hline Time & 5192.9s & 1.63\(\times 10^{-4}\)s & 5-30 mins [1] \\ \end{tabular} \end{table} Table 1: Computational cost for GRU training, inference, and FE simulations

After training the NN, the NN predictions were compared to the ground truths obtained from FE simulations, ranked by the percentile of MAE for each output array. The model with the median MAE among the 10 training repetitions was used to generate the plots shown in Fig. 8; the final MAE for this model is 6.07\(\times 10^{-3}\). The amount of data required for training was chosen by checking the loss function value for different percentages of the input data: Fig. 7a shows that the loss increases as the percentage of the input data is decreased relative to the reference (80% of the data), and hence 80% of the data was chosen as input for training. Further, it can be inferred from Fig. 7b that no major overfitting has occurred.

Figure 5: Highest and lowest SEA designs as predicted by the trained GRU model: (a) highest SEA, 10 m/s, (b) lowest SEA, 10 m/s, (c) highest SEA, 100 m/s, (d) lowest SEA, 100 m/s.

Figure 6: Comparison of the FE-simulated and GRU-predicted SEA values for the four validation design cases.

Figure 7: Convergence plot for the GRU model training process: (a) Scaled mean squared error when a different percentage of the total data is used in training. (b) Scaled mean absolute error evolution during training. Note that the MAE shown here is computed on the variables scaled by the standard scaler.

The statistical distribution of MAEs is shown in Fig. 8. From the first three columns, up to the 75th percentile, the GRU model closely reproduces the FE simulation results for the stress-strain curves, plastic dissipation, and elastic strain energy; even in the worst case, the GRU model correctly predicts the general shape of the FE-simulated stress-strain curve. In the current study, the cross-section image of the structure is parameterized using five design variables, which are then used as inputs to the GRU model. Another valid approach is to encode the cross-sectional images of the design via an autoencoder before training the GRU model; this approach was used in the work of He et al. [44] for exploring the structure-property relations of thin-walled lattices. However, training the autoencoder requires additional computational resources and is unnecessary when discrete parameter values can parameterize the current design space. Hence, in this work, we used the design parameters to describe the designs instead of an autoencoder. However, judging from the comparison with the FE data shown in Fig. 8, the prediction accuracy is high even with this simplified approach.

Figure 8: Comparison of ground truths and GRU predictions for the data set, ranked by percentile of MAE to provide a representative sampling. Here, MAE is ranked independently for each of the four output arrays.

The scatter plots connecting the input data and the corresponding prediction error, along with linear curve fits, were utilized to identify potential correlations. It was observed that, in general, there is no discernible pattern relating the input parameters to the prediction errors, as evidenced by the low \(R^{2}\) fitting values.
However, a concentration of cases with prediction errors greater than 2% is observed in Fig. 9. These cases are concentrated in hexagonal vacancies (shape=6) with one column (\(n_{x}\)=1) and five rows (\(n_{y}\)=5), a void volume fraction of 4.1%, an initial rotation angle of 0.82 radians, and a strain rate of 72.22 \(s^{-1}\). However, those cases only constitute 0.0062% of the total prediction cases. ### Structure-SEA map The Python script described in Section 2.3 was used to calculate SEA from the stress-strain curve predicted by the NN at each design point. Each structure could be represented by a unique design index defined using the first five input parameters to the NN as described in Section 2.2.1. Finally, the scatter plots for SEA at each design surveyed in the grid search for two different impact velocities are plotted in Fig. 10 and Fig. 11, which show a structure-property map for this chosen design space. Using the scatter plot shown in Fig. 10, we could identify the best and worst designs regarding specific energy absorption within the input design space for the given loading condition and final strain. These two points are also highlighted in the Fig. 10. Further, the same Python code described in Section 2.3 could be used to plot the SEA for structures with various constraints. For example, Fig. 12 shows the distribution of SEA for structures with a volume fraction of porosity between 4.5% and 5%. Figure 9: MAE vs input test data parameters (geometric parameters and loading condition). ### Design trends and observations The structure-energy absorption maps shown in Section 3.3 are useful for obtaining an overview of the entire design space. However, additional design insights could be drawn from the map to guide future design work: 1. At the same volume fraction of porosity within the structure, final strain, and indenter velocity exceeding 100 m/s, arranging the pores vertically results in optimal energy absorption. By contrast, the lowest energy absorption is achieved when the porosity is concentrated at the center. The structure illustrated in Fig. 13 emerged as the most efficient design for SEA, according to the SEA map depicted in Fig. 10. Conversely, the structure in Fig. 14 demonstrated the lowest SEA. The structure with maximum SEA (Fig. 13) has a porosity of close to 1%, whereas the one with minimum SEA (Fig. 14) has a porosity closer to 5%. In both instances, we observe a higher stress band that originates at the structure's corners and radiates toward its center during compression. In essence, the presence of material in areas of high Figure 11: Structure-property relations at an impact velocity \(v_{y}=10m/s\) and final axial strain of 0.25. The highest and lowest SEA designs are highlighted in solid red pentagons. Figure 10: Structure-property relations at an impact velocity \(\mathrm{v_{y}}=100\mathrm{m/s}\) and final axial strain of 0.25. The highest and lowest SEA designs are highlighted in solid red pentagons. stress is crucial for achieving a higher SEA. In the case of the structure in Fig. 13, only a few pores are present within the high-stress region. On the other hand, the structure illustrated in Fig. 14 has its entire porosity at the center, resulting in diminished load-carrying capacity and a lower SEA. This behavior can be observed in Fig. 15, which illustrates two structures with the same square-shaped porosity volume fraction but different angle offsets. When subjected to similar loading conditions, Fig. 
15b exhibits 4% higher energy absorption compared to Fig. 15a as validated by FE simulations. The GRU-predicted trend of how the tubule orientation angle affects the SEA is shown in Fig. 15c for square porosity. The prediction shows a sinusoidal variation, which is reasonable, as the top-down projected load-bearing area (area unaffected by porosity) varies in a sinusoidal fashion. 3. The structure with maximum and minimum SEA depends upon the volume fraction of the porosity. Further, it is also affected by the strain rate and the orientation of polygonal porosity, as shown in Fig. 5. For example, the red marks in Fig. 10, Fig. 11, and Fig. 12 show different structures (design index) with maximum and minimum SEA. 4. The Pearson correlation coefficient is calculated to assess the relationship between SEA and different geometric parameters. Both cases show a strong negative correlation between SEA and volume fraction, indicating that increasing porosity volume fraction generally leads to decreasing SEA. The correlation coefficients for angle offset are close to zero, consistent with the sinusoidal nature of the trend observed in Fig. 15. The orientation of pores was found to cause a significant variation in the SEA, with a difference close to 4%. Hence, in order to obtain correct conclusions using correlation analysis, it is necessary to employ exploratory grid search methods to identify select designs that exhibit a high SEA. The number of pores in the x-direction is negatively correlated to SEA, while the number of pores in the y-direction is positively correlated. Apart from that, a minor correlation is observed for other variables. ## 4 Conclusions and future work In this work, a parametric framework was developed to generate bio-inspired low-porosity designs with tubules of various shapes, orientations, and in-plane arrangements. The structures were made from PC-ABS with rate-dependent elastoplastic behavior. FE simulations were conducted to Figure 15: Effect of orientation on energy absorption under transverse compression for constant volume fraction: (a) Structure absorbing less energy. (b) Structure absorbing more energy (c) Predicted trend as the angle offset is varied. obtain the stress-strain curves of the structures at different impact velocities during transverse loading. Using the FE simulation data, a GRU model was trained to predict the stress-strain curve for low-porosity bio-inspired structures under dynamic transverse compression loading. Data augmentation techniques were implemented to reduce the number of simulations required using Abaqus. The trained NN model could make accurate predictions (MAE: 6.07 \(\times 10^{-3}\)) for SEA of all the structures across a range of final strains and strain rates. Further, the trained neural network was used to survey the entire design space with 128000 structures at each strain rate. Overall, the trained model NN was able to generate all the performance predictions extremely efficiently, even on low-end laptops. The stress-strain response for each structure could be predicted in 0.16ms. Hence, it renders itself a suitable guide in preliminary design stages to quickly survey designs for more detailed analyses. Using the predictions of the trained NN, key observations were made and summarized below: * The SEA maps generated using grid search based on geometric variables facilitated the identification of several design trends obtained from the trained NN model. 
* Our study delved deep into the influences of porosity arrangement, volume fraction, strain rate, and orientation on the SEA. Two standout findings were: * Varying the orientations of pores can result in approximately 4% difference in SEA. * Vertical arrangement of pores at the same volume fraction led to greater SEA. * The SEA maps, beyond their immediate application, contribute to understanding the effect of geometric parameters on SEA under varying loading conditions. * The Pearson correlation analysis augmented the study by drawing connections between different geometric parameters and the SEA. Figure 16: Pearson correlation coefficient between SEA and geometric variables at two different indenter velocities. The results indicated a strong negative correlation between SEA and porosity volume fraction, while minor correlations were observed for other variables. * The minor correlation between the variables reinforces the need to utilize exploratory grid searches to identify select configurations that exhibit higher SEA under given loading conditions. In future work, gradients of the GRU model could be utilized to define an inverse design problem and generate new designs. In the current work, periodic boundary conditions were not enforced on the representative volume when comparing different structures. The effect of enforcing periodic boundary conditions could also be explored in future work. Lastly, the structures analyzed in this study using FEA were not compared against the experimental results, a key limitation of the current work. Hence, an experimental validation of the FE simulation model would provide further insights into the model accuracy and the energy absorption capabilities of the low-porosity structures. ## Data availability The data and source code that support the findings of this study will be available upon request during the review process, and it will be open source after the publication is online. ## Conflict of interest The authors declare that they have no conflict of interest. ## Acknowledgements We acknowledge the support of the National Science Foundation grant (MOMS-1926353) and the Army Research Office contract (No. W 911NF-18-2-0067). ## CRediT author contributions **Shashank Kushwaha**: Conceptualization, Methodology, Software, Formal analysis, Investigation, Data Curation, Writing - Original Draft. **Junyan He**: Methodology, Software, Formal analysis, Writing - Original Draft. **Diab Abueidda**: Supervision, Writing - Review & Editing. **Iwona Jasiuk**: Supervision, Resources, Writing - Review & Editing, Funding Acquisition.
2305.08590
NIKI: Neural Inverse Kinematics with Invertible Neural Networks for 3D Human Pose and Shape Estimation
With the progress of 3D human pose and shape estimation, state-of-the-art methods can either be robust to occlusions or obtain pixel-aligned accuracy in non-occlusion cases. However, they cannot obtain robustness and mesh-image alignment at the same time. In this work, we present NIKI (Neural Inverse Kinematics with Invertible Neural Network), which models bi-directional errors to improve the robustness to occlusions and obtain pixel-aligned accuracy. NIKI can learn from both the forward and inverse processes with invertible networks. In the inverse process, the model separates the error from the plausible 3D pose manifold for a robust 3D human pose estimation. In the forward process, we enforce the zero-error boundary conditions to improve the sensitivity to reliable joint positions for better mesh-image alignment. Furthermore, NIKI emulates the analytical inverse kinematics algorithms with the twist-and-swing decomposition for better interpretability. Experiments on standard and occlusion-specific benchmarks demonstrate the effectiveness of NIKI, where we exhibit robust and well-aligned results simultaneously. Code is available at https://github.com/Jeff-sjtu/NIKI
Jiefeng Li, Siyuan Bian, Qi Liu, Jiasheng Tang, Fan Wang, Cewu Lu
2023-05-15T12:13:24Z
http://arxiv.org/abs/2305.08590v1
# NIKI: Neural Inverse Kinematics with Invertible Neural Networks

###### Abstract

With the progress of 3D human pose and shape estimation, state-of-the-art methods can either be robust to occlusions or obtain pixel-aligned accuracy in non-occlusion cases. However, they cannot obtain robustness and mesh-image alignment at the same time. In this work, we present NIKI (**N**eural **I**nverse **K**inematics with **I**nvertible Neural Network), which models bi-directional errors to improve the robustness to occlusions and obtain pixel-aligned accuracy. NIKI can learn from both the forward and inverse processes with invertible networks. In the inverse process, the model separates the error from the plausible 3D pose manifold for robust 3D human pose estimation. In the forward process, we enforce zero-error boundary conditions to improve the sensitivity to reliable joint positions for better mesh-image alignment. Furthermore, NIKI emulates the analytical inverse kinematics algorithms with the twist-and-swing decomposition for better interpretability. Experiments on standard and occlusion-specific benchmarks demonstrate the effectiveness of NIKI, where we exhibit robust and well-aligned results simultaneously. Code is available at [https://github.com/Jeff-sjtu/NIKI](https://github.com/Jeff-sjtu/NIKI).

## 1 Introduction

Recovering 3D human pose and shape (HPS) from monocular input is a challenging problem with many applications [10, 62, 64, 65, 47, 11, 46]. Despite the rapid progress powered by deep neural networks [18, 22, 23, 25, 30, 68], the performance of existing methods is not satisfactory in complex real-world applications where people are often occluded and truncated by themselves, each other, and objects. Existing state-of-the-art approaches rely on pixel-aligned local evidence, _e.g._, 3D keypoints [15, 30], mesh vertices [41], and mesh-aligned features [68], to perform accurate human pose and shape estimation. Although the local evidence helps obtain high accuracy in standard benchmarks, it fails when the mesh-image correspondences are unavailable due to occlusions and truncations. These pixel-aligned approaches sacrifice robustness to occlusions for high accuracy in non-occlusion scenarios. On the other hand, direct regression approaches are more robust to occlusions. Such approaches directly predict a set of pose and shape parameters with neural networks. By encoding human body priors in the networks, they predict a more physiologically plausible result than the pixel-aligned approaches in severely occluded scenarios. However, direct regression approaches use all pixels to predict human pose and shape, which is a highly non-linear mapping, and the prediction therefore suffers from image-mesh misalignment. Recent work [20, 23] adopts guided attention to leverage local evidence for better alignment. Nevertheless, in non-occlusion scenarios, direct regression approaches are still not as accurate as the pixel-aligned approaches that explicitly model the local evidence.

Figure 1: **Trade-off between pixel-aligned accuracy and robustness.** From 3DPW to 3DPW-XOCC, the degree of occlusion increases. The pixel-aligned approach performs well only in non-occlusion cases. The direct regression approach is more robust to occlusions but less accurate in non-occlusion cases. NIKI shows high accuracy and strong robustness simultaneously. Illustrative results on the 3DPW-XOCC dataset are shown on the right.

Fig.
1 shows the performance of the state-of-the-art pixel-aligned and regression approaches in scenarios with different levels of occlusions. These two types of approaches cannot achieve mesh-image alignment and robustness at the same time. In this work, we propose NIKI, a Neural Inverse Kinematics (IK) algorithm with Invertible neural networks, to improve robustness to occlusions while maintaining pixel-aligned accuracy. IK algorithms are widely adopted in pixel-aligned approaches [15, 30] to obtain mesh-image alignment in non-occlusion scenarios. However, existing IK algorithms only focus on estimating the body part rotations that best explain the joint positions but do not consider the plausibility of the estimated poses. Therefore, the output human pose inherits the errors from joint position estimation, which is especially severe in occlusion scenarios. In contrast, NIKI is robust to unreliable joint positions by modeling the bi-directional pose error. We build the bijective mapping between the Euclidean joint position space and the combined space of the 3D joint rotation and the latent error. The latent error indicates how the joint positions deviate from the manifold of plausible human poses. The output rotations are robust to erroneous joint positions since we have explicitly removed the error information by supervising the output marginal distribution in the inverse direction. In the forward direction, we introduce the zero-error boundary conditions, which enforce the solved rotations to explain the reliable joint positions and improve mesh-image alignment. The invertible neural network (INN) is trained in both forward and inverse directions. Since forward kinematics (FK) is deterministic and easy to understand, it aids the INN in learning the complex IK process through inherent bijective mapping. To further improve the interpretability of the IK network, we emulate the analytical IK algorithm by decomposing the complete rotation into the twist rotation and the swing-dependent joint position with two consecutive invertible networks. We benchmark NIKI on 3DPW [56], AGORA [44], 3DOH [69], 3DPW-OCC [56], and our proposed 3DPW-XOCC datasets. 3DPW-XOCC is augmented from the original 3DPW dataset with extremely challenging occlusions and truncations. NIKI shows robust reconstructions while maintaining pixel-aligned accuracy, demonstrating state-of-the-art performance in both occlusions and non-occlusion benchmarks. The main contributions of this paper are summarized as follows: * We present a framework with a novel error-aware inverse kinematics algorithm that is robust to occlusions while maintaining pixel-aligned accuracy. * We propose to decouple the error information from plausible human poses by learning a pose-independent error embedding in the inverse process and enforcing zero-error boundary conditions during the forward process using invertible neural networks. * Our approach outperforms previous pixel-aligned and direct regression approaches on both occlusions and non-occlusion benchmarks. ## 2 Related Work 3D Human Pose and Shape Estimation.Prior work estimates 3D human pose and shape by outputting the parameters of statistical human body models [1, 36, 43, 45, 63]. Initial work follows the optimization paradigm [6, 27, 45, 52]. SMPLify [6] is the first automated approach that fits SMPL parameters to 2D keypoint observations. This paradigm is further extended to silhouette [27] and volumetric grids [52]. 
Recently, learning-based paradigms have gained much attention with the advances in deep neural networks. Existing work can be categorized into two classes: direct regression approaches and pixel-aligned approaches. Direct regression approaches use deep neural networks to regress the pose and shape parameters directly [18, 19, 22, 23, 24, 25, 28, 58]. Intermediate representations are used as the weak supervision to improve the regression performance,, 2D keypoints [18] and body/part segmentation [46]. Several studies [17, 25] leverage the optimization paradigm to introduce the pseudo ground truth for better supervision. Pixel-aligned approaches explicitly exploit pixel-aligned local evidence to estimate the pose and shape parameters. Moon [41] use the vertex positions to regress the SMPL parameters. Li [30] and Iqbal [15] propose to map the 3D keypoints to pose parameters. Zhang [68] propose the mesh-aligned feedback loop to predict the aligned SMPL parameters. Explicitly modeling local evidence contributes to the state-of-the-art performance of pixel-aligned approaches. Although pixel-aligned approaches achieve high accuracy in standard benchmarks, they are vulnerable to occlusions and truncations. When the local evidence is not reliable or even does not exist in occluded and truncated cases, such approaches predict physiologically implausible results. Direct regression approaches [16, 20, 23, 50, 69] are more robust to occlusions and truncations but less accurate in non-occlusion scenarios. Zhang [69] use the saliency map to infer object-occluded human bodies. Kocabas [23] propose part-guided attention to exploit the information about the visibility of body parts. Khirodkar [20] use body centernaps to exploit the spatial context. A number of studies [45, 48, 51] propose to use pose prior to improve the plausibility of the estimated poses. Although the local evidence is implicitly used in recent regression approaches, pixel-aligned approaches still dominate non-occlusion benchmarks. In this work, we combine the merits of pixel-aligned approaches and direct regression approaches. NIKI maintains pixel-aligned accuracy by aligning with the body joints via inverse kinematics while achieving robustness to occlusions and truncations with bi-directional error decoupling. Inverse Kinematics.The inverse kinematics (IK) process finds the relative rotations to produce the desired positions of body joints. It is an ill-posed problem because of the information loss in the forward process. Traditional numerical approaches [4, 7, 12, 21, 57, 60] are time-consuming due to iterative optimization. The heuristic approaches such as CDC [37], FABRIK [3], and IK-FA [49] are more efficient and have a lower computation cost for each heuristic iteration. Recent work [9, 54] has started using neural networks to solve the IK problem. Zhou [70] train a four-layer MLP network to predict the 3D human pose parameterized as 6D vectors. Li [30] propose a hybrid analytical-neural solution to accurately predict the body part rotations. Oreshkin [42] propose to use prototype encoding to predict rotations from sparse user inputs. Voleti [55] extend the same model to arbitrary skeletons. The work of Ardizzone [2] is most related to us. They use invertible neural networks (INNs) to solve inverse problems, including the toy inverse kinematics problem in 2D space. However, similar to all the aforementioned approaches, they assume the input body joints are reliable, resulting in vulnerability to occlusions and truncations. 
Invertible Neural Network in HPS Estimation.Modeling the conditional posterior of an inverse process is a classical statistical task. Wehrbein [59] propose to estimate 3D human poses from 2D poses by capturing lost information with INNs. Several studies [63, 66] leverage INNs to build priors for 3D human pose estimation. The pose priors are learned by normalizing flows that are built with INNs. Biggs [5] propose to use the learned prior from normalizing flows to resolve ambiguities. Kolotouros [26] propose a conditional distribution with normalizing flows as a function of the input to combine information from different sources. Li [29] leverage normalizing flows to capture the underlying residual log-likelihood of the output and propose a novel regression paradigm from the perspective of maximum likelihood estimation. Unlike previous methods, our approach leverages the property of bijective mapping in INNs to decouple joint errors and solve the inverse kinematics problem robustly. ## 3 Method In this section, we present NIKI, a neural inverse kinematics solution for 3D human pose and shape estimation. We first review the formulation of existing analytical and MLP-based IK algorithms in SS3.1. In SS3.2, we introduce the proposed INN-based IK algorithm with bi-directional error decoupling. In SS3.3, we present the overall human pose estimation framework and the learning objective. Then we elaborate on the proposed IK-specific invertible architecture in SSA. Finally, we provide the necessary implementation details in SS3.5 ### Preliminaries The IK process is to find the corresponding body part rotations that explain the input body joint positions, while the forward kinematics (FK) process computes the desired joint positions based on the input rotations. The FK process is well-defined, but the transformation from joint rotations to joint positions incurs an information loss,, multiple rotations could correspond to one position, resulting in an ill-posed IK process. Here, we follow HybrIK [30] to consider twist rotations for information integrity. The conventional IK algorithms only require the output rotations to match the input joint positions but ignore the errors of the joint positions and the plausibility of the body pose. Therefore, the errors of the joint positions will be accumulated in the joint rotations (Fig. 1(a)). This process Figure 2: **Illustration** of (a) analytical IK, (b) feedforward MLP-based IK, and (c) NIKI with bi-directional error decoupling. can be formulated as: \[\underbrace{\mathbf{R}+\epsilon_{r}}_{\text{erroneous output}}=\text{IK}_{ \text{Analytical}}(\underbrace{\mathbf{p}+\epsilon_{p},\boldsymbol{\phi}+\epsilon_ {\phi}}_{\text{erroneous input}}\mid\boldsymbol{\beta}), \tag{1}\] where \(\mathbf{R}\) denotes the underlying plausible rotations, \(\epsilon_{r}\) denotes the accumulated error in estimated rotations, \(\mathbf{p}\) denotes the underlying plausible joint positions, \(\epsilon_{p}\) denotes the position errors, \(\boldsymbol{\phi}\) denotes the underlying plausible twist rotations, \(\epsilon_{\phi}\) denotes the twist error, and \(\boldsymbol{\beta}\) denotes the body shape parameters. A straightforward solution to improving the robustness of the IK algorithms is using the standard regression model [9, 69] to approximate the underlying plausible rotations \(\mathbf{R}\) given the erroneous input (Fig. 2b): \[\mathbf{R}\approx\text{IK}_{\text{MLP}}(\mathbf{p}+\epsilon_{p},\boldsymbol {\phi}+\epsilon_{\phi}\mid\boldsymbol{\beta}). 
\tag{2}\]

Indeed, modeling the IK process with classical neural networks, _e.g._, an MLP, can improve the robustness. However, the output rotations become less sensitive to changes of the joint positions because the errors are highly coupled with the joint positions. Without explicitly decoupling errors from plausible human poses, it is difficult for the network to distinguish between reasonable and abnormal changes in joint positions. Therefore, the output rotations cannot accurately track the movement of the body joints. In practice, we find that feedforward neural networks improve performance in occlusion cases but cause performance degradation in non-occlusion cases, where accurate mesh-image alignment is required. Detailed comparisons are provided in Tab. 5.

### Inverse Kinematics with INNs

In this work, to improve the robustness of IK to occlusions while maintaining the sensitivity to non-occluded body joints, we propose to use the invertible neural network (INN) to model bi-directional errors explicitly (see Fig. 2c). In contrast to the conventional methodology, we learn the IK model \(g(\cdot;\boldsymbol{\beta},\theta)\) jointly with the FK model \(f(\cdot;\boldsymbol{\beta},\theta)\):

\[[\mathbf{p}+\epsilon_{p},\boldsymbol{\phi}+\epsilon_{\phi}]=f(\mathbf{R},\mathbf{z}_{r};\boldsymbol{\beta},\theta), \tag{3}\]

\[[\mathbf{R},\mathbf{z}_{r}]=g(\mathbf{p}+\epsilon_{p},\boldsymbol{\phi}+\epsilon_{\phi};\boldsymbol{\beta},\theta), \tag{4}\]

where \(\mathbf{z}_{r}\) is the error embedding that denotes how the input joint positions deviate from the manifold of plausible human poses. Notice that \(f\) and \(g\) share the same parameters \(\theta\), and \(f=g^{-1}\) is enforced by the invertible network architecture. We expect that simultaneously learning the FK and IK processes can benefit each other. In the forward process, we can tune the error embedding \(\mathbf{z}_{r}\) to control the error level of the body joint positions. The body part rotations will perfectly align with the joint positions by setting \(\mathbf{z}_{r}\) to \(0\), which means no deviation from the pose manifold. In the inverse process, the error is only reflected in \(\mathbf{z}_{r}\), and the rotation \(\mathbf{R}\) remains stable against the erroneous input.

Decouple Error Information. The input joints and twists to the IK process contain two parts of information: i) the underlying pose that lies on the manifold of plausible 3D human poses; ii) the error information that indicates how the input deviates from the manifold. We can obtain robust pose estimation by separating these two types of information. Due to the bijective mapping enforced by the INN, all the input information is preserved in the output, and no new information is introduced. Therefore, we only need to remove the pose information from the output vector \(\mathbf{z}_{r}\) in the inverse process; the vector \(\mathbf{z}_{r}\) will then automatically encode the remaining error information. To this end, we enforce the model to follow the independence constraint, which encourages \(\mathbf{R}\) and \(\mathbf{z}_{r}\) to be independent upon convergence, _i.e._, \(p(\mathbf{z}_{r}|\mathbf{R})=p(\mathbf{z}_{r})\). After we separate the error information, we can manipulate the error embedding to let the model preserve the sensitivity to error-free body joint positions without compromising robustness. In particular, we constrain the error information in the forward process with the zero-error condition:

\[[\mathbf{p},\boldsymbol{\phi}]=f(\mathbf{R},\boldsymbol{0};\boldsymbol{\beta},\theta). \tag{5}\]
In this way, the rotations will track the joint positions and twist rotations accurately in non-occlusion scenarios. Besides, the zero-error condition can also be extended to the inverse process:

\[[\mathbf{R},\boldsymbol{0}]=g(\mathbf{p},\boldsymbol{\phi};\boldsymbol{\beta},\theta). \tag{6}\]

With the independence and zero-error constraints, the network is able to model the error information in both the forward and inverse processes, making NIKI robust to occlusions while maintaining pixel-aligned accuracy.

Figure 3: **Overview of the proposed framework.** The input image is fed into the CNN backbone network to estimate the initial joint positions and twist rotations, followed by NIKI to solve the joint rotations.

### Decoupled Learning

The overview of our approach is illustrated in Fig. 3. During inference, we first extract the joint positions and twist rotations with the CNN backbone, which are subsequently fed to the invertible network to predict the complete body part rotations. During training, we optimize FK and IK simultaneously in one network: we perform FK and IK alternately with the additional independence loss and boundary loss, and the gradients from both directions are accumulated before performing a parameter update.

Inverse Training. In the inverse iteration, the network predicts the body part rotations given the joint positions \(\hat{\mathbf{p}}\) and twist rotations \(\hat{\boldsymbol{\phi}}\) from the CNN backbone. The loss function is defined as:

\[\mathcal{L}_{inv}=\left\|\hat{\mathbf{R}}_{inv}-\widetilde{\mathbf{R}}\right\|_{2}^{2}+\left\|\text{FK}(\hat{\mathbf{R}}_{inv})-\text{FK}(\widetilde{\mathbf{R}})\right\|_{1}, \tag{7}\]

with

\[[\hat{\mathbf{R}}_{inv},\hat{\mathbf{z}}_{r}]=g(\hat{\mathbf{p}},\hat{\boldsymbol{\phi}};\boldsymbol{\beta},\theta), \tag{8}\]

where \(\widetilde{\mathbf{R}}\) represents the ground-truth rotations, and \(\text{FK}(\cdot)\) denotes the analytical FK process used to supervise the corresponding 3D joint positions of the predicted pose.

Forward Training. In the forward process, the network predicts the joint positions and twist rotations given the body part rotations. The error of the noisy predictions \(\hat{\mathbf{p}}\) and \(\hat{\boldsymbol{\phi}}\) should be determined only by the error embedding. Therefore, with the ground-truth rotations \(\widetilde{\mathbf{R}}\) and the error embedding \(\hat{\mathbf{z}}_{r}\) obtained from the inverse iteration, the forward model should predict the same values as the CNN output:

\[\mathcal{L}_{fwd}=\|\hat{\mathbf{p}}_{fwd}-\hat{\mathbf{p}}\|_{1}+\|\hat{\boldsymbol{\phi}}_{fwd}-\hat{\boldsymbol{\phi}}\|_{2}^{2}, \tag{9}\]

with

\[[\hat{\mathbf{p}}_{fwd},\hat{\boldsymbol{\phi}}_{fwd}]=f(\widetilde{\mathbf{R}},\hat{\mathbf{z}}_{r};\boldsymbol{\beta},\theta). \tag{10}\]

Independence Loss. The latent error vector is learned in an unsupervised manner by making \(\mathbf{R}_{inv}\) and \(\mathbf{z}_{r}\) independent of each other. The pose information in \(\mathbf{R}_{inv}\) is supervised by Eq. 7. We then enforce the independence by penalizing the mismatch between the joint distribution of the rotations and error embedding \(q\big{(}\hat{\mathbf{R}}_{inv},\mathbf{z}_{r}\big{)}\) and the product of marginal distributions \(p\big{(}\widetilde{\mathbf{R}}\big{)}p(\mathbf{z})\):
\[\mathcal{L}_{ind}=\mathcal{D}\big{(}q\big{(}\mathbf{R}_{inv},\mathbf{z}_{r}\big{)},p\big{(}\widetilde{\mathbf{R}}\big{)}p(\mathbf{z})\big{)}, \tag{11}\]

where \(\mathbf{z}\sim\mathcal{N}(0,\mathbf{I})\) follows the standard normal distribution, \(\mathbf{I}\) is the identity matrix, and \(\mathcal{D}(\cdot)\) denotes the Maximum Mean Discrepancy [13], which allows us to compare two probability distributions through samples. In addition to the independence constraint, \(\mathcal{L}_{ind}\) encourages the error embedding \(\mathbf{z}_{r}\) to follow the standard normal distribution \(p(\mathbf{z})\), serving as a regularization.

Boundary Condition Loss. To enforce the solved rotations to explain the reliable joint positions, we supervise the boundary cases where no error occurs. In the inverse process, the output error should be zero when the network is fed with the ground truth:

\[\mathcal{L}_{bnd}^{i}=\|\hat{\epsilon}_{r}\|_{2}^{2}+\|\hat{\mathbf{R}}_{bnd}-\widetilde{\mathbf{R}}\|_{2}^{2}, \tag{12}\]

with

\[[\hat{\mathbf{R}}_{bnd},\hat{\epsilon}_{r}]=g(\widetilde{\mathbf{p}},\widetilde{\boldsymbol{\phi}};\boldsymbol{\beta},\theta), \tag{13}\]

where \(\widetilde{\mathbf{p}}\) and \(\widetilde{\boldsymbol{\phi}}\) denote the ground-truth joint positions and twist rotations, respectively. In the forward process, the joint positions and twist rotations should map to the ground truth when the input error vector \(\mathbf{z}_{r}\) is \(\mathbf{0}\):

\[\mathcal{L}_{bnd}^{f}=\|\hat{\mathbf{p}}_{bnd}-\widetilde{\mathbf{p}}\|_{1}+\|\hat{\boldsymbol{\phi}}_{bnd}-\widetilde{\boldsymbol{\phi}}\|_{2}^{2}, \tag{14}\]

with

\[[\hat{\mathbf{p}}_{bnd},\hat{\boldsymbol{\phi}}_{bnd}]=f(\widetilde{\mathbf{R}},\mathbf{0};\boldsymbol{\beta},\theta). \tag{15}\]

Overall, the total loss of NIKI is:

\[\mathcal{L}=\lambda_{inv}\mathcal{L}_{inv}+\lambda_{fwd}\mathcal{L}_{fwd}+\lambda_{ind}\mathcal{L}_{ind}+\lambda_{bnd}^{i}\mathcal{L}_{bnd}^{i}+\lambda_{bnd}^{f}\mathcal{L}_{bnd}^{f}, \tag{16}\]

where \(\lambda_{inv},\lambda_{fwd},\lambda_{ind},\lambda_{bnd}^{i},\lambda_{bnd}^{f}\) are scalar coefficients that balance the loss terms.

### Invertible Architecture

One-Stage Mapping. To build a fully invertible neural network for inverse kinematics, we construct the one-stage mapping model using RealNVP [11]. Since the IK and FK processes require the skeleton template, we extend the INN to incorporate the conditional shape parameters as input. The basic block of the network contains two reversible coupling layers conditioned on the shape parameters. The overall network consists of multiple blocks connected in series to increase capacity. Besides, since the invertible network requires the input and output vectors to have the same dimension, we follow previous work [2] and pad zeros to the input.

Twist-and-Swing Mapping. Although treating the invertible neural network as a black box lets us model both the FK and IK processes at the same time, we further emulate the analytical IK algorithm to improve the performance and interpretability.
Specifically, we follow the twist-and-swing decomposition [30] and divide the IK process into two steps: i) from joint positions to swing rotations; ii) from twist and swing rotations to complete rotations. The two-step mapping is implemented by two separate invertible networks: \[[\mathbf{R}_{\textit{sw}},\mathbf{z}_{\textit{sw}}] =g_{1}(\mathbf{p}+\epsilon_{p};\boldsymbol{\beta},\theta_{1}), \tag{17}\] \[[\mathbf{R},\mathbf{z}_{r}] =g_{2}(\mathbf{R}_{\textit{sw}},\boldsymbol{\phi}+\epsilon_{ \phi};\theta_{2}), \tag{18}\] where \(\mathbf{R}_{sw}\) is the swing rotations, and \(\mathbf{z}_{sw}\) indicates the deviation from the plausible swing rotation manifold. Since the mappings are bijective, the FK process also follows the twist-and-swing procedure but inversely. We have \(f=f_{1}\circ f_{2}=g_{1}^{-1}\circ g_{2}^{-1}=g^{-1}\). In the FK process, the body part rotations are first decomposed into twist and swing rotations. Then the swing rotations are transformed into the joint positions. The intermediate supervision of swing rotations is used in both the forward and inverse training. Temporal Extension.The invertible framework is flexible. It can be easily extended to solving the IK problem with temporal inputs. The model with static inputs can only identify the errors related to physiological implausibility. In contrast, the temporal model further improves motion smoothness by decoupling errors of implausible human body movements. More details are provided in the supplementary material. ### Implementation Details. We adopt HybrIK [30] as the CNN backbone to predict the noisy body joint positions and the twist rotations. The input of the IK model includes the joint positions \(\mathbf{p}\in\mathbb{R}^{3K}\), twist rotations parameterized in 2-dimensional vectors, _i.e_., \(\boldsymbol{\phi}\in\mathbb{R}^{2(K-1)}\), and the confidence scores of each joint \(\mathbf{s}\in\mathbb{R}^{K}\), where \(K\) denotes the total number of human body joints. The output of the model consists of the body part rotations parameterized as a 6D vector for each part, _i.e_., \(\mathbf{R}\in\mathbb{R}^{6K}\), and the error embedding \(\mathbf{z}_{r}\in\mathbb{R}^{D_{z}}\). The IK model is conditioned on the shape parameters \(\boldsymbol{\beta}\), which is also predicted by the CNN backbone. We pad the input with a zero vector with the dimension \(M=D_{z}+2\) to satisfy the dimension constraint of the invertible neural network. The networks are trained with the Adam solver for 50 epochs with a mini-batch size of \(64\). The learning rate is set to \(1\times 10^{-3}\) at first and reduced by a factor of 10 at the 30th and 40th epochs. Implementation is in PyTorch. Detailed architectures are provided in the supplementary material. ## 4 Experiments Datasets.We employ the following datasets in our experiments: (1) 3DPW [56], an outdoor benchmark for 3D human pose estimation. (2) AGORA [44], a synthetic dataset with challenging environmental occlusions. (3) 3DOH [69], a 3D human dataset where human activities are occluded by objects. (4) 3DPW-OCC [56], a different split of the original 3DPW dataset with a occluded test set. (5) 3DPW-XOCC, a new benchmark for 3D human pose estimation with extremely challenging occlusions and truncations. We simulate occlusions and truncations by randomly pasting occlusion patches and cropping frames with truncated windows. 
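As a sanity check on the padding rule \(M=D_{z}+2\) stated in the implementation details above, the dimension bookkeeping of the invertible mapping can be verified in a few lines; the values of \(K\) and \(D_{z}\) below are placeholders.

```python
K, D_z = 24, 32            # placeholders: number of body joints, error-embedding size

in_dim = 3 * K             # joint positions p
in_dim += 2 * (K - 1)      # twist rotations phi, 2-D vector per non-root joint
in_dim += K                # per-joint confidence scores s
M = D_z + 2                # zero padding required by the INN
in_dim += M

out_dim = 6 * K + D_z      # 6-D rotation per part plus error embedding z_r

# 3K + 2(K-1) + K + (D_z + 2) == 6K + D_z holds for any K and D_z
assert in_dim == out_dim
```

In other words, the +2 in the padding exactly compensates the -2 introduced by the twist representation, which skips the root joint.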
Training and Evaluation.NIKI is trained on 3DPW [56] and Human3.6M [14] and evaluated on 3DPW [56] and 3DPW-XOCC [56] to benchmark the performance on both occlusions and non-occlusion scenarios. We use the AGORA [44] train set only when conducting experiments on its test set. For evaluations on the 3DPW-OCC [56] and 3DOH [69] datasets, we train NIKI on COCO [35], Human3.6M [14], and 3DOH [69] for a fair comparison. Procrustes-aligned mean per joint position error (PA-MPJPE) and mean per joint position error (MPJPE) are reported to assess the 3D pose accuracy. Per vertex error (PVE) is also reported to evaluate the estimated body mesh. significantly outperforms the most accurate direct regression approach by 5.9 mm on PA-MPJPE (12.7% relative improvement). Besides, NIKI obtains comparable performance to pixel-aligned approaches, showing a 1.2 mm improvement on PA-MPJPE. Tab. 2 demonstrates the robustness of NIKI to extreme occlusions and truncations. We report the results of the most accurate pixel-aligned and direct regression approaches on the 3DPW-XOCC dataset. It shows that direct regression approaches outperform pixel-aligned approaches in challenging scenes, which is in contrast to the results in the standard benchmark. NIKI improves the PA-MPJPE performance by **38.7**% compared to HybrIK and **10.1**% compared to PARE that finetuned on the 3DPW-XOCC train set. The results of NIKI on other occlusion-specific datasets are reported in Tab. 3 and Tab. 4. NIKI shows consistent improvements on all these datasets, demonstrating that NIKI is robust to challenging occlusions and truncations while maintaining pixel-aligned accuracy. Specifically, NIKI improves the NMIE performance on AGORA by **12.5**% compared to the state-of-the-art methods. We can also observe that the twist-and-swing mapping model is consistently superior to the one-stage mapping model. More discussions of limitations and future work are provided in the supplementary material. Effectiveness of Bi-directional Training.To further validate the effectiveness of bi-directional training, we report the results of the baseline model that is only trained with the inverse process. Without bi-directional training, we also cannot apply the boundary condition in the forward direction, which means that we only decouple the errors in the inverse process. As shown in Tab. 5, the IK model cannot maintain the sensitivity to non-occluded body joints in the standard benchmark without forward training. Sensitivity Analysis.We further follow Kocabas _et al_. [23] to conduct the occlusion sensitivity analysis. Fig. 4 shows the per-joint breakdown of the mean 3D error from the occlusion sensitivity analysis for three different methods on the 3DPW test split. Although HybrIK [30] obtains high accuracy on the 3DPW dataset, it is quite sensitive to occlusions. NIKI is more robust to occlusions and improves the robustness of all joints. We also qualitatively compare HybrIK, PARE, and NIKI in Fig. 5. NIKI performs well in challenging occlusion scenarios and predicts well-aligned results. More occlusion analyses and qualitative samples are provided in the supplementary material. ## 5 Conclusion In this paper, we propose NIKI, a neural inverse kinematics solution for accurate and robust 3D human pose and shape estimation. NIKI is built with invertible neural networks to model bi-directional error information in the forward and inverse kinematics processes. 
In the inverse direction, NIKI explicitly decouples the error information from the manifold of the plausible human poses to improve robustness. In the forward direction, NIKI enforces zero-error boundaries to obtain accurate mesh-image alignment. We construct the invertible neural network by emulating the analytical inverse kinematics algorithm with twist-and-swing decomposition to improve interpretability. Comprehensive experiments on standard and occlusion-specific datasets demonstrate the pixel-aligned accuracy and robustness of NIKI. We hope NIKI can serve as a solid baseline for challenging real-world applications. **Acknowledgments.** This work was supported by the National Key R&D Program of China (No. 2021ZD0110704), Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), Shanghai Qi Zhi Institute, and Shanghai Science and Technology Commission (21511101200). Figure 5: **Quantitative results on COCO (rows 1-3) and 3DPW-XOCC (rows 4-5) datasets. From left to right: Input image, (a) HybrIK [30] results, (b) PARE [23] results, and (c) NIKI results.** ## Appendix A Architecture of INN ### One-Stage Mapping The detailed architecture of the one-stage mapping model is illustrated in Fig. 6. We follow the architecture of RealNVP [11]. The model consists of multiple basic blocks to increase capacity. The input vector \(\mathbf{u}\) of the block is split into two parts, \(\mathbf{u}_{1}\) and \(\mathbf{u}_{2}\), which are subsequently transformed with coefficients \(\exp(s_{i})\) and \(t_{i}\) (\(i\in\{1,2\}\)) by the two affine coupling layers: \[\mathbf{v}_{1} =\mathbf{u}_{1}\odot\exp(s_{2}(\mathbf{u}_{2},\boldsymbol{ \beta}))+t_{2}(\mathbf{u}_{2},\boldsymbol{\beta}), \tag{19}\] \[\mathbf{v}_{2} =\mathbf{u}_{2}\odot\exp(s_{1}(\mathbf{v}_{1},\boldsymbol{\beta }))+t_{1}(\mathbf{v}_{1},\boldsymbol{\beta}), \tag{20}\] where \(\mathbf{v}=[\mathbf{v}_{1},\mathbf{v}_{2}]\) is the output vector of the block and \(\odot\) denotes element-wise multiplication. The coefficients of the affine transformation can be learned by arbitrarily complex functions, which do not need to be invertible. The invertibility is guaranteed by the affine transformation in Eq. 19 and 20. The scale network \(s_{i}\) is a 3-layer MLP with the hidden dimension of \(512\), and the translation network \(t_{i}\) has the same architecture followed by a \(\tanh\) activation function. ### Twist-and-Swing Mapping The detailed architecture of the twist-and-swing mapping model is illustrated in Fig. 7. The two-step mapping is implemented by two separate invertible networks. The first network has the same architecture as the one-stage mapping model, while its input is only the joint positions, and the output is the swing rotations. The second network removes the shape condition and directly transforms the twist and swing rotations to complete rotations. ## Appendix B Implementation Details In our experiments, we use the weights pretrained on COCO[35] 2D pose estimation task for the initialization of the CNN backbone to accelerate convergence. The scalar coefficients in the loss function are \(\lambda_{\textit{inv}}=1\), \(\lambda_{\textit{fwd}}=1\), \(\lambda_{\textit{ind}}=1\), \(\lambda_{\textit{ind}}^{i}=0.1\), \(\lambda_{\textit{ind}}^{f}=1\). We first train the CNN backbone following HybridK [30] to obtain initial joint positions and twist rotations. Then we solely train NIKI and freeze the parameters of the CNN backbone. 
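For readers who want the coupling block of Eqs. 19 and 20 in code, here is a minimal PyTorch sketch. The feature and shape-condition dimensions, as well as the ReLU activations inside the 3-layer MLPs, are assumptions; the tanh on the translation networks follows the description above.

```python
import torch
import torch.nn as nn

def mlp(n_in, n_out, hidden=512, final_tanh=False):
    # 3-layer MLP with hidden dimension 512, as described in Appendix A
    layers = [nn.Linear(n_in, hidden), nn.ReLU(),
              nn.Linear(hidden, hidden), nn.ReLU(),
              nn.Linear(hidden, n_out)]
    if final_tanh:
        layers.append(nn.Tanh())
    return nn.Sequential(*layers)

class ConditionalCoupling(nn.Module):
    """One basic block: two affine coupling layers conditioned on beta."""
    def __init__(self, dim=150, beta_dim=10):   # placeholder dimensions
        super().__init__()
        half = dim // 2
        self.s1 = mlp(half + beta_dim, half)
        self.t1 = mlp(half + beta_dim, half, final_tanh=True)
        self.s2 = mlp(half + beta_dim, half)
        self.t2 = mlp(half + beta_dim, half, final_tanh=True)

    def forward(self, u, beta):                  # Eqs. 19-20
        u1, u2 = u.chunk(2, dim=-1)
        c2 = torch.cat([u2, beta], dim=-1)
        v1 = u1 * torch.exp(self.s2(c2)) + self.t2(c2)
        c1 = torch.cat([v1, beta], dim=-1)
        v2 = u2 * torch.exp(self.s1(c1)) + self.t1(c1)
        return torch.cat([v1, v2], dim=-1)

    def inverse(self, v, beta):                  # exact inverse by construction
        v1, v2 = v.chunk(2, dim=-1)
        c1 = torch.cat([v1, beta], dim=-1)
        u2 = (v2 - self.t1(c1)) * torch.exp(-self.s1(c1))
        c2 = torch.cat([u2, beta], dim=-1)
        u1 = (v1 - self.t2(c2)) * torch.exp(-self.s2(c2))
        return torch.cat([u1, u2], dim=-1)
```

Invertibility holds no matter how complex \(s_{i}\) and \(t_{i}\) are, since each half of the vector is only scaled and shifted by functions of the other half.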
During training, we follow EFT [17], SPIN [25], and PARE [23], which use fixed data sampling ratios for each batch. We incorporate 50% Human3.6M and 50% 3DPW when conducting experiments on the 3DPW and 3DPW-XOCC datasets. For experiments on the 3DPW-OCC and 3DOH datasets, we incorporate 35% COCO, 35% Human3.6M, and 30% 3DOH. ## Appendix C Temporal Extension of NIKI ### Architecture We extend the invertible network for temporal input. We design a spatial-temporal INN model to incorporate temporal information to solve the IK problem. For simplicity, we use the basic block in the one-stage mapping and twist-and-swing mapping models as the spatial INN. Self-attention modules are introduced to serve as the temporal INN and conduct temporal affine transformations. The temporal input vectors \(\{\mathbf{u}^{t}\}_{1}^{T}\) are split into two subsets, \(\{\mathbf{u}^{t}\}_{1}^{\left[T/2\right]}\) and \(\{\mathbf{u}^{t}\}_{\left[T/2\right]+1}^{T}\), which are subsequently transformed with coefficients \(\exp(s_{i})\) and \(t_{i}\) (\(i\in\{1,2\}\)) by the two affine coupling layers like Eq. 19 and 20. We adopt self-attention layers [53] as the temporal scale and translation layers. The detailed network architecture of the temporal INN is illustrated in Fig. 8. ### Experiments of the Temporal Extension We evaluate the temporal extension on both standard and occlusion-specific benchmarks. Tab. 6 compares temporal NIKI with previous state-of-the-art temporal HPS methods on the standard 3DPW[56] dataset. Notice that we do not design complex network architecture or use dynamics infor Figure 6: **Detailed architecture of the one-stage mapping model.** Figure 7: **Detailed architecture of the twist-and-swing mapping model.** mation. Our temporal extension simply applies the affine coupling layers to the time domain. It shows that our simple extension obtains better accuracy than state-of-the-art dynamics-based approaches. Tab. 7 presents the performance on the occlusion-specific benchmark. We compare the temporal extension with a strong baseline. The baseline combines PARE [23] with the state-of-the-art temporal approach, VIBE [22]. We first use the backbone of PARE [23] to extract attention-guided features. Then we apply VIBE [22] to incorporate temporal information to predict smooth and robust human motions. Temporal NIKI outperforms the baseline in challenging occlusions and truncations. Fig. 9 present the acceleration error curves of the single-frame and temporal models in the 3DPW-XOCC dataset. We can observe that the temporal model can improve motion smoothness. ## Appendix D Noise Analysis We assess the robustness of three different IK algorithms: analytical IK, MLP-based IK, and NIKI. We evaluate their performance on the AMASS dataset [39] with noisy \begin{table} \begin{tabular}{l|c c c c} \hline \hline & \multicolumn{4}{c}{3DPW} \\ \cline{2-5} Method & MPIPE \(\downarrow\) & PA-MPIPE \(\downarrow\) & PVE \(\downarrow\) & ACCEL \(\downarrow\) \\ \hline VIBE [22] & 82.9 & 51.9 & 99.1 & 23.4 \\ MEVA [38] & 86.9 & 54.7 & - & 11.6 \\ TCMR [8] & 86.5 & 52.7 & 102.9 & 7.1 \\ MAED [58] & 79.1 & 45.7 & 92.6 & 17.6 \\ D&D [28] & 73.7 & 42.7 & 88.6 & **7.0** \\ \hline NIKI (Frame-based) & 71.3 & 40.6 & 86.6 & 15.1 \\ NIKI (Temporal) & **71.2** & **40.5** & **86.3** & 12.3 \\ \hline \hline \end{tabular} \end{table} Table 6: **Quantitative comparisons with state-of-the-art temporal methods on the 3DPW dataset. 
Symbol “-” means results are not available.** \begin{table} \begin{tabular}{l|c c c c} \hline \hline & \multicolumn{4}{c}{3DPW-XOCC} \\ \cline{2-5} Method & MPIPE \(\downarrow\) & PA-MPIPE \(\downarrow\) & PVE \(\downarrow\) & ACCEL \(\downarrow\) \\ \hline HybrIK [30] & 148.3 & 98.7 & 164.5 & 108.6 \\ PARE\({}^{*}\)[23] & 114.2 & 67.7 & 133.0 & 90.7 \\ PARE\({}^{*}\)[23] + VIBE [22] & 97.3 & 60.2 & 114.9 & 18.3 \\ \hline NIKI (Frame-based) & 110.7 & 60.5 & 128.6 & 74.4 \\ NIKI (Temporal) & **88.9** & **52.1** & **98.0** & **17.3** \\ \hline \hline \end{tabular} \end{table} Table 7: **Quantitative comparisons with state-of-the-art temporal methods on the 3DPW–XOCC dataset. Symbol \(*\) means finetuning on the 3DPW–XOCC train set.** Figure 8: **Detailed architecture of the temporal INN.** Figure 10: **Noise sensitivity analysis of analytical IK, MLP-based IK and NIKI.** Figure 9: **Acceleration error curve.** joint positions. As shown in Fig. 10, MLP-based IK is more robust than the analytical IK when the noise is larger than 30 mm. However, MLP-based IK fails to obtain pixel-aligned performance when the noise is small. NIKI shows superior performance at all noise levels. ## Appendix E Collision Analysis To quantitatively show that the output poses from NIKI are more plausible, we compare the collision ratio of mesh triangles [40] between HybrIK and NIKI on the 3DPW-XOCC dataset. NIKI reduces the collision ratio from 2.6% to 1.0% (57.7% relative improvement). ## Appendix F Occlusion Analysis We follow the framework of [67, 23] and replace the classification score with an error measure for body poses. We choose MPJPE as the error measurement. This analysis is not limited to a particular network architecture. We apply it to the state-of-the-art pixel-aligned approach, HybrIK [30], and the direct regression approach, PARE [23]. The visualizations of the error maps are shown in Fig. 12 and 13. Warmer colors denote a higher MPJPE. It shows that NIKI is more robust to body part occlusions. Additionally, we follow the official AGORA analyses to compare the performance in different occlusion levels. As shown in Fig. 11, in the low occlusion level (0-10%), NIKI brings \(6.5\) mm MPJPE improvement. The improvement reaches a peak (\(13.3\) mm) in the medium occlusion level (20-30%). For the high occlusion level (70-80%), the improvement falls back to \(10.2\) mm. We can observe that NIKI is good at handling medium occlusions. There is still a lot of room for improvement in highly occluded scenarios. ## Appendix G Heatmap Condition We follow Wehrbein [59] and add heatmap condition in the INN. As shown in Tab. 8, it brings \(0.2\) mm improvement on the 3DPW dataset. However, it is \(0.1\) mm worse on the 3DPW-XOCC dataset. We assume this is because heatmap is not reliable under server occlusions. ## Appendix H Inference Time and Model Size We benchmark the inference time of the analytical IK algorithm, HybrIK [30] and NIKI with an RTX 3090 GPU with a batch size of 1. The latency of HybrIK is \(26\) ms and NIKI is \(8\) ms, respectively. HybrIK is much slower since it needs to solve the rotations iteratively along the kinematic tree. For the model size, the total parameters of NIKI is 29.01M. ## Appendix I Details of 3DPW-XOCC 3DPW-XOCC is a new benchmark for human pose and shape estimation with extremely challenging occlusions and truncations. The dataset is augmented from the original 3DPW dataset by adding temporally-smooth synthetic occlusions and truncations. 
To ensure temporal smoothness, we choose keyframes at an interval of 8 frames, and the rest frames are generated by linearly interpolating the clipping and occlusion of the keyframes. In the keyframe, the image is randomly clipped to ensure that at least one body part is outside the clipped image with a possibility of over \(2/3\). A square area that takes up to 30% of the clipped image is replaced by gaussian noise to serve as occlusion. The evaluation protocol and the split of the dataset are unchanged. ## Appendix J Limitations and Future Work Our work has several limitations. First, NIKI does not include body shape refinement. Human body shape estimation is also challenging in occlusion scenarios. The incorrect body shape would cause incorrect distal joints reconstruction. For example, even the knee and ankle rotations are correct, the wrong leg length will cause a wrong ankle position. Exploiting the bone length information in joint positions can help refine \(\mathbf{\beta}\) for better pose and shape estimation. Second, NIKI does not use the scene information to separate the pose error. The initial joint positions could be physiologically plausible but do not match the input scene. Using scene constraints can reduce implausible human-scene interactions and further improve robustness. Third, the training of NIKI relies on the diversity of datasets. To accurately built the bijective mapping, the training data need to be diverse enough. We believe these limitations are exciting avenues for future work to explore. ## Appendix K Qualitative Results Additional qualitative results are shown in Fig. 14 and 15. Figure 12: **Occlusion Sensitivity Maps of PARE [23] and NIKI.** Figure 13: **Occlusion Sensitivity Maps of HybrIK [30] and NIKI.** Figure 14: **Qualitative comparison with PARE [23].** Figure 15: **Qualitative comparison with HybrIK [30].**
2308.06227
Comprehensive Benchmarking of Binary Neural Networks on NVM Crossbar Architectures
Non-volatile memory (NVM) crossbars have been identified as a promising technology, for accelerating important machine learning operations, with matrix-vector multiplication being a key example. Binary neural networks (BNNs) are especially well-suited for use with NVM crossbars due to their use of a low-bitwidth representation for both activations and weights. However, the aggressive quantization of BNNs can result in suboptimal accuracy, and the analog effects of NVM crossbars can further degrade the accuracy during inference. This paper presents a comprehensive study that benchmarks BNNs trained and validated on ImageNet and deployed on NeuroSim, a simulator for NVM-crossbar-based PIM architecture. Our study analyzes the impact of various parameters, such as input precision and ADC resolution, on both the accuracy of the inference and the hardware performance metrics. We have found that an ADC resolution of 8-bit with an input precision of 4-bit achieves near-optimal accuracy compared to the original BNNs. In addition, we have identified bottleneck components in the PIM architecture that affect area, latency, and energy consumption, and we demonstrate the impact that different BNN layers have on hardware performance.
Ruirong Huang, Zichao Yue, Caroline Huang, Janarbek Matai, Zhiru Zhang
2023-08-11T16:58:18Z
http://arxiv.org/abs/2308.06227v1
# Comprehensive Benchmarking of Binary Neural Networks on NVM Crossbar Architectures

###### Abstract

Non-volatile memory (NVM) crossbars have been identified as a promising technology for accelerating important machine learning operations, with matrix-vector multiplication being a key example. Binary neural networks (BNNs) are especially well-suited for use with NVM crossbars due to their use of a low-bitwidth representation for both activations and weights. However, the aggressive quantization of BNNs can result in suboptimal accuracy, and the analog effects of NVM crossbars can further degrade the accuracy during inference. This paper presents a comprehensive study that benchmarks BNNs trained and validated on ImageNet and deployed on NeuroSim, a simulator for NVM-crossbar-based PIM architecture. Our study analyzes the impact of various parameters, such as input precision and ADC resolution, on both the accuracy of the inference and the hardware performance metrics. We have found that an ADC resolution of 8-bit with an input precision of 4-bit achieves near-optimal accuracy compared to the original BNNs. In addition, we have identified bottleneck components in the PIM architecture that affect area, latency, and energy consumption, and we demonstrate the impact that different BNN layers have on hardware performance.

Deep Learning, Binary Neural Network, Non-Volatile Memory, Processing-In-Memory

## I Introduction

Non-volatile memory (NVM) crossbars have become popular as a type of analog computing substrate used in processing-in-memory (PIM) architectures [1, 2, 3, 4]. These crossbars consist of an array of NVM cells and perform matrix-vector multiplication (MxV) in the analog domain using Ohm's law and Kirchhoff's current law. By processing data in-situ, the need for data movement is reduced, resulting in increased performance and energy efficiency. This approach is gaining traction in hardware acceleration for machine learning, where MxV is a critical computational kernel. However, supporting high-precision neural networks in NVM crossbars can be challenging because of the need for digital-to-analog converters (DAC) and analog-to-digital converters (ADC) to convert signals from the digital domain to the analog domain for processing and vice versa. High-precision neural networks require high-resolution DACs and ADCs, which consume more energy and can offset the benefits of in-situ computing. As a result, researchers often resort to quantization techniques to reduce precision while maintaining acceptable accuracy levels, making quantized neural networks a more viable option for NVM crossbars [5, 6, 7].

Binary neural networks (BNNs) are a type of quantized neural network that uses a single bit to represent weights and activations [8, 9, 10, 11, 12, 13]. The extreme quantization of BNNs lowers both computational complexity and storage requirements. When deployed in NVM crossbars, BNNs require only a simple inverter for the input activation, and the shift-and-add circuit commonly used for weight bit-slicing can be eliminated, leading to greater energy efficiency. This synergy between BNNs and NVM crossbars creates a promising scenario for deploying a variety of emerging ML workloads at the edge with strict resource and latency constraints.
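The arithmetic that makes this pairing attractive is easy to state: with weights and activations in {-1, +1}, a dot product reduces to counting bit-wise agreements, which is exactly what a column of binary NVM cells accumulates in the analog domain. A small NumPy sketch of the standard XNOR-popcount identity:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.choice([-1, 1], size=(128, 256))   # binary weight matrix
x = rng.choice([-1, 1], size=256)          # binary activation vector

ref = W @ x                                 # plain matrix-vector product

# Same result via XNOR + popcount on the 0/1 encodings, b = (v + 1) / 2:
Wb, xb = (W + 1) // 2, (x + 1) // 2
xnor = 1 - np.bitwise_xor(Wb, xb)           # 1 where the signs agree
out = 2 * xnor.sum(axis=1) - x.size         # dot = 2 * popcount - N

assert np.array_equal(out, ref)
```

The identity follows from dot = (#agreements) - (#disagreements) = 2 * (#agreements) - N, so no multipliers are needed anywhere in the datapath.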
While deploying BNNs in NVM crossbars holds potential benefits, it is crucial to pay close attention to the possible negative effects. Compared to their high-precision counterparts, BNNs typically have lower inference accuracy due to their aggressive quantization. The analog effects of NVM crossbars, such as insufficient ADC resolution and memory non-idealities, can further decrease the inference accuracy. Additionally, the configuration of BNNs, including model structure and input precision, can significantly impact the hardware performance of NVM crossbars. Improper choices of the model configuration can offset the advantages of energy efficiency gained by using the NVM crossbar.

Several simulators have been proposed for NVM-crossbar-based neural network accelerators [14, 15, 16, 17, 18, 19] to facilitate fast design space exploration and performance evaluation. These simulators typically encompass different levels of hardware modeling, including memory, circuit, and chip, with varying degrees of fidelity. For instance, NeuroSim [19] provides detailed circuit-level modeling, while MNSIM [18] offers more flexible chip-level exploration options. Previous studies have utilized these simulators to explore the potential of NVM crossbars for BNN inference, primarily focusing on simpler BNNs [20, 21, 12] and small datasets such as MNIST [21, 22] and CIFAR-10 [23, 24, 25, 12]. However, a comprehensive benchmarking study is still lacking to investigate the effectiveness of NVM crossbars on more complex BNN models using realistic datasets. Such a benchmark could provide valuable insights into the performance bottlenecks of NVM crossbar hardware components and into which BNN features are best suited for use in NVM crossbars.

In this paper, we use the open-source simulator NeuroSim [19] to conduct a comprehensive benchmark of modern BNNs implemented on an NVM-crossbar-based PIM architecture. The contributions of this paper are as follows.

* This study provides the first comprehensive benchmarking of three realistic BNN models (BAlexNet, BResNet, and BDenseNet) on an NVM-crossbar-based PIM architecture, trained and validated on the ImageNet dataset. We analyze the impact of various parameters, including input precision and ADC resolution, on both inference accuracy and chip performance, such as chip area, latency, energy consumption, and throughput.
* We analyze the relationship between inference accuracy, input precision, and ADC resolution, and demonstrate that increasing the ADC resolution and input precision can generally result in higher accuracy but lower throughput. Our study shows that an ADC resolution of 8-bit with an input precision of 4-bit offers a negligible accuracy loss compared to the same BNN model evaluated on a GPU with float32 first-layer activation, while achieving a better energy efficiency trade-off.
* We identify the hardware components that bottleneck the performance of the NVM crossbar architecture. Specifically, our work demonstrates that the area of the ADC grows exponentially with increasing resolution, becoming the dominant part of the chip in terms of area. Additionally, we find that buffer latency and interconnect dynamic energy are major contributors to latency and energy consumption, respectively.
* We demonstrate how different model structures impact inference accuracy and hardware performance.
Our findings reveal that for deeper models, the pipelined architecture of the NVM accelerator can significantly reduce latency, but large fully-connected layers become the bottleneck of chip performance, especially in terms of latency.

## II Background

### _Binary Neural Network_

A binary neural network [8, 9, 10, 11, 12, 13] is a type of neural network with binary weights and activations. In a BNN, each weight and activation value is either +1 or -1, which takes only 1 bit of memory, rather than a floating-point or integer value, which takes 32 or 16 bits. The use of binary weights and activations in BNNs offers several advantages over traditional neural networks, including reduced memory requirements, faster processing speed, and improved energy efficiency. Additionally, BNNs have been shown to be more robust to noisy inputs and adversarial attacks than traditional neural networks, due to the reduced sensitivity of binary values to small perturbations in the input. However, BNNs also have some limitations compared to traditional neural networks. The binary weights and activations result in a loss of expressiveness and accuracy compared to their higher bit-precision counterparts. Additionally, training BNNs can be challenging, as the non-differentiable binary activation function makes it difficult to perform back-propagation and update the weights.

### _Processing-in-Memory_

Processing-in-memory (PIM) architecture is a computer architecture that endows the memory module with computing capability [26, 27, 28]. One of the main benefits of PIM is reduced data movement and communication overhead. In traditional architectures, data must be transferred from memory to the processor for processing, which significantly increases energy consumption and latency. PIM eliminates this transfer, as the data is processed directly in the memory, hence improving performance and energy efficiency. In this paper, we focus on an emerging PIM device known as the NVM crossbar.

#### Ii-B1 Embedded Non-volatile Memory

Non-volatile memory (NVM) is a type of memory that can retain data even when power is not supplied. Several emerging NVM technologies, such as resistive random access memory (RRAM), phase change random access memory (PCM), and magnetoresistive random access memory (MRAM), offer comparable or shorter latency than dynamic random access memory (DRAM), along with higher density than DRAM and static random access memory (SRAM) [29]. Embedded non-volatile memory (eNVM) specifically refers to NVM integrated into a system-on-chip (SoC) or an application-specific integrated circuit (ASIC) [30, 31, 32]. In addition to the advantages of NVM, eNVM can be manufactured using standard CMOS processes, making it relatively low-cost and easy to integrate into existing designs. These benefits make eNVM-based crossbars an attractive option for ML workloads. In this paper, we focus on RRAM, which uses a material that can be switched between two or more resistance states to store one or more bits of information [33, 4, 32]. We use the 22nm RRAM model provided in NeuroSim for our benchmark.

#### Ii-B2 NVM Crossbar

The NVM crossbar [1, 2, 3, 4] is a type of PIM architecture that enables parallel processing and data storage by integrating memory and computation functions within a single device. In the crossbar architecture, the memory cells are arranged in a two-dimensional array of rows and columns, and each cell is connected to both a _wordline_ and a _bitline_.
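The following toy numpy model sketches how such an array multiplies a binary weight matrix with an input vector, as the next paragraphs describe in terms of Ohm's and Kirchhoff's laws. All parameter values (conductances, ADC range) are illustrative assumptions, not NeuroSim's circuit model.

```python
import numpy as np

def crossbar_mxv(weights, inputs, adc_bits=8, g_on=1.0, g_off=0.0):
    """Toy NVM-crossbar MxV: binary weights {-1, +1} are mapped onto a
    positive and a negative conductance per weight (two cells, in the
    spirit of the XnorParallel mapping), wordline voltages encode the
    inputs, and the summed bitline currents are digitized by an ADC of
    limited resolution."""
    g_pos = np.where(weights > 0, g_on, g_off)  # cells storing +1
    g_neg = np.where(weights < 0, g_on, g_off)  # cells storing -1
    currents = (g_pos - g_neg) @ inputs         # analog accumulation

    # Uniform ADC over the observed current range
    levels = 2 ** adc_bits
    lo, hi = currents.min(), currents.max()
    step = (hi - lo) / (levels - 1) if hi > lo else 1.0
    return lo + np.round((currents - lo) / step) * step

w = np.sign(np.random.randn(64, 128))  # +1/-1 weights
x = np.random.rand(128)                # wordline voltages
y = crossbar_mxv(w, x, adc_bits=5)
```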
One of the main advantages of a crossbar architecture is its ability to perform massively parallel operations, which can greatly accelerate computation and reduce power consumption. In addition, the crossbar can be implemented using a variety of NVM technologies, such as RRAM, PCM, or MRAM, which can provide both high-density and low-power storage. Matrix-vector multiplication is usually the dominant computation in ML workloads and can be done in parallel through crossbar devices [21, 34, 2], with each crosspoint storing one weight. With the input activations applied as voltages to the horizontal lines, the vector-matrix multiplication is automatically done through the principles of Ohm's law and Kirchhoff's current law. By sensing the amplitude of the output current at each vertical line, we can read out the vector-matrix multiplication results in parallel.

#### Ii-B3 NeuroSim

NeuroSim is a simulation platform designed for modeling and analyzing the performance of deep neural network (DNN) accelerators adopting NVM-crossbar-based architectures [20, 22, 23]. NeuroSim provides hardware models that cover different levels of abstraction, including memory, circuit, and chip architecture. Compared to other open-source simulators such as CrossSim [14], PytorX [15], and MNSIM [18], NeuroSim offers more detailed circuit-level models, allowing for accurate estimation of inference accuracy and hardware performance, including area, latency, and energy consumption. However, it is important to note that one limitation of NeuroSim is that the accelerator floorplans generated for different neural networks are model-specific.

## III Benchmark Design

In this section, we present our benchmark design, which involves three BNN models (BAlexNet, BResNet18, and BDenseNet28) and two focuses: inference accuracy and hardware performance. First, we introduce the models used in our experiments, and then we detail our benchmark design and methodology.

### _Models_

NeuroSim provides a PyTorch wrapper for simulating the inference accuracy of different neural network models. We build BAlexNet, BResNet18, and BDenseNet28 in PyTorch, referring to their TensorFlow versions from Larq Zoo [35].

#### Iii-A1 AlexNet

AlexNet [36] is a convolutional neural network (CNN) architecture consisting of five convolution layers and three large fully-connected layers. The convolution layers employ a combination of filters of varying sizes and strides, which are then followed by max-pooling layers. The fully-connected layers are succeeded by a softmax layer that produces the class probabilities.

#### Iii-A2 ResNet

ResNet [37], short for "Residual Network", is a CNN architecture that introduces residual connections, which allow information to be passed directly from one layer to another. This is done by adding the output of a previous layer to the input of a subsequent layer, allowing the network to learn with a much deeper size and produce better accuracy. Various versions of ResNet exist with differing numbers of layers; in our benchmark, we employ ResNet18.

#### Iii-A3 DenseNet

DenseNet [38], short for "Densely Connected CNN", is a deep neural network architecture that introduces densely connected blocks, which are made up of several convolution layers with a fixed number of filters. Each layer receives the feature maps of all preceding layers as inputs, concatenated together along the depth dimension.
This dense connectivity pattern allows the network to reuse features learned by earlier layers, which results in better parameter efficiency and gradient flow. In our benchmark, we utilize DenseNet28.

### _Inference Accuracy_

Accuracy is one of the biggest concerns for BNNs, while model inference is the most common use case for edge machine learning devices. In order to get realistic inference accuracy results for BNN models on large datasets, we first train all three models on ImageNet using binary weights for all layers and binary activations for all but the first layer. Next, we evaluate the trained model using the NeuroSim simulator by replacing key functions such as nn.conv2d with APIs provided by NeuroSim that account for hardware effects. However, since NeuroSim is not specifically designed for BNNs, it employs a high-precision floating-point input for the first layer, with bit-serialization modules used to process activations in the subsequent layers. To better simulate BNNs, we introduce a bit-serialization module to process the input of the first layer and eliminate this module from all subsequent layers, given that the activations of these layers are already binarized. We also employ dynamic quantization of the input data. To further investigate, we focus on the two parameters with the most significant impact on accuracy: first-layer input precision and the resolution of the ADCs.

### _Hardware Performance_

To quantify the performance of BNNs deployed in NVM crossbar architectures, we first provide layer-level model structures for NeuroSim, since it is a model-specific simulator. Each layer includes the dimensions of its input activation and weight, as well as parameters for any potential subsequent pooling and activation module (e.g., ReLU). Furthermore, we collect the intermediate activation and weight data for each layer from the trained model, as this information is crucial for NeuroSim to generate more precise estimates based on the distribution of each layer's input activation and weight. NeuroSim offers different modes catering to distinct design considerations. In our benchmark, we select the _XnorParallel_ mode, which employs two memory cells to represent a weight of either +1 or -1, making it suitable for BNNs. Moreover, we activate the pipeline mode in NeuroSim to leverage the pipelined architecture of the chip, thereby achieving enhanced throughput. To demonstrate the impact of ADC resolution on both accuracy and hardware performance, we perform tests on each model using various ADC resolutions and collect their hardware performance data, including chip area, latency, energy consumption, throughput, and efficiency.
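As a concrete illustration of the first-layer bit-serialization with dynamic input quantization described above, consider the following numpy sketch; it is a simplified stand-in for our NeuroSim modifications and assumes non-negative inputs.

```python
import numpy as np

def bit_serial_mxv(weights, inputs, in_bits=4):
    """First-layer MxV with a bit-serialized input: the in_bits-bit input
    is applied one bit-plane per cycle, so each cycle only drives binary
    wordline patterns, and the partial products are shift-and-added."""
    # Dynamic quantization of the (assumed non-negative) input range
    scale = inputs.max() / (2 ** in_bits - 1)
    q = np.round(inputs / scale).astype(np.int64)

    acc = np.zeros(weights.shape[0])
    for b in range(in_bits):
        bit_plane = (q >> b) & 1               # one binary input vector
        acc += (weights @ bit_plane) * (1 << b)  # shift-and-add
    return acc * scale                          # undo the quantization scale

w = np.sign(np.random.randn(16, 32))  # binary first-layer weights
x = np.random.rand(32)                # e.g. normalized pixel values
print(np.allclose(bit_serial_mxv(w, x, in_bits=8), w @ x, atol=0.1))
```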
## IV Experiments

This section provides a detailed explanation of our experiment parameters and results, divided into four parts: inference accuracy, area, latency, and energy. Each part has subsections for the three BNN models plus a Put-It-All-Together subsection that compares the results of the three models. Finally, we analyze the throughput and efficiency of the three models.

### _Inference Accuracy_

#### Iv-A1 AlexNet

Table I presents the inference accuracy for Binary AlexNet across different ADC resolutions (3-8) and first-layer input precisions (1-8). The same data is used to generate Figure 1 and Figure 2, which visually depict the trend of the accuracy change. In Figure 1, the accuracy for an input precision of 1 increases with ADC resolution but remains low even at the highest resolution (7.55% for ADC resolution = 8). Accuracy improves significantly with an input precision of 2, reaching a maximum of 30.71%. With an input precision of 3, accuracy reaches an asymptote, and increasing the input precision beyond 3 has little effect on accuracy. Figure 2 also shows that accuracy monotonically increases with ADC resolution, although this trend stops when the ADC resolution reaches 7. Overall, an input precision of 4-bit with an ADC resolution of 8-bit provides an acceptable accuracy of 35.60%, which is only 0.8% lower than the maximum accuracy of 35.9% when both parameters are set to 8, and only 1.4% lower than the accuracy of 36.11% achieved on a GPU with float32 first-layer precision.

\begin{table} \begin{tabular}{c|c c c c c c} \hline \hline **Input** & \multicolumn{6}{c}{**ADC Resolution**} \\ **Precision** & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline 2 & 0.17 & 1.43 & 3.64 & 6.07 & 7.67 & 7.55 \\ 3 & 0.21 & 4.35 & 13.81 & 29.30 & 34.84 & 35.60 \\ 4 & 0.16 & 4.95 & 15.51 & 30.76 & 35.57 & 35.76 \\ 5 & 0.15 & 5.18 & 15.96 & 31.06 & 35.52 & 36.11 \\ 6 & 0.18 & 5.24 & 16.00 & 30.99 & 35.54 & 35.84 \\ 7 & 0.29 & 5.31 & 16.18 & 31.28 & 35.43 & 35.90 \\ 8 & 0.31 & 5.31 & 16.31 & 31.22 & 35.56 & 35.90 \\ \hline \hline \end{tabular} \end{table} TABLE I: **Inference accuracy for Binary AlexNet with different input precisions and ADC resolutions — The inference accuracy on GPU with float32 first-layer activation is 36.11%.**

Fig. 1: **Binary AlexNet inference accuracy vs. ADC resolution — Each line corresponds to a different first-layer input precision.**

Fig. 2: **Binary AlexNet inference accuracy vs. first-layer input precision — Each line corresponds to a different ADC resolution.**

#### Iv-A2 ResNet

Table II lists the inference accuracy for Binary ResNet according to different ADC resolutions (3-8) and first-layer input precisions (1-8). The same data is used to draw Figure 3 and Figure 4. Similar to the results of AlexNet, in Figure 3, the accuracy is still quite low when the input precision is 1. It improves greatly when the input precision increases to 2, where the maximum accuracy reaches 39.72%. Overall, an input precision of 4-bit with an ADC resolution of 8-bit provides an acceptable accuracy of 53.61%, which is only 0.4% lower than the maximum accuracy of 53.84% when both parameters are set to 8, and only 0.7% lower than the accuracy of 54.02% achieved on a GPU with float32 first-layer precision.

\begin{table} \begin{tabular}{c|c c c c c c} \hline \hline **Input** & \multicolumn{6}{c}{**ADC Resolution**} \\ **Precision** & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline 2 & 0.60 & 15.23 & 18.82 & 38.60 & 36.08 & 39.72 \\ 3 & 0.55 & 22.35 & 24.17 & 45.86 & 48.36 & 51.81 \\ 4 & 0.52 & 25.08 & 25.67 & 46.79 & 51.08 & 53.61 \\ 5 & 0.70 & 26.04 & 26.04 & 46.89 & 51.60 & 53.98 \\ 6 & 0.79 & 26.32 & 26.61 & 47.19 & 51.93 & 53.81 \\ 7 & 0.65 & 26.51 & 26.68 & 47.08 & 52.02 & 53.82 \\ 8 & 0.77 & 26.58 & 26.42 & 47.00 & 51.88 & 53.84 \\ \hline \hline \end{tabular} \end{table} TABLE II: **Inference accuracy for Binary ResNet with different input precisions and ADC resolutions — The inference accuracy on GPU with float32 first-layer activation is 54.02%.**

Fig. 3: **Binary ResNet inference accuracy vs. ADC resolution — Each line corresponds to a different first-layer input precision.**

Fig. 4: **Binary ResNet inference accuracy vs. first-layer input precision — Each line corresponds to a different ADC resolution.**

#### Iv-A3 DenseNet

Table III presents the inference accuracy for Binary DenseNet across different ADC resolutions (3-8) and first-layer input precisions (1-8).
Similar to the results of ResNet, the accuracy is still quite low when the input precision is 1. It improves greatly when the input precision increases to 2, where the maximum accuracy reaches 40.85%. Overall, an input precision of 4-bit with an ADC resolution of 8-bit provides an acceptable accuracy of 55.88%, which is only 0.08% lower than the maximum accuracy of 55.93% when both parameters are set to 8, and only 0.5% lower than the accuracy of 56.15% achieved on a GPU with float32 first-layer precision.

\begin{table} \begin{tabular}{c|c c c c c c} \hline \hline **Input** & \multicolumn{6}{c}{**ADC Resolution**} \\ **Precision** & 3 & 4 & 5 & 6 & 7 & 8 \\ \hline 1 & 0.24 & 1.89 & 3.34 & 5.76 & 5.14 & 5.40 \\ 2 & 0.75 & 22.65 & 20.64 & 32.20 & 40.67 & 40.85 \\ 3 & 0.91 & 32.98 & 26.39 & 42.91 & 54.05 & 54.15 \\ 4 & 0.84 & 34.82 & 27.04 & 45.49 & 55.42 & 55.88 \\ 5 & 0.85 & 35.22 & 27.56 & 46.19 & 55.52 & 56.22 \\ 6 & 1.04 & 35.41 & 27.78 & 46.58 & 55.44 & 56.18 \\ 7 & 0.85 & 36.07 & 27.47 & 46.49 & 55.58 & 56.04 \\ 8 & 0.95 & 35.92 & 27.72 & 46.37 & 55.68 & 55.93 \\ \hline \hline \end{tabular} \end{table} TABLE III: **Inference accuracy for Binary DenseNet with different input precisions and ADC resolutions — The inference accuracy on GPU with float32 first-layer activation is 56.15%.**

#### Iv-A4 Put It All Together

The maximum accuracy achieved by the three models exhibits a monotonic increase: 35.90% for AlexNet, 53.84% for ResNet, and 55.93% for DenseNet. Although the values differ, the peak accuracy for each model is attained when the input precision and ADC resolution are set to their maximum values (8-bit). Overall, an input precision of 4-bit with an ADC resolution of 8-bit provides negligible accuracy loss compared to the same BNN model evaluated on a GPU with float32 first-layer activation.

### _Area_

#### Iv-B1 AlexNet

Figure 5 displays the chip area and its components as a function of ADC resolution for Binary AlexNet. It is evident that the total area of the ADC increases at the fastest rate as the ADC resolution increases, and this increase is the primary reason for the growth in the chip area. This is reasonable because a higher ADC resolution necessitates a larger ADC circuit area. The total accumulation circuit area on the chip, although having the lowest value among these components, increased by 76% from 1.05 \(mm^{2}\) to 1.85 \(mm^{2}\). The other peripheries, which occupy most of the chip area when the ADC resolution is low, show only a small increase, resulting in a much smaller portion compared to the ADC area when the ADC resolution is 8. The CIM (Computing-in-Memory) array area, which holds the crossbar, remains exactly constant since it is not impacted by the ADC circuits. As a result, the interconnect circuit (IC) area, although initially smaller than the CIM array area, becomes larger than it when the ADC resolution is greater than 7.

Fig. 5: **Chip area breakdown for Binary AlexNet with different ADC resolutions — Chip means the total chip area. CIM, IC, ADC, Acc mean the CIM array area, interconnect circuit area, ADC area, and accumulation circuit area on the chip, respectively. Other means other peripheries (e.g. decoders, mux, switch matrix, buffers, pooling, and activation units) on the chip. Note that the figure is in log scale.**

#### Iv-B2 ResNet

The total ADC area and chip area exhibit a similar trend to that of AlexNet, but with some notable differences.
The total CIM array area remains constant and is roughly half the size of AlexNet's CIM array (2.34 \(mm^{2}\) vs. 4.32 \(mm^{2}\)), despite ResNet having more layers. This is due to AlexNet having three large dense layers (9216 * 4096, 4096 * 4096, 4096 * 1000) at the end, which require more parameters to be stored in the CIM array. In contrast, ResNet only has one smaller dense layer (512 * 1000) at the end, resulting in fewer parameters to be stored in the CIM array. Consequently, the total IC area of ResNet is larger than the CIM array area from the start, when the ADC resolution is 3.

#### Iv-B3 DenseNet

Despite having a more intricate and deeper network architecture than ResNet, DenseNet exhibits a larger CIM array area of 4.54 \(mm^{2}\), which is almost twice the size of ResNet's 2.34 \(mm^{2}\) and slightly larger than AlexNet's 4.32 \(mm^{2}\). The total accumulation circuit and other periphery areas scale similarly to the other models, yet the total IC area remains only marginally larger than that of ResNet across all ADC resolution levels (2.66 \(mm^{2}\) vs. 2.52 \(mm^{2}\) for ADC resolution = 3 and 8.66 \(mm^{2}\) vs. 8.20 \(mm^{2}\) for ADC resolution = 8). Consequently, the IC area is initially smaller than the CIM array area and surpasses it only when the ADC resolution exceeds 6.

#### Iv-B4 Put It All Together

Table IV lists the precise values of the chip area as well as its components as a function of ADC resolution for AlexNet, ResNet, and DenseNet. Due to the three large dense layers at the end of AlexNet, the chip area of AlexNet, although still smaller than DenseNet's, is larger than ResNet's, despite the fact that AlexNet is much shallower than the other two. The CIM array area shows a similar pattern, with ResNet's being the smallest and DenseNet's the largest. The total ADC area is the main driver of the increase in chip area for all three models as the ADC resolution grows. For all three models alike, the ADC area grows exponentially (69x) with ADC resolution, and its share of the chip grows from 13% to 89%. The total IC area, on the other hand, although increasing with ADC resolution, has a very similar value among the three models, which means the IC area is almost independent of the model structure.

\begin{table} \begin{tabular}{c c c c c c c c} \hline \hline **BNN model** & **ADC resolution** & **Chip area** (\(mm^{2}\)) & **Total CIM array area** (\(mm^{2}\)) & **Total IC area** (\(mm^{2}\)) & **Total ADC area** (\(mm^{2}\)) & **Total accumulation circuits** (\(mm^{2}\)) & **Other peripheries** (\(mm^{2}\)) \\ \hline AlexNet & 3 & 55.97 & 4.32 & 2.54 & 7.40 & 1.05 & 40.65 \\ & 4 & 64.92 & 4.32 & 2.74 & 15.87 & 1.21 & 40.78 \\ & 5 & 82.62 & 4.32 & 3.10 & 32.79 & 1.37 & 41.04 \\ & 6 & 130.04 & 4.32 & 3.90 & 78.56 & 1.53 & 41.72 \\ & 7 & 236.97 & 4.32 & 5.29 & 182.39 & 1.69 & 43.27 \\ & 8 & 573.53 & 4.32 & 8.26 & 510.94 & 1.85 & 48.15 \\ \hline ResNet & 3 & 32.73 & 2.34 & 2.52 & 4.02 & 0.57 & 23.28 \\ & 4 & 37.67 & 2.34 & 2.72 & 8.60 & 0.65 & 23.35 \\ & 5 & 47.43 & 2.34 & 3.08 & 17.78 & 0.74 & 23.49 \\ & 6 & 73.51 & 2.34 & 3.87 & 42.60 & 0.83 & 23.86 \\ & 7 & 132.11 & 2.34 & 5.25 & 98.90 & 0.91 & 24.70 \\ & 8 & 315.95 & 2.34 & 8.20 & 277.06 & 1.00 & 27.35 \\ \hline DenseNet & 3 & 62.73 & 4.54 & 2.66 & 7.77 & 1.08 & 46.68 \\ & 4 & 72.13 & 4.54 & 2.87 & 16.65 & 1.25 & 46.82 \\ & 5 & 90.70 & 4.54 & 3.25 & 34.41 & 1.41 & 47.09 \\ & 6 & 140.45 & 4.54 & 4.09 & 82.43 & 1.58 & 47.81 \\ & 7 & 252.65 & 4.54 & 5.55 & 191.38 & 1.75 & 49.43 \\ & 8 & 605.80 & 4.54 & 8.66 & 536.12 & 1.91 & 54.56 \\ \hline \hline \end{tabular} \end{table} TABLE IV: **Chip area breakdown for different BNN models with different ADC resolutions — Accumulation circuits include adders and shift/adds on the subarray level, as well as accumulation units on the PE/Tile/Global level. Other peripheries include decoders, mux, switch matrix, buffers, pooling, activation units, etc.**

### _Latency_

#### Iv-C1 AlexNet

The per-image inference latency and its components as functions of ADC resolution for Binary AlexNet are shown in Figure 6. The main contributor to the total inference latency is the buffer read latency, which increases from 8.05 \(ms\) to 12.1 \(ms\), resulting in a similar increasing trend in the overall latency.
Although the buffer read latency is larger than 8 \(ms\) even at the beginning, the second major component, IC read latency,
2307.03280
Neural network decoder for near-term surface-code experiments
Neural-network decoders can achieve a lower logical error rate compared to conventional decoders, like minimum-weight perfect matching, when decoding the surface code. Furthermore, these decoders require no prior information about the physical error rates, making them highly adaptable. In this study, we investigate the performance of such a decoder using both simulated and experimental data obtained from a transmon-qubit processor, focusing on small-distance surface codes. We first show that the neural network typically outperforms the matching decoder due to better handling of errors leading to multiple correlated syndrome defects, such as $Y$ errors. When applied to the experimental data of [Google Quantum AI, Nature 614, 676 (2023)], the neural network decoder achieves logical error rates approximately $25\%$ lower than minimum-weight perfect matching, approaching the performance of a maximum-likelihood decoder. To demonstrate the flexibility of this decoder, we incorporate the soft information available in the analog readout of transmon qubits and evaluate the performance of this decoder in simulation using a symmetric Gaussian-noise model. Considering the soft information leads to an approximately $10\%$ lower logical error rate, depending on the probability of a measurement error. The good logical performance, flexibility, and computational efficiency make neural network decoders well-suited for near-term demonstrations of quantum memories.
Boris M. Varbanov, Marc Serra-Peralta, David Byfield, Barbara M. Terhal
2023-07-06T20:31:25Z
http://arxiv.org/abs/2307.03280v2
# Neural network decoder for near-term surface-code experiments

###### Abstract

Neural-network decoders can achieve a lower logical error rate compared to conventional decoders, like minimum-weight perfect matching, when decoding the surface code. Furthermore, these decoders require no prior information about the physical error rates, making them highly adaptable. In this study, we investigate the performance of such a decoder using both simulated and experimental data obtained from a transmon-qubit processor, focusing on small-distance surface codes. We first show that the neural network typically outperforms the matching decoder due to better handling of errors leading to multiple correlated syndrome defects, such as \(Y\) errors. When applied to the experimental data of [Google Quantum AI, Nature 614, 676 (2023)], the neural network decoder achieves logical error rates approximately 25% lower than minimum-weight perfect matching, approaching the performance of a maximum-likelihood decoder. To demonstrate the flexibility of this decoder, we incorporate the soft information available in the analog readout of transmon qubits and evaluate the performance of this decoder in simulation using a symmetric Gaussian-noise model. Considering the soft information leads to an approximately 10% lower logical error rate, depending on the probability of a measurement error. The good logical performance, flexibility, and computational efficiency make neural network decoders well-suited for near-term demonstrations of quantum memories.

+ Footnote †: Corresponding author: [email protected]

## I Introduction

Quantum computers are anticipated to outperform classical computers in solving specific problems, such as integer factorization [1] and quantum simulation [2]. However, for a quantum computer to perform any meaningful computation, it has to be able to execute millions of operations, requiring error rates per operation lower than \(10^{-10}\) [3; 4]. Despite a valiant experimental effort aimed at enhancing operational performance, state-of-the-art processors typically exhibit error rates per operation around \(10^{-3}\) [5; 6; 7; 8; 9; 10; 11; 12; 13; 14], which is far from what is needed to perform any useful computation. Fortunately, quantum error correction (QEC) provides a means to reduce the error rates, albeit at the cost of additional overhead in the required physical qubits [15; 16; 17]. Two-dimensional stabilizer codes [18], such as the surface codes [19], have emerged as a prominent approach to realizing fault-tolerant computation due to their modest connectivity requirements and high tolerance to errors [20; 21; 22]. These codes encode the logical information into an array of physical qubits, referred to as data qubits. Ancilla qubits are used to repeatedly measure parities of sets of neighboring data qubits. Changes between consecutive measurement outcomes, which are typically referred to as syndrome defects, indicate that errors have occurred. A classical decoder processes this information and aims at inferring the most likely correction.
The increased number of available qubits [23; 24; 25; 26] and the higher fidelities of physical operations [5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23; 24; 27; 28; 29; 30; 31; 32] in modern processors have enabled several experiments employing small-distance codes to demonstrate the capacity to detect and correct errors [25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45]. In a recent milestone experiment, the error rate per QEC round of a surface-code logical qubit was reduced by increasing the code distance [25], demonstrating the fundamental suppression achieved by QEC. The performance of the decoder directly influences the performance of a QEC code. Minimum-weight perfect matching (MWPM) is a good decoding algorithm for the surface code, which is computationally efficient and, therefore, scalable [46; 47; 48; 49; 20]. Its good performance is ensured under the assumption that the errors occurring in the experiment can be modeled as independent \(X\) and \(Z\) errors [20]. This leads to the MWPM decoder performing worse than decoders based on belief propagation [50; 51; 52; 53] or a (more computationally-expensive) approximate maximum-likelihood decoder based on tensor-network (TN) contraction [54; 55]. A more practical concern is that a decoder relies on a physical error model to accurately infer the most likely correction. Typically, this requires constructing an approximate model and a series of benchmarking experiments to extract the physical error rates. While there are methods to estimate the physical error rates based on the measured defects [56; 57; 25; 38], they typically ignore nonconventional errors like crosstalk or leakage. The presence of these errors can impact both the accuracy with which the physical error rates are estimated from the data and the performance of the decoder itself [57]. An alternative approach to decoding is based on using neural networks (NN) to infer the most likely correction given a set of measured defects [58; 59; 60; 61; 62; 63; 64; 65; 66; 67; 68; 69]. These decoders do not require any prior information about the error model and therefore alleviate the need to construct any error model, making them highly adaptable. This flexibility comes at the cost of requiring a significant amount of data for training the network and optimizing the hyper-parameters to ensure that the optimal performance of the decoder is reached during training. Despite the potential issues during the training, it has been shown that they can match and generally exceed the performance of MWPM decoders, in several cases achieving near-optimal performance [61, 63]. Depending on the NN architecture employed, these decoders can be scalable and run in real time [65, 75, 76, 77, 78]. While decoders based on recurrent NNs are more computationally expensive, they enable the decoding of experiments performing a variable number of stabilizer measurement rounds [61, 63, 68], making them well-suited for decoding near-term memory [61] and stability experiments [80]. In this work, we assess the performance of a neural-network decoder using both simulated and experimental data. Our work goes beyond [61] and previous NN decoding works in applying and partially training a NN decoder for the first time on data from a surface-code experiment [25], thus capturing realistic performance and showing the versatility of NN decoders.
In addition, we go beyond [61] in training the NN decoder for a distance-7 surface code and extracting its exponential error suppression factor on simulated data. Thirdly, we show that our NN decoder can be trained with (simulated) soft measurement data and gain a performance enhancement. We begin by simulating the performance of a \(d=3\) surface code using a circuit-level noise model to show that the NN decoder outperforms MWPM by learning to deal with \(Y\) errors, as previous studies have suggested [61]. Next, we investigate the performance of the NN decoder when applied to data from a recent surface code experiment [25]. Due to the limited volume of available experimental data, we train the NN decoder on simulated data generated using an error model based on the measured physical error rates. However, we evaluate the decoder's performance on simulated and experimental data. The NN decoder significantly outperforms MWPM when decoding simulated data and achieves a lower logical error rate for the \(d=5\) code than the constituent \(d=3\) codes. When evaluated on experimental data, the NN decoder achieves a performance approaching that of a tensor-network decoder, which approximates a maximum-likelihood decoder. However, contrary to the finding in [25], the logical error rate observed in the \(d=5\) experiment is higher than the average of the \(d=3\) experiments, which we attribute to either a sub-optimal choice of hyper-parameters or the mismatch between the simulated data that the decoder was trained on and the experimental data. To further explore the performance of NNs, we consider the continuous information available in the measurement outcomes of transmon qubits [81, 82], typically referred to as soft information [83]. By calculating the defect probabilities given the soft outcomes and providing them to the neural network during training and evaluation, we demonstrate that the soft decoder can achieve an approximately 10% lower logical error rate if the measurement error probability is sufficiently high.

## II Background

### The surface code

A (rotated) surface code encodes a single logical qubit into a two-dimensional array of \(n=d\times d\) physical qubits, referred to as data qubits, where \(d\) is the distance of the code. The logical state of the qubit is determined by the stabilizers of the code, which are the weight-four or weight-two \(X\)-type (blue plaquettes) or \(Z\)-type (green plaquettes) Pauli operators, see Figure 1. In addition to the stabilizers, the code is given by a pair of anticommuting logical operators, \(X_{L}\) and \(Z_{L}\), which commute with the code stabilizers. The stabilizers are typically measured indirectly with the help of \(n-1\) ancilla qubits. To perform this measurement, each ancilla coherently interacts with its neighboring data qubits in a specific order [84], after which the ancilla qubit is measured and reset. The stabilizer measurement outcomes are typically referred to as the syndromes and hold information about the errors that have occurred. The full circuits used to perform these measurements are shown in Figure 8. In particular, we use the circuits used in [25], which feature several echo gates used for dynamical decoupling in the experiment, see Section VI.1 for additional details. To characterize the performance of the code, we perform a series of logical memory experiments. In each experiment, the physical qubits are prepared in an eigenstate of either the \(X_{L}\) (resp.
\(Z_{L}\)) logical operator, after which \(N-1\) rounds of stabilizer measurements are executed.

Figure 1: **a** Schematic of a distance \(d=3\) surface-code logical qubit, where 9 data qubits (white circles) store the logical information. 8 ancilla qubits (blue and green circles) are used to measure the \(Z\)-type (green plaquettes) and \(X\)-type (blue plaquettes) stabilizers of the code. Examples of the \(X_{L}\) (yellow) and \(Z_{L}\) (red) logical operators of the code. **b** Illustration of the \(Z\)-type plaquette (left, green) and \(X\)-type (right, blue) plaquette corresponding to the \(ZZZZ\) and \(XXXX\) stabilizer operators measured by each ancilla qubit.

The experiment is concluded by reading out each data qubit in the \(X\) (resp. \(Z\)) basis, which also performs a logical \(X_{L}\) (resp. \(Z_{L}\)) measurement. The goal of each experiment is to maintain the logical state for as many QEC rounds as possible by using error correction, see Section VI.1 for more details. The information about errors is contained in the stabilizer measurement outcome \(m_{r,a}\) of ancilla \(a\) at round \(r\). The final data qubit measurements can also be used to infer a final set of outcomes \(m_{r=N,a}\) for either the \(X\)-type or \(Z\)-type stabilizers. The defects \(d_{r,a}=m_{r,a}\oplus m_{r-1,a}\) isolate the changes in \(m_{r,a}\), such that an error is signaled by an observation of one or more \(d_{r,a}=1\). The choice of initial state and the dynamical decoupling gates can also flip some of the measured \(m_{r,a}\), which is accounted for when calculating \(d_{r,a}\). A decoder processes the observed \(d_{r,a}\) to infer a correction for the measured logical observable. By repeating each experiment many times, we extract the probability of a logical error \(p_{L}(r)\) at QEC round \(r\), from which we calculate the logical fidelity \(F_{L}(r)=1-2p_{L}(r)\), which decays exponentially with the number of executed QEC rounds. We model this decay as \(F_{L}(r)=(1-2\varepsilon_{L})^{r-r_{0}}\), where \(\varepsilon_{L}\) is the logical error rate per QEC round and \(r_{0}\) is a fitting constant. When fitting the decay of \(F_{L}(r)\) to extract \(\varepsilon_{L}\), we start the fit at \(r=3\) to avoid any time-boundary effects that might impact this estimate.
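The defect calculation and the fidelity fit just described can be condensed into a few lines; the following Python sketch (numpy/scipy, with hypothetical array shapes) illustrates both steps.

```python
import numpy as np
from scipy.optimize import curve_fit

def compute_defects(outcomes):
    """XOR consecutive syndrome outcomes m[r, a] into defects
    d[r, a] = m[r, a] ^ m[r-1, a]; the first round is compared against a
    (state-dependent) reference frame, here taken as all zeros."""
    d = outcomes.copy()
    d[1:] ^= outcomes[:-1]
    return d

def fit_logical_error_rate(rounds, fidelity, r_min=3):
    """Fit F_L(r) = (1 - 2*eps_L)**(r - r0), skipping early rounds to
    avoid time-boundary effects."""
    mask = rounds >= r_min
    decay = lambda r, eps_L, r0: (1 - 2 * eps_L) ** (r - r0)
    (eps_L, _), _ = curve_fit(decay, rounds[mask], fidelity[mask], p0=(0.01, 0.0))
    return eps_L

m = np.random.randint(0, 2, size=(25, 8))  # 25 rounds, 8 ancillas (d=3)
defects = compute_defects(m)
```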
### Error models

To explore the performance of the NN decoder, we perform simulations using circuit-level Pauli-noise models. For most of our simulations, we consider a depolarizing circuit-level noise, which is defined as follows:

1. After each single-qubit gate or idling period, with a probability \(p/3\), we apply an error drawn from \(\{X,Y,Z\}\).
2. After each two-qubit gate, with a probability \(p/15\), we apply an error drawn from \(\{I,X,Y,Z\}^{\otimes 2}\backslash\{II\}\).
3. With a probability \(p\), we apply an \(X\) error before each measurement.
4. With a probability \(p\), we apply an \(X\) error after each reset operation or after the qubits are first prepared at the start of an experiment.

In some of our simulations, we consider noise models that are biased to have a higher or a lower probability of applying \(Y\) errors. To construct this model, we define a Y-bias factor \(\eta\) and modify the standard depolarizing circuit-level noise model as follows:

1. After each single-qubit gate or idling period, there is a probability \(\eta p/(\eta+2)\) to apply a \(Y\) error and a probability \(p/(\eta+2)\) to apply an \(X\) or a \(Z\) error.
2. After each two-qubit gate, there is a probability \(\eta p/(7\eta+8)\) of applying an error drawn from \(\mathcal{P}_{B}=\{IY,XY,YI,YX,YY,YZ,ZY\}\) and a probability \(p/(7\eta+8)\) of applying an error drawn from \(\{I,X,Y,Z\}^{\otimes 2}\backslash(\mathcal{P}_{B}\cup\{II\})\).

This biased error model is a generalization of the depolarizing model. In particular, choosing \(\eta=1\) makes this noise model equivalent to the depolarizing one. On the other hand, when \(\eta=0\), the model leads to only \(X\) or \(Z\) errors applied after operations. In the other limiting case, as \(\eta\rightarrow\infty\), the model applies only \(Y\) errors after idling periods and gates. Given that the error probability is the same across all operations of the same type, we will refer to these error models as uniform circuit-level noise models. Finally, we also perform simulations of the recent experiment conducted by Google Quantum AI, using the error model which they provided together with the experimental data [25]. This is once again a circuit-level Pauli-noise model similar to the ones presented above, but the probability of a depolarizing error after each operation is based on the measured physical error rates. We will refer to this model as the experimental circuit-level noise model. We use _stim_ [85] to perform the stabilizer simulations. We have written a wrapper package that helps with constructing the circuit for each experiment, which is available in [86]. We use _pymatching_ [48] for the MWPM decoding. The weights used in the MWPM decoder are directly extracted from the sampled circuit using the built-in integration between _stim_ and _pymatching_.
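As an illustration of this stim–pymatching pipeline, the sketch below samples a memory experiment and decodes it with MWPM. It uses stim's built-in circuit generator with a depolarizing noise parameter as a stand-in for our wrapper package [86] and the experiment-specific circuits (which include the dynamical-decoupling echo gates).

```python
import numpy as np
import stim
import pymatching

# d=3 rotated surface-code memory experiment, uniform depolarizing noise
circuit = stim.Circuit.generated(
    "surface_code:rotated_memory_z",
    distance=3,
    rounds=25,
    after_clifford_depolarization=1e-3,
)

# Sample defects and the true value of the logical observable
sampler = circuit.compile_detector_sampler()
defects, observables = sampler.sample(100_000, separate_observables=True)

# MWPM weights come directly from the circuit's detector error model
dem = circuit.detector_error_model(decompose_errors=True)
matching = pymatching.Matching.from_detector_error_model(dem)
predictions = matching.decode_batch(defects)

logical_error_rate = np.mean(predictions[:, 0] != observables[:, 0])
print(f"p_L after 25 rounds: {logical_error_rate:.4f}")
```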
### Neural network architecture

Here we describe the NN architecture that we employ in this work, which nearly exactly follows the one proposed in [61, 63]. Many NN decoders studied previously are based on feed-forward or convolutional NN architectures. These decoders can generally decode experiments running a fixed number of QEC rounds. Decoders based on recurrent NN architectures, on the other hand, can learn the temporal correlations in the data, allowing them to directly process experiments performing a variable number of QEC rounds. We have used the _TensorFlow_ library [87] to implement the NN architecture, with the source code of the decoder available in [88]; the parameters used for each training are listed in Table 1, while the scripts that perform the training are available upon request. The NN architecture takes as input the defects \(d_{a,r}\) with \(r=1,2,\ldots,N\). The decoder solves a binary classification problem and determines whether a correction of the logical observable is required based on the observed defects. In practice, the architecture is based on a two-headed network that makes two predictions \(p_{\text{main}}\) and \(p_{\text{aux}}\), which are used to improve the training of the network, see Fig. 2. To train a decoder, a series of memory experiments are performed. Since the logical qubit is prepared in a known logical state and measured at the end of each experiment, it is possible to extract the actual value \(p_{\text{true}}\in\{0,1\}\) of whether a correction is required or not. In particular, the cost function \(I\) that the network attempts to minimize during training is the weighted sum of the binary cross-entropies between each prediction and \(p_{\text{true}}\), expressed as \[I=H(p_{\text{main}},p_{\text{true}})+w_{a}H(p_{\text{aux}},p_{\text{true}}),\] where \(w_{a}\) is a weight that is typically chosen as \(w_{a}=0.5\) in our runs, while \[H(p_{i},p_{j})=-p_{i}\log p_{j}-(1-p_{i})\log(1-p_{j})\] is the binary cross-entropy function. The choice behind this loss function is elaborated below.

Figure 2 schematically illustrates the architecture of the recurrent network. The recurrent body of the neural network consists of two stacked long short-term memory (LSTM) layers. Each LSTM layer is defined by a pair of internal memory states: a short-term memory, referred to as the hidden state, and a long-term memory, referred to as the cell state. Here, we use the same internal state size \(N_{L}\) for both LSTM layers [89, 90], with \(N_{L}=64,96,128\) for surface codes of distance \(d=3,5,7\), unless otherwise specified. The LSTM layers receive the defects for each QEC round as input, calculated from both the \(X\)-type and the \(Z\)-type stabilizer measurement outcomes. The first LSTM layer outputs a hidden state for each QEC round, which is then provided as input to the second LSTM layer, which outputs only its final hidden state. A rectified linear unit (ReLU) activation function is applied to the output of the second LSTM layer before being passed along to each of the two heads of the network. The heads of the network are feed-forward evaluation networks consisting of a single hidden layer of size \(N_{L}\) using the ReLU activation function and an output layer using the sigmoid activation function, which maps the hidden-layer output to a probability used for binary classification. The output of the recurrent part of the network is directly passed to the lower head of the network, which uses this information to predict a probability \(p_{\text{aux}}\) of a logical error. The upper head also considers the defects inferred from the data qubit measurements, which are combined with the recurrent output and provided as input. Therefore, unlike the lower head, the upper one uses the full information about the errors that have occurred when making its prediction \(p_{\text{main}}\) of whether a logical error occurred. Both \(p_{\text{main}}\) and \(p_{\text{aux}}\) are used when training the network, which helps the neural network generalize to longer input sequences. However, only \(p_{\text{main}}\) is used when evaluating the performance of the decoder. We provide additional details about the training procedure in Section VI.2 and list the hyper-parameters of the network in Table 1.

Figure 2: Schematic of the recurrent NN architecture used in this work, following the design proposed in [63]. The inputs to the network are the set of defects \(\{d_{a,r}\}\), which are calculated from the measurement outcomes of each ancilla qubit \(a\) at QEC round \(r=1,2,\ldots,N-1\), and the final defects \(\{d_{a,N}\}\), which are inferred from data qubit measurements. The time-invariant input \(\{d_{a,r}\}\) is provided to the recurrent part of the network, consisting of two stacked LSTM layers (yellow rectangles) and a ReLU activation layer (orange rectangle). The recurrent output is then passed to the two heads of the decoder, each consisting of an evaluation layer (blue rectangle) that predicts a probability of a logical error. The lower head takes as input only the recurrent output and outputs a probability \(p_{\text{aux}}\). The upper head, on the other hand, combines (teal rectangle) the recurrent output with \(\{d_{a,N}\}\) and outputs a probability \(p_{\text{main}}\). Arrows indicate the flow of information through the network.
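A minimal Keras sketch of this two-headed recurrent architecture is given below. The layer sizes and loss weights follow the text (\(N_{L}\), \(w_{a}=0.5\)); everything else (optimizer, input shapes) is an assumption, and the authors' actual implementation is the one available in [88].

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_decoder(n_ancilla, n_final, n_l=64, w_a=0.5):
    """Two-headed recurrent decoder: two stacked LSTMs of size n_l
    (64/96/128 for d=3/5/7 per the text), a ReLU, and two sigmoid heads
    trained with a weighted sum of binary cross-entropies."""
    defects = tf.keras.Input(shape=(None, n_ancilla), name="defects")
    final_defects = tf.keras.Input(shape=(n_final,), name="final_defects")

    x = layers.LSTM(n_l, return_sequences=True)(defects)
    x = layers.LSTM(n_l, return_sequences=False)(x)  # final hidden state only
    x = layers.ReLU()(x)

    # Auxiliary (lower) head: recurrent output only
    aux = layers.Dense(n_l, activation="relu")(x)
    p_aux = layers.Dense(1, activation="sigmoid", name="p_aux")(aux)

    # Main (upper) head: recurrent output combined with final-round defects
    main = layers.Concatenate()([x, final_defects])
    main = layers.Dense(n_l, activation="relu")(main)
    p_main = layers.Dense(1, activation="sigmoid", name="p_main")(main)

    model = tf.keras.Model([defects, final_defects], [p_main, p_aux])
    model.compile(
        optimizer="adam",
        loss={"p_main": "binary_crossentropy", "p_aux": "binary_crossentropy"},
        loss_weights={"p_main": 1.0, "p_aux": w_a},
    )
    return model

model = build_decoder(n_ancilla=8, n_final=4)  # illustrative d=3 shapes
```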
## III Results

### Performance on circuit-level noise simulations

We first demonstrate that the NN decoder can achieve a lower logical error rate than the MWPM decoder by learning error correlations between the defects, which are otherwise ignored by the MWPM decoder. We consider the \(Y\)-biased circuit-level noise model described previously, parameterized by the bias \(\eta\) towards \(Y\) errors and a probability \(p=0.001\) of inserting an error after each operation. We use this noise model to simulate the performance of a \(d=3\) surface-code quantum memory experiment in the \(Z\)-basis, initially preparing either \(\ket{0}^{\otimes n}\) or \(\ket{1}^{\otimes n}\). To train the NN decoder, we generated datasets of \(r=1,5,\ldots,37\) QEC rounds, sampling \(5\times 10^{5}\) shots for each round and initial state. When evaluating the decoder's performance, we simulate the code performance over \(r=10,30,\ldots,290\) QEC rounds and sample \(2\times 10^{4}\) shots instead. To benchmark the logical performance, we calculate the logical fidelity \(F_{L}\) at the end of each experiment. Averaging \(F_{L}\) over each initial state, we fit the exponential decay of \(F_{L}\) with the number of QEC rounds to extract the logical error rate per round \(\varepsilon_{L}\). Figure 3 shows that the NN decoder maintains a constant \(\varepsilon_{L}\) when evaluated on datasets going up to 300 QEC rounds, demonstrating the ability of the decoder to generalize to significantly longer sequences than those used for training. Moreover, the NN decoder achieves about 20% lower \(\varepsilon_{L}\) compared to the MWPM decoder. We then evaluate the trained NN decoder on simulated data using \(\eta\in\{0,0.5,1,2,10,100\}\), keeping all other parameters the same without training any new neural networks, with the resulting error rates shown in Figure 3**b**. At \(\eta=0\), corresponding to an error model leading to \(X\) and \(Z\) errors, the NN decoder displays a higher \(\varepsilon_{L}\) than the MWPM decoder. For \(\eta\geq 0.5\), the NN decoder instead demonstrates a lower logical error rate, with the relative reduction increasing with the bias. This demonstrates that the NN decoder can achieve a lower logical error rate by learning the correlations between the defects caused by \(Y\) errors, consistent with the results presented in [61]. The NN decoder can achieve an even lower logical error rate at a bias of \(\eta=100\) by being trained on a dataset generated using this bias (referred to as the adapted NN decoder in Figure 3). On the other hand, training a model for \(\eta=0\) does not lead to any improvement in the \(\varepsilon_{L}\) of the NN decoder, showing that the MWPM decoder is closer to optimal in this setting.

Figure 3: **a** Logical fidelity \(F_{L}\) as a function of the number of QEC rounds \(r\) for the MWPM (blue) and the NN decoders (red) using a uniform circuit-level depolarizing noise model. Each data point is averaged over \(4\times 10^{4}\) shots. Solid lines show the fits to the data used to extract the logical error rate per round \(\varepsilon_{L}\). **b** The logical error rate \(\varepsilon_{L}\) as a function of the bias \(\eta\) towards \(Y\) errors for the MWPM decoder (blue) and a NN decoder trained on simulated data using depolarizing noise (red), corresponding to \(\eta=1\). The performance of an adapted NN decoder at a bias of \(\eta=0\) or \(\eta=100\) is shown in dark red. Each point is extracted from a fit of the decay of the logical fidelity over 300 QEC rounds. The error bars are smaller than the marker sizes.

### Performance on experimental data

Next, we evaluate the performance of the NN decoder on experimental data available from the recent experiment executed by Google Quantum AI [25], where a 72-qubit quantum processor was used to implement a \(d=5\) surface code as well as the four \(d=3\) surface codes which use a subset of the qubits of the larger code.
The stabilizer measurement circuits used in that experiment are the same as those shown in Figure 8. For each distance-\(d\) surface code, the data qubits are prepared in several random bitstrings, followed by \(r=25\) rounds of stabilizer measurement, followed by a logical measurement, with experiments performed in both the \(X\)-basis and the \(Z\)-basis. The experiment demonstrated that the \(d=5\) surface code achieves a lower \(\varepsilon_{L}\) compared to the average of the four constituent \(d=3\) patches when using a tensor-network (TN) decoder, an approximation to a maximum-likelihood decoder. We find that training a NN decoder to achieve good logical performance requires a large number of shots (approximately \(10^{7}\) in total or more) obtained from experiments preparing different initial states and running a different number of rounds. As the amount of experimental data is too small to train the NN decoder (the total number of shots being \(6.5\times 10^{5}\)), we instead opt to simulate the experiments using the Pauli error model based on the measured error rates of each operation, available in [25]. Keeping the same number of rounds and prepared states, we generate a total of \(2\times 10^{7}\) shots for training the decoder for each \(d=3\) experiment and \(6\times 10^{7}\) to train the decoder for the \(d=5\) experiment, see Table 1. While we train the network on simulated data, we still evaluate the decoder performance on both simulated and the experimental data, with the results shown in Figure 4**a** and Figure 4**b**, respectively. Both the training and evaluation data consist of \(r=1,3,\dots,25\) rounds of QEC and consider the same initial states. When evaluating the NN decoder on simulated data, we observe that the \(d=5\) code achieves a lower \(\varepsilon_{L}\) compared to the average of the \(d=3\) codes, see Figure 4**a**.

Figure 4: Logical fidelity \(F_{L}\) as a function of the number of QEC rounds \(r\) for the NN decoder evaluated on simulated data (shown in **a**) and on experimental data (shown in **b**). The average performance of the \(d=3\) surface code (red triangle), which is the average of the performance of each of the four constituent codes (bright red triangles), is compared to the \(d=5\) code (orange hexagons). Each data point is averaged over \(5\times 10^{4}\) shots for both experiment and simulation. Solid lines show the fits to the data used to extract the logical error rate per round \(\varepsilon_{L}\). The error bars are smaller than the marker sizes.
Evaluating the decoder on the experimental data leads to an approximately \(15\%\) (\(40\%\)) higher \(\varepsilon_{L}\) for the \(d=3\) (\(d=5\)) code, demonstrating that the approximate error model used in simulation fails to fully capture the errors in the experiment. Furthermore, we observe that the \(d=5\) code has a higher \(\varepsilon_{L}\) instead, see Figure 4**b**, contrary to what was demonstrated in [25] using a tensor-network decoder. In order to put the performance of the NN decoder in perspective, in Figure 5, we compare the logical performance of the NN decoder to the performance of several other decoders that were also implemented in [25]. We perform this comparison both on simulated (see Figure 5**a**) and experimental (see Figure 5**b**) data. We find that the NN decoder consistently outperforms the standard MWPM decoder in either case. On the experimental dataset, the NN decoder performs equivalently to the TN decoder when decoding the \(d=3\) surface codes. However, when decoding the \(d=5\) surface code experiment, the NN decoder displays a higher \(\varepsilon_{L}\) than the TN decoder and the computationally efficient belief-matching (BM) decoder [52]. When evaluated on simulated data, the NN and BM decoders exhibit similar error rates, with the NN decoder again demonstrating better performance when decoding the \(d=3\) code but worse when dealing with the \(d=5\) code. The BM decoder we use for the simulated data is described in [53] and uses the belief propagation implemented in [92]. The higher error rate of the NN decoder for the \(d=5\) code in both simulation and experiment can be related to the difficulty of optimizing the performance of the substantially larger NN model used (see Table 1 for the model hyper-parameters). However, the discrepancy in the experiment can also be attributed to a mismatch between the simulated data used for training (based on an approximate error model) and the experimental data used for evaluation. Compared to the \(d=3\) surface code data, the accumulation of qubit leakage can cause the \(d=5\) performance to degrade faster over the QEC rounds [25]. We expect training on experimental data and better hyper-parameter optimization to enable a NN performance comparable to state-of-the-art decoders like BM and TN while offering additional flexibility with respect to the details of the noise model. Compared to the TN decoder, both NN and BM can achieve similar logical performance while remaining significantly faster, and if their implementation is optimized, they can potentially be used to decode experiments in real time.

Figure 5: The logical error rate per round \(\varepsilon_{L}\) for the \(d=3\) (red triangle) and \(d=5\) (orange hexagon) codes for several decoder implementations applied to either simulated data (shown in **a**) or experimental data (shown in **b**). These correspond (from left to right) to minimum-weight perfect matching (MWPM), a correlated modification of MWPM (Corr. MWPM) [91], our neural network (NN) decoder, belief matching (BM) [52], and a tensor network (TN) decoder, which approximates maximum-likelihood decoding. We did not run the Corr. MWPM or TN decoder on the simulated data, so fewer data points appear in **a**. All logical error rates on the experimental data, except for the NN decoder, are taken from [25]. The error bars are smaller than the marker sizes.

### Logical error rate suppression

An exponential suppression of the logical error rate, assuming that the physical error rates are below 'threshold', is vital for realizing a fault-tolerant quantum computer. We explore the error suppression achieved when using the NN decoder. We characterize the logical performance
We characterize the logical performance of \(d=3,5,7\) surface codes simulated using a uniform depolarizing circuit-level noise model with an error probability of \(p=0.1\%\), close to the state-of-the-art physical error rates achieved in the experiment. To train the NN decoder, we use data generated using this error probability. We find that additionally training on data generated with a higher probability of \(p=0.2\%\) leads to a significantly lower logical error rate for the \(d=7\) code. Furthermore, we evaluate the performance of the NN decoder on data simulated using \(p=0.05\%\), which is an example of the physical error rate needed to achieve practical sub-threshold scaling of the error rate. For each distance \(d\) and error probability \(p\), we perform simulations of memory experiments in the \(Z\)-basis with varying numbers of QEC rounds, going up to \(600\) rounds for the \(d=7\) code with an error rate of \(p=0.05\%\), to extract the logical error per round \(\varepsilon_{L}\). The logical error rates obtained when using an MWPM decoder are shown in Figure 6**a**, while those achieved by the NN decoder are shown in Figure 6**b**. If the physical error rate is below threshold, \(\varepsilon_{L}\) is expected to decay exponentially with the code distance \(d\), following \(\varepsilon_{L}\left(d\right)=C/\Lambda^{(d+1)/2}\), where \(\Lambda\) is the suppression factor and \(C\) is a fitting constant. The data shows an apparent exponential suppression of the error rates by either decoder for the considered error rates, which we fit to extract the suppression factor \(\Lambda\), shown in Figure 6. In either case, the NN decoder achieves better logical performance compared to the MWPM decoder. While for \(p=0.1\%\) the NN decoder achieves an approximately \(13\%\) higher \(\Lambda\), for \(p=0.05\%\) the more accurate NN decoder leads to a roughly twice as high \(\Lambda\). The higher suppression factors \(\Lambda\) obtained from using better decoders significantly reduce the code distance required to achieve algorithmically-relevant logical error rates. For example, for an error rate of \(p=0.05\%\), realizing \(\varepsilon_{L}\approx 10^{-10}\) would require a \(d=19\) surface code when using the MWPM decoder and \(d=15\) when using the NN decoder, corresponding to roughly \(40\%\) fewer physical qubits. However, whether the NN can continue to exhibit similar performance when decoding higher-distance codes remains to be demonstrated. ### Decoding with soft information Measurements of physical qubits generally produce a continuous signal that is subsequently converted into declared binary outcomes by classical processing and thresholding. For example, transmon qubits are dispersively coupled to a dedicated readout resonator, which itself is connected to a readout feedline. Readout is performed by applying a microwave pulse to the feedline, populating the readout resonator. Due to a state-dependent shift of the resonator frequency, the outgoing signal is phase-shifted depending on whether the qubit is in the state \(\left|0\right\rangle\) or \(\left|1\right\rangle\). This leads to a change in the real and imaginary components of the outgoing signal, which is experimentally measured. This two-dimensional output can be transformed into a single continuous real variable and converted to a binary outcome by applying a threshold calibrated using a separate experiment [31, 81, 82].
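To build intuition for this thresholding step, the following sketch simulates the Gaussian soft-outcome model that is formalized in the next paragraphs; the means \(\mu_{0}=-1\), \(\mu_{1}=1\) and the relation \(p_{m}=\frac{1}{2}\mathrm{erfc}(\mathrm{SNR}/\sqrt{2})\) are taken from the text below, while the specific threshold and noise width are illustrative choices.

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(1)
mu0, mu1, sigma, t = -1.0, 1.0, 0.5, 0.0   # threshold t midway between the means

# Simulate soft outcomes for qubits projected into |1> and threshold them
m_soft = rng.normal(mu1, sigma, size=1_000_000)
m_hard = (m_soft > t).astype(int)

p_m_empirical = np.mean(m_hard == 0)        # declared 0 although the state was |1>
snr = abs(mu0 - mu1) / (2 * sigma)
p_m_analytic = 0.5 * erfc(snr / np.sqrt(2))
print(p_m_empirical, p_m_analytic)          # both ~0.0228 for SNR = 2
```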
While binary variables are convenient to work with and store, continuous measurement outcomes hold much more information about the state of the qubit, referred to as soft information. It has been demonstrated that an MWPM-based decoder which considers the soft information of the individual measurements when decoding offers higher thresholds and lower logical error rates than a hard decoder, which only considers the binary outcomes [83]. To demonstrate the flexibility of machine-learning decoders, we consider providing the soft information available from readout when training and evaluating the NN decoder. In our simulations, measurements project the qubit into either \(\left|0\right\rangle\) or \(\left|1\right\rangle\). A measurement outcome \(m_{r,q}=i\) of qubit \(q\) at round \(r\) corresponds to the ancilla qubit being in \(\left|i\right\rangle\) directly after the measurement. Given \(m_{r,q}=i\), we model the soft outcome \(\tilde{m}_{r,q}\in\mathbb{R}\) to follow a Gaussian distribution \(\mathcal{N}_{i}\) with mean \(\mu_{i}\) and standard deviation \(\sigma\).

Figure 6: The logical error rate per round \(\varepsilon_{L}\) for surface codes of distance \(d=3,5,7\) for an MWPM decoder, shown in **a**, and our NN decoder, shown in **b**. This is evaluated on datasets using a uniform depolarizing circuit-level noise model with error probabilities of \(p=0.1\%\) (blue for the MWPM, red for the NN decoder) and \(p=0.05\%\) (teal for the MWPM, orange for the NN decoder). Solid lines show the fits to the data used to extract the logical error suppression factor \(\Lambda\). Each data point is extracted from a fit to \(F_{L}\) as a function of QEC rounds. The logical fidelities are extracted over \(10^{5}\) shots. The error bars are smaller than the marker sizes.

The soft outcome \(\tilde{m}_{r,q}\) can then be converted to a binary outcome \(\hat{m}_{r,q}\) by introducing a threshold \(t\), such that \[\hat{m}_{r,q}=\begin{cases}0&\text{if }\tilde{m}_{r,q}\leq t,\\ 1&\text{otherwise.}\end{cases}\] For the symmetric Gaussian distributions that we consider, this process leads to an assignment error probability \(P(\hat{m}_{r,q}=0\mid m_{r,q}=1)=P(\hat{m}_{r,q}=1\mid m_{r,q}=0)=p_{m}\). This assignment error is _added_ to the errors considered in our circuit-level noise models, specifically the \(X\) error before each measurement that happens with a probability \(p\). The assignment error probability can be related to the signal-to-noise ratio \(\text{SNR}=\left|\mu_{0}-\mu_{1}\right|/2\sigma\) as \(p_{m}=\frac{1}{2}\text{erfc}\left(\frac{\text{SNR}}{\sqrt{2}}\right)\). We fix \(\mu_{0}=-1\) and \(\mu_{1}=1\) such that a given probability \(p_{m}\) fixes the standard deviation \(\sigma\) of the two distributions. The most straightforward approach to incorporating the soft information into the NN decoder is to directly provide the soft measurement outcomes \(\tilde{m}_{r,q}\) as input during training and evaluation. However, we find that doing this leads to an overall poor logical performance. Instead, we estimate the probability of a defect \(P(d_{r,a}=1\mid\tilde{m}_{r,a},\tilde{m}_{r-1,a})\), given the soft measurement outcomes of an ancilla qubit \(a\) in consecutive QEC rounds.
Given a soft outcome \(\tilde{m}_{r,q}\), the probability that the measured qubit was in the state \(\left|i\right\rangle\) can be expressed as \[P(i\mid\tilde{m}_{r,q})=\frac{P(\tilde{m}_{r,q}\mid i)P(i)}{\sum_{j\in\{0,1\}}P(\tilde{m}_{r,q}\mid j)P(j)}.\] The soft outcomes follow a Gaussian distribution, that is, \(P(\tilde{m}_{r,q}\mid i)=\mathcal{N}_{i}(\tilde{m}_{r,q})\). Finally, we make the simplifying assumption that the prior state probabilities are \(P(0)=P(1)=\frac{1}{2}\), such that \[P(i\mid\tilde{m}_{r,q})=\frac{\mathcal{N}_{i}(\tilde{m}_{r,q})}{\sum_{j\in\{0,1\}}\mathcal{N}_{j}(\tilde{m}_{r,q})}.\] The probability of observing a defect can then be expressed as \[P(d_{r,a}=1\mid\tilde{m}_{r,a},\tilde{m}_{r-1,a})=1-\sum_{i\in\{0,1\}}P(i\mid\tilde{m}_{r,a})P(i\mid\tilde{m}_{r-1,a}).\] The expression for the defect probability inferred from using the soft (final) data qubit measurement outcomes can be derived similarly. To explore the performance of the soft NN decoder, we simulate the \(d=3\) surface-code memory experiment using a circuit-level noise model with an error rate per operation of \(p=0.1\%\). We consider two separate assignment error probabilities \(p_{m}^{a}\) and \(p_{m}^{d}\) for ancilla qubit and data qubit measurements. We motivate this choice by the fact that data qubits remain idling while the ancilla qubits are being measured. A shorter measurement time can reduce the decoherence experienced by the data qubits but will typically lead to a higher \(p_{m}^{a}\). The data qubit measurements at the end of the experiment, on the other hand, can be optimized to minimize \(p_{m}^{d}\). Therefore, we focus on how a soft decoder can help with decoding when \(p_{m}^{a}\) is higher, similar to the discussion in [83]. We train the NN decoder using datasets of \(r=1,5,\ldots,37\) QEC rounds, sampling \(5\times 10^{5}\) shots for each round and initial logical state. When evaluating the performance, we simulate \(r=10,30,\ldots,150\) QEC rounds, sampling \(5\times 10^{4}\) shots instead. The results for \(p_{m}^{a}=1\%\) are shown in Figure 7**a**. The hard NN decoder achieves an approximately \(20\%\) lower logical error rate compared to an MWPM decoder, consistent with the results shown in Figure 3. In comparison, the soft NN decoder leads to an approximately \(30\%\) lower logical error rate instead, demonstrating the ability of the decoder to adapt to the provided soft information.

Figure 7: **a** The logical fidelity \(F_{L}\) as a function of the number of QEC rounds \(r\) for an MWPM decoder (blue circles), a soft NN decoder (gray diamonds) that uses the probability of observing defects and a hard NN decoder (red pentagons) that uses the defects obtained from the hard measurement outcomes. This performance is estimated on simulated data using a uniform depolarizing circuit-level noise model with an error probability \(p=0.1\%\). The soft outcome distributions are such that ancilla and data qubits have a probability of assignment errors of \(p_{m}^{a}=1\%\) and \(p_{m}^{d}=0.1\%\), respectively. Solid lines show the fits to the data used to extract the logical error rate per round \(\varepsilon_{L}\). Each data point is averaged over \(10^{5}\) shots. **b** The extracted logical error rate \(\varepsilon_{L}\) for each of the three decoders as a function of the ancilla qubit assignment error probability \(p_{m}^{a}\), keeping \(p_{m}^{d}=0.1\%\) and \(p=0.1\%\). The error bars are smaller than the marker sizes.
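The defect-probability computation derived above is straightforward to implement; the following is a minimal sketch, assuming the Gaussian soft-outcome model with \(\mu_{0}=-1\), \(\mu_{1}=1\) from the text (function names are ours).

```python
import numpy as np
from scipy.stats import norm

MU = (-1.0, 1.0)  # mu_0, mu_1 as fixed in the text

def state_posterior(m_soft, sigma):
    """P(i | soft outcome) for i = 0, 1, assuming equal priors."""
    n0 = norm.pdf(m_soft, loc=MU[0], scale=sigma)
    n1 = norm.pdf(m_soft, loc=MU[1], scale=sigma)
    return n0 / (n0 + n1), n1 / (n0 + n1)

def defect_probability(m_curr, m_prev, sigma):
    """P(d = 1 | soft outcomes of the same ancilla in consecutive rounds)."""
    p0c, p1c = state_posterior(m_curr, sigma)
    p0p, p1p = state_posterior(m_prev, sigma)
    return 1.0 - (p0c * p0p + p1c * p1p)

# Example: two consecutive soft outcomes of one ancilla qubit
print(defect_probability(np.array([0.9]), np.array([-1.1]), sigma=0.5))
```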
In Figure 7**b** the logical error rate \(\varepsilon_{L}\) of the three decoders is shown for \(p_{m}^{a}\in\{0,0.1\%,1\%\}\), where both NN decoders are trained at the corresponding \(p_{m}^{a}\). For low \(p_{m}^{a}\), the performance of the soft NN decoder is essentially equivalent to the hard NN decoder, with a moderate reduction in \(\varepsilon_{L}\) achieved for \(p_{m}^{a}\geq 1\%\). It is possible that the probability of defects is not the optimal way to provide the soft information to the decoder. One downside of this representation is that for a high assignment error probability \(p_{m}^{a}\geq 20\%\), the probability of observing a defect is close to \(50\%\), which also impacts the training and leads the soft NN decoder to exhibit a higher logical error rate compared to the hard one (not shown in Figure 7). Optimizing the performance of the soft NN decoder and comparing it to alternative approaches, namely the soft MWPM decoder proposed in [83], remains an open question. ## IV Discussion We now discuss in more detail the performance of the NN decoder on the experimental data. Unfortunately, we only use simulated data to train the NN decoder throughout this work. These simulations use approximate Pauli-noise models that account for the most significant error mechanisms in the experiment, such as decoherence and readout errors. However, they do not include several important error sources present in the actual experiments, such as leakage, crosstalk, and stray interactions. The exclusion of these error mechanisms leads to the Pauli-noise models underpredicting the logical error rate compared to the rates observed in the experiment, as observed in Figure 4. Furthermore, it was shown that the \(d=5\) code is more sensitive to errors like leakage and crosstalk, which can lead to a more significant deviation relative to simulations of the \(d=3\) codes [25]. Despite using these approximate models for training, when evaluating the NN decoder on experimental data, we observe that it outperforms MWPM and can achieve logical error rates comparable to those obtained using maximum-likelihood decoding, which is approximated by the TN decoder. The TN decoder requires information about the error probabilities, what defects they lead to, and their corresponding corrections, which can be encoded into a hypergraph, where the nodes correspond to defects and the hyperedges represent errors. Importantly, this hypergraph also does not explicitly include hyperedges corresponding to non-conventional errors, such as leakage or crosstalk. We expect that training on experimental data and optimizing the hyper-parameters of the network will enable it to match the performance of the TN decoder closely and potentially exceed it by learning about errors not included in the hypergraph. Despite the large volume of training data required to achieve good performance, we do not expect that generating sufficient experimental data for training will be an issue. Assuming that the QEC round duration is \(1\)\(\mu\)s and that it takes \(200\) ns to reset all qubits between subsequent runs, we estimate that it would take approximately three minutes to generate the datasets with \(10^{7}\) shots running \(r=1,5,\ldots,37\) rounds of QEC that were used for training the \(d=3,5,7\) surface codes, see Table 1. The soft NN decoder used in this work achieves only a moderate performance increase. A direct comparison of this decoder with the soft MWPM decoder [83] will be useful to put this performance into perspective.
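As a quick sanity check of the data-generation estimate quoted above, the following back-of-the-envelope computation reproduces the roughly three-minute figure; splitting the \(10^{7}\) shots evenly over the ten round settings is our assumption.

```python
round_time = 1e-6        # s, assumed duration of one QEC round
reset_time = 200e-9      # s, assumed reset between subsequent runs
round_settings = list(range(1, 38, 4))           # r = 1, 5, ..., 37
shots_per_setting = 10**7 // len(round_settings)

total_s = sum(shots_per_setting * (r * round_time + reset_time)
              for r in round_settings)
print(f"{total_s / 60:.1f} minutes")  # ~3.2 minutes
```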
It is possible that using the defect probabilities as the decoder input is not an optimal choice. An alternative approach to incorporating the soft information into the decoder is to estimate the likelihood of an assignment error \(L_{r,a}=\mathcal{N}_{\neg i}(\tilde{m}_{r,a})/\mathcal{N}_{i}(\tilde{m}_{r,a})\) given a soft outcome \(\tilde{m}_{r,a}\) that leads to a hard outcome of \(\hat{m}_{r,a}=i\), which is used by the soft MWPM decoder proposed in [83]. The likelihoods \(L_{r,a}\) can then be provided as input to the NN decoder together with the binary defects \(d_{r,a}\) that were measured. In addition to the representation of the input data, it is an open question whether using a soft NN decoder will be useful in practice, where assignment error rates are typically low. Specifically, it would be interesting to see whether a soft NN decoder enables using a shorter measurement time that might lead to a higher assignment error rate but maximizes the logical performance overall, as discussed in [83]. The symmetric Gaussian distributions of the continuous measurement outcomes we consider here are only very simple approximations of the distributions seen in experiments, and our modeling could be adapted to capture more realistic distributions. In particular, the relaxation that the qubit experiences during the readout leads to an asymmetry between the distributions and a generally higher probability of an assignment error when the qubit was prepared in \(|1\rangle\). Furthermore, the continuous outcomes observed in the experiment can also contain information about leakage [93; 27; 94] or correlations with other measurements. Therefore, it will be essential to investigate and optimize the performance of the soft decoders using experimental data. Finally, we outline some possible directions for future research necessary to use these decoders for decoding large-distance experiments. Decoders based on feedforward and convolutional architectures have been shown to achieve low-latency decoding, making them possible candidates for being used in real time [75; 76; 77; 78]. On the other hand, recurrent networks generally have a larger number of parameters and carry out more complex operations when processing the data. However, recurrent NN decoders have been shown to achieve higher accuracy and be more easily trainable than other architectures, especially when considering realistic noise models [68]. Therefore, whether hardware implementations of recurrent NN decoders can be used for real-time decoding is an open question. In addition to the latency, the scalability of NN decoders is an open question. Decoding higher-distance codes will require larger neural networks and larger training datasets, which will most likely be more challenging to train, given that approaches based on machine learning generally struggle when the dimension of the input becomes very large. Practically, one might be interested in whether the NN decoder can be trained and used to decode some finite code distance, which is expected to lead to algorithmically-relevant logical error rates given the processor's performance. Alternatively, there exist approaches that enable scalable NN decoders. These are typically based on convolutional neural networks that learn to infer and correct the physical errors that have occurred, while a secondary global decoder handles any possibly remaining errors [75, 77], but a purely convolutional NN method has been explored as well [65].
The recurrent NN decoder used in this work is not scalable, and adapting it to work with larger code distances and using it to decode through logical operations is another open research avenue. Lastly, while preparing this manuscript, we became aware of a similar work [95] that explores the performance of a graph neural network decoder on data from the repetition code experiment that was also done in [25]. ###### Acknowledgements. We are grateful to Earl Campbell for insightful discussions and for comments on the manuscript. We also thank Laura Caune for implementing the belief-matching decoder that we have used in this work. B. M. V. and B. M. T. are supported by QuTech NWO funding 2020-2024 - Part I "Fundamental Research" with project number 601.QT.001-1. ## V Data and software availability The data and software that support the plots presented in this work are available at [96]. The raw simulated data and the scripts used for training and decoding this data are available upon reasonable request. ## VI Appendix ### Quantum memory experiments To characterize the logical performance of a surface code, we look at its ability to maintain an initial logical state as a function of the number of QEC rounds, commonly referred to as a quantum memory experiment. The circuits used to perform these experiments are illustrated in Figure 8 and follow the ones used in the recent \(d=5\) surface code experiment done by Google Quantum AI [25]. Removing some of the Hadamard gates when compiling the stabilizer measurement circuits leads to each ancilla qubit measuring the \(ZXXZ\) operator instead of the standard \(XXXX\) and \(ZZZZ\) stabilizers of the surface code. Implementing this \(ZXXZ\) variant of the surface code symmetrizes the logical error rates between experiments done in the logical \(X\)-basis or \(Z\)-basis [25]. Despite this modification, we use notations associated with the traditional stabilizers measured by the surface code. Each experiment begins by preparing a given logical state, performed by the circuits in Figure 8**a-d**. The data qubits are first initialized in the ground state and then prepared in either \(\ket{0}\) or \(\ket{1}\) by a layer of conditional \(X\) gates. A subset of the data qubits is then rotated, which transforms the initial state into an eigenstate of the \(X\)- or \(Z\)-type stabilizers. The parity of the initial bitstring state determines whether \(\ket{0}_{L}\) or \(\ket{1}_{L}\) (\(\ket{+}_{L}\) or \(\ket{-}_{L}\)) is prepared if the experiment is done in the \(Z\)-basis (\(X\)-basis). In simulation, we prepare either \(\ket{0}^{\otimes n}\) or \(\ket{1}^{\otimes n}\) when using uniform circuit-level noise models. In the experiment, several random bitstring states are used in order to symmetrize the impact of amplitude damping [25]. The prepared logical state is then maintained over a total of \(r\in\{1,2,\ldots,N-1\}\) QEC rounds, with the circuit given by Figure 8**e-o**. The first QEC round then projects this initial state into a simultaneous eigenstate of both the \(X\)- and \(Z\)-type stabilizers. Each cycle involves a series of four interactions between each ancilla qubit and its neighboring data qubits, which map the \(X\) or \(Z\) parity onto the state of the ancilla qubit. The order in which these two-qubit operations are executed is carefully chosen to minimize the impact of errors occurring during the execution of the circuit [84]. At the end of each QEC round, all of the ancilla qubits are measured and reset.
The stabilizer measurement circuits also contain several \(X\) gates on either the data or ancilla qubits, which dynamically decouple the qubits in the experiment [25]. Naturally, these gates do not improve the logical performance for the simulations using approximate Pauli-error models that we consider here. In the final QEC round, the data qubits rotated during the state preparation are rotated back and measured in the \(Z\)-basis together with the ancilla qubits, illustrated in Figure 8**p-r**. The data qubit measurement outcomes are then used to calculate the value of the \(X_{L}\) or \(Z_{L}\) logical observable as well as to infer a final set of \(X\)- or \(Z\)-type stabilizer measurement outcomes. ### Decoder training and evaluation Here we provide additional details about how we train the NN decoder and the hyper-parameters we use. We use the Adam optimizer, typically with a learning rate of \(10^{-3}\) or \(5\times 10^{-4}\), for training. In addition, we apply dropout after the hidden layer of the feed-forward network of each head and, in some cases, after the second LSTM layer with a dropout rate of either 20% or 5% to avoid over-fitting and assist with the generalization of the network. We use a batch size of 256 or 64, which we found to lead to a smoother minimization of the loss. After each training epoch, we evaluate the loss of the network on a separate dataset that considers the same number of QEC rounds and prepared states as the training dataset but samples fewer shots for each experiment. After each epoch, we save the network's weights if a lower loss has been achieved. Furthermore, we use early stopping to end the training if the loss has not decreased over the last 20 epochs to reduce the time it takes to train each model. We have observed that not using early stopping and leaving the training to continue does not typically lead the network to eventually reach a lower loss. For some datasets, we lower the learning rate after the initial training has stopped early and train the network once more to achieve better performance. The hyper-parameters we have used for training each network and the parameters of the training datasets used are presented in Table 1. The NN architecture we employ in this work uses two stacked LSTM layers to process the recurrent input [63]. We observe poor logical performance for a \(d=3\) surface code when using only a single LSTM layer. On the other hand, we see no significant improvement in the logical error rate when using four layers instead, motivating the choice to use only two. This network architecture also performs well when decoding \(d=5\) and \(d=7\) surface code experiments. However, we expect that a deeper recurrent network might improve the logical error rates when decoding larger-distance codes or when training on and decoding experimental data. We have also practically observed that training the NN decoder for larger distances is more challenging, especially if the physical error rates are small. Training the neural network on a dataset with a higher physical error rate (in addition to data using the same error rate as the evaluation dataset) can also improve the performance of the decoder, as we also discussed in Section III.3. The training of our neural networks was performed on the DelftBlue supercomputer [97] and was carried out on an NVIDIA Tesla V100S GPU.
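A minimal PyTorch sketch of the recurrent architecture and training setup described above is given below. The real model uses a dual-head design and trains on defect sequences of varying length; the single sigmoid head, layer sizes, and stand-in input data here are simplifying assumptions for illustration only.

```python
import torch
import torch.nn as nn

class RecurrentDecoderSketch(nn.Module):
    # Two stacked LSTM layers followed by a feed-forward head with dropout,
    # mirroring the setup described in the text (sizes are assumptions).
    def __init__(self, n_defects=8, hidden=64, p_drop=0.20):
        super().__init__()
        self.lstm = nn.LSTM(n_defects, hidden, num_layers=2, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, 1),
        )

    def forward(self, defects):           # defects: (batch, rounds, n_defects)
        out, _ = self.lstm(defects)
        return self.head(out[:, -1, :])   # logit for flipping the logical outcome

model = RecurrentDecoderSketch()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

x = torch.randint(0, 2, (256, 25, 8)).float()  # stand-in defect data, batch of 256
y = torch.randint(0, 2, (256, 1)).float()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```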
Once trained, the decoder takes approximately 0.7 seconds per QEC round for a \(d=3\) surface code (corresponding to an internal state size of \(N_{L}=64\)) using a batch size of 50000 shots on an Intel(R) Core(TM) i7-8850H CPU @ 2.60GHz. For a \(d=5\) surface code (\(N_{L}=96\)), it takes about 0.8 seconds per round, while for a \(d=7\) surface code (\(N_{L}=128\)), it takes about 1.1 seconds per round, using the same batch size of 50000 shots. We note that using smaller batch sizes leads to a higher overall runtime due to the reduced parallelism when the network processes the inputs. Therefore, larger batch sizes are preferable as long as they fit into memory. Each runtime was extracted by decoding simulated datasets running \(r=10,30,\ldots,290\) rounds of QEC and averaging the runtime per QEC round over all the datasets.
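For readers who want to reproduce datasets of the kind used in this work, the following sketch uses the open-source stim package to generate a surface-code memory experiment with uniform circuit-level depolarizing noise and to sample detection-event data. Note that stim's built-in generator implements the standard surface code rather than the \(ZXXZ\) variant and the exact noise placement of the circuits described in the appendix, so this is only an approximation for illustration.

```python
import stim

# d = 3 memory experiment in the Z-basis with p = 0.1% circuit-level noise
circuit = stim.Circuit.generated(
    "surface_code:rotated_memory_z",
    distance=3,
    rounds=25,
    after_clifford_depolarization=0.001,
    after_reset_flip_probability=0.001,
    before_measure_flip_probability=0.001,
    before_round_data_depolarization=0.001,
)

sampler = circuit.compile_detector_sampler()
detectors, observables = sampler.sample(shots=100_000, separate_observables=True)
print(detectors.shape, observables.shape)  # detection events and logical flips
```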
2302.10344
Model-based feature selection for neural networks: A mixed-integer programming approach
In this work, we develop a novel input feature selection framework for ReLU-based deep neural networks (DNNs), which builds upon a mixed-integer optimization approach. While the method is generally applicable to various classification tasks, we focus on finding input features for image classification for clarity of presentation. The idea is to use a trained DNN, or an ensemble of trained DNNs, to identify the salient input features. The input feature selection is formulated as a sequence of mixed-integer linear programming (MILP) problems that find sets of sparse inputs that maximize the classification confidence of each category. These ''inverse'' problems are regularized by the number of inputs selected for each category and by distribution constraints. Numerical results on the well-known MNIST and FashionMNIST datasets show that the proposed input feature selection allows us to drastically reduce the size of the input to $\sim$15\% while maintaining a good classification accuracy. This allows us to design DNNs with significantly fewer connections, reducing computational effort and producing DNNs that are more robust towards adversarial attacks.
Shudian Zhao, Calvin Tsay, Jan Kronqvist
2023-02-20T22:19:50Z
http://arxiv.org/abs/2302.10344v1
# Model-based feature selection for neural networks: A mixed-integer programming approach ###### Abstract In this work, we develop a novel input feature selection framework for ReLU-based deep neural networks (DNNs), which builds upon a mixed-integer optimization approach. While the method is generally applicable to various classification tasks, we focus on finding input features for image classification for clarity of presentation. The idea is to use a trained DNN, or an ensemble of trained DNNs, to identify the salient input features. The input feature selection is formulated as a sequence of mixed-integer linear programming (MILP) problems that find sets of sparse inputs that maximize the classification confidence of each category. These "inverse" problems are regularized by the number of inputs selected for each category and by distribution constraints. Numerical results on the well-known MNIST and FashionMNIST datasets show that the proposed input feature selection allows us to drastically reduce the size of the input to \(\sim\)15% while maintaining a good classification accuracy. This allows us to design DNNs with significantly fewer connections, reducing computational effort and producing DNNs that are more robust towards adversarial attacks. _Keywords--_ Mixed-integer programming, Deep neural networks, Feature selection, Sparse DNNs, Model reduction. ## 1 Introduction Over the years, there has been an active interest in algorithms for training sparse deep neural networks (DNNs) or sparsifying trained DNNs. By sparsifying a DNN we mean removing some connections (parameters) in the network, which can be done by setting the corresponding weights to zero. Examples of algorithms for sparsifying or training sparse networks include dropout methods [8; 14; 15; 31], optimal/combinatorial brain surgeon [13; 36], optimal brain damage [19], and regularization-based methods [21; 23; 33]. Carefully sparsifying the network, _i.e.,_ reducing the number of parameters wisely, has been shown to reduce over-fitting and improve overall generalizability [14, 18]. This paper focuses on feature selection for DNNs, which can also be interpreted as "sparsifying" the first/input layer, and we show that we can significantly reduce the number of parameters, _i.e._, non-zero weights, while keeping a good accuracy. Throughout the paper we focus on image classification, but the framework is general. This work focuses on finding the salient input features for classification using a DNN. We hypothesize that the number of inputs to DNNs for classification can often be greatly reduced by a "smart" choice of input features while keeping a good accuracy (_i.e.,_ feature selection). We build the hypothesis on the assumption that not all inputs, or pixels, will be equally important. Reducing the number of inputs/parameters has the potential to: i) reduce over-fitting, ii) give more robust DNNs that are less sensitive to adversarial attacks (fewer degrees of freedom for the attacker), and iii) reduce computational complexity both in training and evaluating the resulting classifier (fewer weights to determine and fewer computational operations to evaluate the outputs). The first two are classical focus areas within artificial intelligence (AI), and the third is becoming more important with an increasing interest in so-called green AI [26].
Most strategies for feature selection can be grouped as either _filter_ methods, which examine the data, _e.g.,_ for correlation, or _wrapper_ methods, which amount to a guided search over candidate models [4, 20]. Feature selection can also be incorporated directly into model training using _embedded_ methods, _e.g._, regularization. This paper and the numerical results are intended as a proof of concept to demonstrate mixed-integer linear programming (MILP) as an alternative technology for extracting the importance of input features from DNNs. Input feature selection is an active research area, _e.g.,_ see the review papers [9, 37], and a detailed comparison to state-of-the-art methods is not within the scope of this paper. Our proposed method leverages trained models that achieve desirable performance, and attempts to select a feature set that replicates the performance using mixed-integer programming. We build on the idea that, given a relatively well-trained DNN, we can analyze the DNN in an inverse fashion to derive information about the inputs. Specifically, to determine the most important inputs, or pixels in the case of image classification, for a given label, we solve an optimization problem that maximizes the classification confidence of the label with a cardinality constraint on the number of non-zero inputs to the DNN. We consider this as an "inverse problem," as the goal is to determine the DNN inputs from the output. We additionally propose some input distribution constraints to make the input feature selection less sensitive to errors in the input-output mapping of the DNN. We only consider DNNs with the rectified linear unit (ReLU) activation function, as it enables the input feature selection problem to be formulated as a MILP problem [7, 22]. However, the framework can be easily generalized to CNN architectures and other MIP-representable activation functions, _e.g._, max pooling and leaky ReLU. Optimizing over trained ReLU-based DNNs has been an active research topic in recent years, and has a wide variety of applications including verification [2, 22, 28], lossless compression [27], and surrogate model optimization [11, 35]. There even exists software, such as OMLT [3], for directly incorporating ReLU DNNs into general optimization models. Optimizing over a trained ReLU-based DNN through the MILP encoding is not a trivial task, but significant progress has been made in terms of strong formulations [1, 16, 29], solution methods [5, 25], and techniques for deriving strong valid inequalities [1, 2]. In combination with the remarkable performance of state-of-the-art MILP solvers, optimization over DNNs appears computationally tractable (at least for moderate-size DNNs). This work builds upon recent optimization advancements, as reliably optimizing over ReLU-based DNNs is a key component in the proposed method. The paper is structured as follows. Section 2 first describes the MILP problem to determine which inputs maximize the classification confidence. Some enhancements for the input selection are presented, and the complete input selection algorithm is presented in Section 2.4. Numerical results are presented in Section 3, where we show that we can obtain a good accuracy when downsizing the input to 15% by the proposed algorithm, and that the resulting DNNs are more robust towards adversarial attacks in the \(\ell_{\infty}\) sense. Section 4 provides some conclusions.
## 2 Input feature selection algorithm Our feature selection strategy is based on the idea of determining a small optimal subset of inputs, or pixels for the case of image classification, that are allowed to take non-zero values to maximize the classification confidence for each label using a pre-trained DNN. By combining the optimal subsets for each label, we can determine a set of salient input features. These input features can be considered as the most important for the given DNN, but we note that the DNN might not offer a perfect input-output mapping. To mitigate the impact of model errors, we propose a technique of using input distribution constraints to ensure that the selected input features are to some extent distributed over the input space. This framework could easily be extended to use optimization over an ensemble DNN model [32] for input selection, where the inputs would be selected such that the ensemble classification confidence is maximized. While using DNN ensembles can further mitigate the effect of errors in individual DNNs, our initial tests did not indicate clear advantages of using DNN ensembles for this purpose. Here we focus on fully connected DNNs that classify grayscale images into 10 categories. While this setting is limited, the proposed method is applicable to classification of RGB images, and other classification problems in general. The input features are scaled between 0 and 1, with 255 (white) corresponding to 1 and 0 remaining black. We start by briefly reviewing the MILP encoding of DNNs in the next subsection, and continue with more details on the input feature selection in the following subsections. ### Encoding DNNs as MILPs In a fully connected ReLU-based neural network, the \(l\)-th layer with input \(x^{l}\) and output \(x^{l+1}\) is described as \[x^{l+1}=\max\{0,W^{l}x^{l}+b^{l}\},\] where \(W^{l}\in\mathbb{R}^{n_{l+1}\times n_{l}}\) is the weight matrix and \(b^{l}\in\mathbb{R}^{n_{l+1}}\) is the bias vector. The input-output mapping of the ReLU activation function is given by a piece-wise linear function, and is mixed-integer representable [30]. There are different formulations for encoding the ReLU activation function using MILP, where the big-M formulation [7, 22] was the first presented MILP encoding and remains a common approach. For the \(i\)-th ReLU node at a fully connected layer with input \(x^{l}\), the big-M formulation for the input-output relation is given by \[\begin{gathered}(w_{i}^{l})^{\top}x^{l}+b_{i}^{l}\leq x_{i}^{l+1},\\ (w_{i}^{l})^{\top}x^{l}+b_{i}^{l}-(1-\sigma)LB_{i}^{l+1}\geq x_{i}^{l +1},\\ x_{i}^{l+1}\leq\sigma UB_{i}^{l+1},\\ \sigma\in\{0,1\},\ x_{i}^{l+1}\geq 0,\end{gathered} \tag{1}\] where \(w_{i}^{l}\) is the \(i\)-th row vector of \(W^{l}\), \(b_{i}^{l}\) is the \(i\)-th entry of \(b^{l}\), and \(LB_{i}^{l+1}\) and \(UB_{i}^{l+1}\) are lower and upper bounds on the pre-activation function over \(x_{i}^{l+1}\), such that \(LB_{i}^{l+1}\leq(w_{i}^{l})^{\top}x^{l}+b_{i}^{l}\leq UB_{i}^{l+1}\). The big-M formulation is elegant in its simplicity, but it is known to have a weak continuous relaxation, which may require the exploration of a huge number of branch-and-bound nodes in order to solve the problem [1]. Anderson et al. [1] presented a so-called extended convex hull formulation, which gives the strongest valid convex relaxation of each individual node, and a non-extended convex hull formulation.
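As a concrete illustration of formulation (1), the following sketch encodes a single ReLU node with gurobipy; the helper name and calling convention are our own, and the bounds \(LB\), \(UB\) are assumed to be precomputed (with \(UB\geq 0\)).

```python
import gurobipy as gp
from gurobipy import GRB

def add_relu_bigM(m, x_in, w, b, LB, UB):
    """Encode x_out = max(0, w @ x_in + b) for one node via the big-M model (1)."""
    pre = gp.quicksum(w[k] * x_in[k] for k in range(len(w))) + b
    x_out = m.addVar(lb=0.0, ub=max(UB, 0.0))
    sigma = m.addVar(vtype=GRB.BINARY)
    m.addConstr(pre <= x_out)                      # first inequality in (1)
    m.addConstr(pre - (1 - sigma) * LB >= x_out)   # second inequality in (1)
    m.addConstr(x_out <= sigma * UB)               # third inequality in (1)
    return x_out
```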
Even though the convex hull is the strongest formulation for each individual node, it does not in general give the convex hull of the full input-output mapping of the DNN. Furthermore, the convex hull formulation results in a large problem formulation that can be computationally difficult to work with. The class of partition-based, or \(P\)-split, formulations was proposed as an alternative formulation with a stronger continuous relaxation than big-M and computationally cheaper than the convex hull [16, 29]. Computational results in [17, 29] show that the partition-based formulation often gives significant speed-ups compared to the big-M or convex hull formulations. Here, we do not focus on the computational efficiency of optimizing over ReLU-based DNNs, and, for the sake of clarity, we use the simpler big-M formulation (1). In fact, for the problems considered in this work, the computational time to solve the MILP problems did not represent a limiting factor (big-M tends to actually perform relatively well for simple optimization problems). But, alternative/stronger formulations could directly be used within our framework. ### The optimal sparse input features (OSIF) problem With the MILP encoding of the DNN, we can rigorously analyze the DNN and find extreme points in the input-output mapping. Recently, Kronqvist et al. [17] illustrated a similar optimal sparse input features (OSIF) problem. This problem aims to maximize the classification confidence of a certain label \(i\) using at most \(\aleph_{0}\) non-zero input features for a given trained DNN. The problem is formed by encoding the ReLU activation function for each hidden node by MILP. Instead of the softmax function, the objective function is \(x_{i}^{L}\), where \(x^{L}\) is the output vector, thus maximizing the classification confidence of label \(i\). Using the big-M formulation, the OSIF problem can be stated as max \[x_{i}^{L}\] (2a) s.t. \[W^{l}x^{l}+b^{l}\leq x^{l+1},\ \forall l\in[L-1],\] (2b) \[W^{l}x^{l}+b^{l}-\text{diag}(LB^{l+1})(\mathbf{1}-\sigma^{l+1}) \geq x^{l+1},\ \forall\ l\in[L-1],\] (2c) \[x^{l}\leq\text{diag}(UB^{l})\sigma^{l},\ \sigma^{l}\in\{0,1\}^{n_{l}}, \forall\ l\in\{2,\ldots,L\},\] (2d) \[x^{L}=W^{L-1}x^{L-1}+b^{L-1},x^{L}\in\mathbb{R}^{10},\] (2e) \[x^{l}\in\mathbb{R}_{+}^{n_{l}},\forall\ l\in[L-1],\] (2f) \[y\geq x^{1},y\in\{0,1\}^{n_{1}},\] (2g) \[\mathbf{1}^{\top}y\leq\aleph_{0},\] (2h) where \(n_{1}\) is the size of the input data, \(x^{L}\in\mathbb{R}^{10}\) is the output, \(L\) is the number of layers, \(LB^{l}\) and \(UB^{l}\) are the bounds on \(x^{l}\), \(\mathbf{1}\) denotes the all-ones vector, and \(\text{diag}(\cdot)\) denotes the matrix with \(\cdot\) on the diagonal and \(0\) on the other entries. Eq. (2g) and (2h) describe the cardinality constraint \(\|x^{1}\|_{0}\leq\aleph_{0}\), which limits the number of selected inputs. Fig. 1 shows some example results of solving problem (2) with \(\aleph_{0}\in\{10,20\}\) and class \(i\in\{0,1\}\). For the larger cardinality \(\aleph_{0}=20\), the digits are more visually recognizable from the selected pixels. ### Input distribution constraints To extract information across the whole input image, we propose to add constraints to force the selected pixels to be distributed evenly across some pre-defined partitioning of the input space. Forcing the selected pixels to be more spread out may also mitigate the effect of inaccuracy of the DNN used in the OSIF problem, _e.g.,_ by preventing a small area from being given overly high priority.
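Before the distribution constraints are made precise, the following sketch shows how problem (2) can be assembled with gurobipy, reusing the add_relu_bigM helper and imports from the previous snippet; the layer weights, biases, and pre-activation bounds are assumed to be given as NumPy arrays, and the function name is our own.

```python
import numpy as np

def build_osif(weights, biases, bounds, label, card):
    """Problem (2): maximize output logit 'label' with at most 'card' non-zero pixels."""
    m = gp.Model("OSIF")
    n1 = weights[0].shape[1]
    x = [m.addVar(lb=0.0, ub=1.0) for _ in range(n1)]          # scaled pixels
    y = m.addVars(n1, vtype=GRB.BINARY)                        # selection indicators
    for s in range(n1):
        m.addConstr(x[s] <= y[s])                              # (2g): y_s = 0 turns pixel off
    m.addConstr(gp.quicksum(y[s] for s in range(n1)) <= card)  # (2h)
    layer = x
    for l in range(len(weights) - 1):                          # hidden ReLU layers
        W, b = weights[l], biases[l]
        layer = [add_relu_bigM(m, layer, W[i], b[i], *bounds[l][i])
                 for i in range(W.shape[0])]
    W, b = weights[-1], biases[-1]                             # output layer is affine, (2e)
    logit = gp.quicksum(W[label, k] * layer[k] for k in range(len(layer))) + b[label]
    m.setObjective(logit, GRB.MAXIMIZE)
    return m, x, y
```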
There are various ways to partition the input variables. In this paper, we focus on image classification problems with square images as input. Furthermore, we assume that the images are roughly centered.

Figure 1: The results of OSIF for class \(0\) and \(1\) on MNIST (\(\aleph_{0}\in\{10,20\}\)). Note that the selected pixels are white.

Therefore, we denote each input variable as a matrix \(X\in\mathbb{R}^{n\times n}\) and define the partition as \(k^{2}\) submatrices of equal size, _i.e.,_ \(X^{ij}\in\mathbb{R}^{\frac{n}{k}\times\frac{n}{k}}\) for \(i,j\in[k]\). For instance, given \(n\) is even and \(k=2\), a natural partition of the matrix is \[X=\begin{pmatrix}X^{11}&X^{12}\\ X^{21}&X^{22}\end{pmatrix}.\] In this way, we denote by \(x^{1}:=\text{vec}(X)\) the input data and by \(n_{1}:=n^{2}\) the size of the input data; then \(I_{ij}\) is the index set for the entries mapped from \(X^{ij}\), \[I_{ij}=\{(i_{1}-1)n+i_{2}\mid i_{1}\in\{(i-1)\tfrac{n}{k}+1,\ldots,i\tfrac{n}{k}\},i_{2}\in\{(j-1)\tfrac{n}{k}+1,\ldots,j\tfrac{n}{k}\}\}.\] We denote the collection of index sets for the partition as \(\mathcal{I}:=\{I_{i,j}\}_{\forall i,j\in[k]}\). To limit the number of pixels selected from each box for each category we add the following constraints \[\lfloor\frac{\aleph_{0}}{k^{2}}\rfloor\leq\sum_{i\in I_{t}}y_{i}\leq\lceil\frac{\aleph_{0}}{k^{2}}\rceil,\ \forall I_{t}\in\mathcal{I}. \tag{3}\] The constraint (3) forces the pixels to spread evenly between all partitions, while allowing some to contain one more selected pixel for each category. To illustrate the impact of the distribution constraints, Fig. 2 compares the selected pixels for MNIST with \(k\in\{1,2\}\). Compared to the result without distribution constraints (equivalent to \(k=1\)), pixels selected with \(k=2\) are more scattered over the whole image and are more likely to identify generalizable distinguishing features of the full input data, assuming the dataset has been pre-processed for unused areas of the image matrices.

Figure 2: Optimal input features for MNIST (\(\aleph=50\))

### Controlling the number of selected features Repeatedly solving the OSIF problem (2) for each label, _i.e.,_ \(i\in\{0,\ldots,9\}\), and taking the union of all selected pixels does not give us full control of the total number of selected pixels. Specifically, some of the pixels can be selected by the OSIF problem for multiple classes, resulting in fewer combined pixels (the union of selected subsets) than an initial target. Therefore, we present an approach to control the number of selected inputs, which we use in the proposed MILP-based feature selection algorithm. The main idea is to allow freedom over features already selected by previous classes in the current OSIF problem and adjust the cardinality constraint (2h) to \[\sum_{i\in[n_{1}]\setminus J}y_{i}\leq\aleph_{0}, \tag{4}\] where \(J\) is the index set for input features selected by previous models. Similarly, constraints (3) are adjusted as \[\lfloor\frac{\aleph_{0}}{k^{2}}\rfloor\leq\sum_{i\in I_{t}\setminus J}y_{i}\leq \lceil\frac{\aleph_{0}}{k^{2}}\rceil,\ \forall I_{t}\in\mathcal{I}. \tag{5}\] Finally, we formulate the OSIF problem with input distribution and total number control as \[OSIF(\mathcal{M},i,\aleph_{0},\mathcal{I},J)=\text{argmax}\ \{x_{i}^{L}\ \mid\text{(2b)-(2g)},\text{(4)},\text{(5)}\}. \tag{6}\] Based on the described techniques, we introduce the input feature selection algorithm, which is presented as pseudocode in Alg. 1; a Python sketch of the same procedure is given below, ahead of the listing.
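The sketch below shows how the index sets \(\mathcal{I}\), the distribution constraints (3)/(5), and the overall selection loop can be realized, reusing gp from the earlier snippets; osif_solve is a hypothetical wrapper that builds and optimizes problem (6) (for instance from the build_osif helper above) and returns the optimal input vector.

```python
import math
import numpy as np

def partition_indices(n, k):
    """Index sets I_ij of the k x k block partition of an n x n image."""
    idx = np.arange(n * n).reshape(n, n)
    return [idx[i*n//k:(i+1)*n//k, j*n//k:(j+1)*n//k].ravel()
            for i in range(k) for j in range(k)]

def add_distribution_constraints(m, y, n, k, card, J=frozenset()):
    # Constraints (3), or (5) when J is non-empty: bound the number of newly
    # selected pixels in every block between floor and ceil of card / k^2.
    lo, hi = math.floor(card / k**2), math.ceil(card / k**2)
    for I_t in partition_indices(n, k):
        block = gp.quicksum(y[int(s)] for s in I_t if int(s) not in J)
        m.addConstr(block >= lo)
        m.addConstr(block <= hi)

def milp_feature_selection(osif_solve, n1, aleph, classes=range(10)):
    """Alg. 1: accumulate the selected pixel indices J over all classes."""
    J, card = set(), aleph // 10                   # aleph_0 = aleph / 10
    for i in classes:
        x1 = osif_solve(i, card, J)                # optimal input for class i
        J |= {s for s in range(n1) if x1[s] > 0}   # keep the non-zero pixels
    return J
```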
``` Data: the number of features \(\aleph\), a trained DNN \(\mathcal{M}\), a matrix partition set \(\mathcal{I}\), class set \(\mathcal{C}\); Input: \(J\leftarrow\emptyset\); Output: Index set \(J\); \(\aleph_{0}\leftarrow\aleph/10\); for \(i\in\mathcal{C}\) do \(x\gets OSIF(\mathcal{M},i,\aleph_{0},\mathcal{I},J)\) ; # Eq. (6) \(J\gets J\cup\{s\mid x_{s}^{1}>0,s\in[n_{1}]\}\); end for ``` **Algorithm 1** MILP-based feature selection (MILP-based selection) ## 3 Computational results In this paper, we focus on image classification problems for the MNIST [6] and FashionMNIST [34] datasets. Both datasets consist of a training set of 60,000 examples and a test set of 10,000 examples. Each sample image in both datasets is a \(28\times 28\) grayscale image associated with a label from 10 classes. MNIST is a dataset of handwritten single digits between 0 and 9. FashionMNIST is a dataset of Zalando's article images with 10 categories of clothing. There is one fundamental difference between the two data sets, besides FashionMNIST being a somewhat more challenging data set for classification. In MNIST there are significantly more pixels that do not change in any of the training images, or only change in a few images, compared to FashionMNIST. The presence of such "dead" pixels is an important consideration for input feature selection. Image preprocessing and DNN training are implemented in PyTorch [24], and the MILP problems are modeled and solved by Gurobi through the Python API [12]. We trained each DNN with 2 hidden layers of the same size. ### Accuracy of DNNs with sparse input features The goal is to illustrate that Alg. 1 can successfully identify low-dimensional salient input features. We chose to focus on small DNNs, as a \(2\times 20\) DNN can already achieve an accuracy of 95.7% (resp. 86.3%) for MNIST (resp. FashionMNIST) and larger DNNs did not give clear improvements for the input selection. For such models, the computational cost of solving the MILPs is low1. Footnote 1: On a laptop with a 10-core CPU, Gurobi can solve instances with \(\aleph=100\) and a DNN of \(2\times 20\) in under 15 seconds. However, previous research [1, 29] has shown that significant speed-ups can be obtained by using a more advanced MILP approach. Table 1 and Table 2 present the accuracies of DNNs with sparse input features on MNIST and FashionMNIST. It is possible to obtain a much higher accuracy by considering a more moderate input reduction (about 0.5% accuracy drop with 200-300 inputs), but this defeats the idea of finding low-dimensional salient features. For grayscale input images of \(28\times 28\) pixels, we select at most 15% of the input features and present the results with \(\aleph\in\{50,100\}\). #### 3.1.1 MILP-based feature selection Table 1 compares the accuracy of classification models with different architectures, _i.e.,_ with \(2\times 20\) vs. with \(2\times 40\). We select sparse input features by Alg. 1 with OSIF models \(2\times 10\) and \(2\times 20\). Since the distribution constraints are supposed to force at least one pixel to be selected in each submatrix, we select the partition number \(k\in\{1,2\}\) and \(k\in\{1,2,3\}\) for instances with \(\aleph=50\) and \(\aleph=100\), respectively. First, we investigate the effect of the distribution constraints. Table 1 and Table 2 both show that the accuracy increases when adding the distribution constraints (_i.e._, from \(k=1\) to \(k=2\)) for \(\aleph\in\{50,100\}\).
However, the distribution constraints become less important as the number of selected features \(\aleph\) increases; the best \(k\) for MNIST and FashionMNIST varies for \(\aleph=100\) (noting that the choice of \(k\) also affects accuracy less). For MNIST with \(\aleph=100\), the accuracy of instances drops slightly as \(k\) increases from 2 to 3, while using input features selected with \(k=3\) leads to slightly higher accuracy for FashionMNIST with \(\aleph=100\).

\begin{table} \begin{tabular}{c|c c c|c c c} \multicolumn{7}{c}{DNNs of \(2\times 20\) hidden layers} \\ \hline \(\aleph\) & OSIF Model & \(k\) & Acc. & OSIF Model & \(k\) & Acc. \\ \hline \multirow{2}{*}{50} & \(2\times 10\) & 1 & 80.6\% & \(2\times 20\) & 1 & 80.5\% \\ & \(2\times 10\) & 2 & **85.3\%** & \(2\times 20\) & 2 & **86.6\%** \\ \hline \multirow{3}{*}{100} & \(2\times 10\) & 1 & 88.8\% & \(2\times 20\) & 1 & 89.0\% \\ & \(2\times 10\) & 2 & **91.2\%** & \(2\times 20\) & 2 & **90.6\%** \\ & \(2\times 10\) & 3 & 89.3\% & \(2\times 20\) & 3 & 89.3\% \\ \hline \multicolumn{7}{c}{DNNs of \(2\times 40\) hidden layers} \\ \hline \(\aleph\) & OSIF Model & \(k\) & Acc. & OSIF Model & \(k\) & Acc. \\ \hline \multirow{2}{*}{50} & \(2\times 10\) & 1 & 83.1\% & \(2\times 20\) & 1 & 82.7\% \\ & \(2\times 10\) & 2 & **87.6\%** & \(2\times 20\) & 2 & **89.2\%** \\ \hline \multirow{3}{*}{100} & \(2\times 10\) & 1 & 91.4\% & \(2\times 20\) & 1 & 91.4\% \\ & \(2\times 10\) & 2 & **93.4\%** & \(2\times 20\) & 2 & **92.9\%** \\ & \(2\times 10\) & 3 & 92.3\% & \(2\times 20\) & 3 & 91.8\% \\ \hline \end{tabular} \end{table} Table 1: Accuracy of DNNs of different architectures with sparse input features selected by Alg. 1 on MNIST

One reason behind this difference could be that input pixels of MNIST and FashionMNIST are activated in different patterns. In MNIST, there are more active pixels in the center, and the peripheral pixels stay inactive over the training data. Given the distribution constraints with \(k=2\) (see Fig. 3(b)), the selected pixels stay away from the peripheral area. However, when \(k=3\) (see Fig. 3(c)), more pixels are forced to be chosen from the upper right-hand and bottom left-hand corners. In contrast, the active pixels are more evenly spread across the full input in FashionMNIST. Hence, as \(k\) increases (see Fig. 3(e) and Fig. 3(f)), the active pixels remain well-covered by the evenly scattered selected pixels. Next, Table 1 and Table 2 also compare OSIF using different DNN architectures, _i.e._, \(2\times 10\) and \(2\times 20\). The accuracy is 94% (resp. 84%) for the former and 96% (resp. 86%) for the latter for MNIST (resp. FashionMNIST). The results show that even using a simple DNN for feature selection using OSIF can produce feature sets that achieve good accuracy, when appropriately large classification models are trained on the selected features. For MNIST (see Table 1), the former model has an accuracy at most 2 points worse than the latter model when \(\aleph=50\). When \(\aleph=100\), both models achieve similar levels of performance. As for FashionMNIST, the difference is at most 1 point for \(\aleph\in\{50,100\}\). Hence, we cannot observe a clear difference between the two OSIF DNN models in terms of feature selection quality. Finally, we would like to make a brief remark on the improvement in accuracy by increasing the size of the DNNs for classification, _i.e.,_ from \(2\times 20\) to \(2\times 40\), for both MNIST and FashionMNIST.
Unsurprisingly, using larger DNNs results in overall higher accuracy. More importantly, the performance of the proposed input feature selection seems to be robust toward the final architecture. For both architectures, we observe a similarly slight reduction in classification accuracy related to the reduction in the number of input features (pixels).

\begin{table} \begin{tabular}{c|c c c|c c c} \multicolumn{7}{c}{DNNs of \(2\times 20\) hidden layers} \\ \hline \(\aleph\) & OSIF Model & \(k\) & Acc. & OSIF Model & \(k\) & Acc. \\ \hline \multirow{2}{*}{50} & \(2\times 10\) & 1 & 76.6\% & \(2\times 20\) & 1 & 77.2\% \\ & \(2\times 10\) & 2 & **77.0\%** & \(2\times 20\) & 2 & **77.9\%** \\ \hline \multirow{3}{*}{100} & \(2\times 10\) & 1 & 81.1\% & \(2\times 20\) & 1 & 81.3\% \\ & \(2\times 10\) & 2 & **82.3\%** & \(2\times 20\) & 2 & 81.8\% \\ & \(2\times 10\) & 3 & 82.2\% & \(2\times 20\) & 3 & **82.1\%** \\ \hline \multicolumn{7}{c}{DNNs of \(2\times 40\) hidden layers} \\ \hline \(\aleph\) & OSIF Model & \(k\) & Acc. & OSIF Model & \(k\) & Acc. \\ \hline \multirow{2}{*}{50} & \(2\times 10\) & 1 & 78.3\% & \(2\times 20\) & 1 & 78.6\% \\ & \(2\times 10\) & 2 & **79.4\%** & \(2\times 20\) & 2 & **79.6\%** \\ \hline \multirow{3}{*}{100} & \(2\times 10\) & 1 & 82.4\% & \(2\times 20\) & 1 & 82.6\% \\ & \(2\times 10\) & 2 & 83.1\% & \(2\times 20\) & 2 & 83.2\% \\ & \(2\times 10\) & 3 & **83.7\%** & \(2\times 20\) & 3 & **84.0\%** \\ \hline \end{tabular} \end{table} Table 2: Accuracy of DNNs of different architectures with sparse input features selected by Alg. 1 on FashionMNIST

#### 3.1.2 Comparisons between feature selection approaches In this section, we compare the performance of the MILP-based feature-selection approach (_i.e.,_ Alg. 1) to some other simple feature-selection approaches. The other feature selection techniques considered in the comparison are random feature selection, data-based feature selection, and DNN weights-based feature selection. In the following paragraphs, we briefly describe the feature selection algorithms that are used as reference points for the comparison. The random feature selection uniformly samples a subset of \(\aleph\) input features, and we present the average accuracy with the standard deviation over 5 DNNs trained with input features independently selected by this approach. This approach is included, as it is the simplest approach to down-sample the full input. The data-based feature selection is conducted in the following way: i) we first calculate the mean value of each input feature over the whole training dataset; ii) the \(\aleph\) features with the largest mean are selected. The motivation behind this simple heuristic is that it selects the pixels that are most strongly colored over the training data, _i.e._, the strongest signals. For example, selecting a pixel uncolored in all images of the training data does not make sense, as that input does not contain any information about the training data. In the DNN weights-based approach, we use the same DNN models as we use in the MILP-based selection, but the inputs are now selected based on the weights of the inputs. For each input, we sum up the absolute values of all the weights from the input to the nodes in the consecutive layer and select the ones with the largest sum. This can be seen as a form of pruning of inputs, and the motivation is that inputs with small, or almost zero, weights should be less important, as these inputs have less impact on the DNN.
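The three baselines just described are straightforward to implement; below is a minimal numpy sketch (function names are ours, and W1 denotes the trained first-layer weight matrix).

```python
import numpy as np

def random_selection(n1, aleph, seed=0):
    rng = np.random.default_rng(seed)
    return rng.choice(n1, size=aleph, replace=False)

def data_based_selection(X_train, aleph):
    # X_train: (n_samples, n1) with pixels scaled to [0, 1];
    # pick the pixels with the largest mean intensity over the training set.
    return np.argsort(X_train.mean(axis=0))[-aleph:]

def weights_based_selection(W1, aleph):
    # W1: (n_hidden, n1) first-layer weights; rank inputs by the sum of
    # absolute weights leaving each input node.
    return np.argsort(np.abs(W1).sum(axis=0))[-aleph:]
```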
In Table 3, we compare MILP-based feature selection (_i.e.,_ Alg. 1) with random selection, data-based selection, and weights-based selection. The column for Alg. 1 shows the best result from Table 1; with \(\aleph=100\), the accuracy of DNNs with sparse input features is only 5 points lower than the accuracy with full input features. It can be observed that our method has the best overall performance. For MNIST, the random selection has the worst performance, but the data-based selection and weights-based selection achieve only a slightly worse performance than our method.

Figure 3: MNIST and FashionMNIST input features selected by Alg. 1 with \(\aleph=100\) and \(k\in\{1,2,3\}\)

Table 4 compares the performance of the feature selections on FashionMNIST. The results show a different pattern to MNIST. Our methods still have the best overall performance over different settings by maintaining the accuracy at 82% (resp. 84%) with \(\aleph=100\) for DNNs \(2\times 20\) (resp. \(2\times 40\)), while the accuracy of DNNs with full input features is 86% (resp. 88%). While delivering the worst performance on MNIST, random selection has a very close performance to our method on FashionMNIST. The weights-based selection still ranks third overall, while the data-based selection is much worse than the other three methods (_e.g.,_ 58% for the DNN \(2\times 20\) with \(\aleph=100\) and 62% for the DNN \(2\times 40\) with \(\aleph=100\)). The weights-based selection performs decently on both data sets compared to random and data-based selection. However, based on the results it is clear that MILP-based selection (_i.e.,_ Alg. 1) can extract more knowledge from the DNN model regarding the importance of inputs compared to simply analyzing the weights. The overall performance of MILP-based selection is more stable than that of the other feature selection methods on both datasets. ### Robustness to adversarial inputs The robustness of a trained DNN classifier can also be analyzed using MILP, _e.g.,_ in verification or finding adversarial inputs. We use the minimally distorted adversary as a measure of model robustness under the \(\ell_{\infty}\) norm [10].
For a given image \(x_{\text{image}}\), the minimal adversary problem [29] can be formulated as \[\min\ \epsilon\quad\text{s.t. }\text{(2b)-(2f)},\ x_{i}^{L}\leq x_{j}^{L},\ \|x^{1}-x_{\text{image}}\|_{\infty}\leq\epsilon, \tag{7}\] where \(i\) is the true label of image \(x_{\text{image}}\) and \(j\) is an adversarial label. Simply put, problem (7) finds the smallest perturbation, defined by the \(\ell_{\infty}\) norm, such that the trained DNN erroneously classifies image \(x_{\text{image}}\) as the adversarial label \(j\). We hypothesize that DNNs trained with fewer (well-selected) features are more robust to such attacks, as there are fewer inputs as degrees of freedom. Furthermore, we note that the robustness of smaller DNNs can be analyzed with significantly less computational effort. Table 5 shows the minimal adversarial distance (mean and standard deviation over 100 instances), defined by (7), for DNNs trained on MNIST and FashionMNIST with MILP-based feature selection. The adversaries are generated for the first 100 instances of the respective test datasets, with adversarial labels selected randomly. Furthermore, we report the mean percentage increase, \(\Delta\), in minimal adversarial distance over the 100 instances for the reduced input DNNs compared to the full input. In all cases, reducing the number of inputs \(\aleph\) results in a more robust classifier. For the \(2\times 40\) DNN trained on FashionMNIST, reducing the number of inputs from 784 to 50 increases the mean minimal adversarial distance by almost 90%, with a loss in accuracy of \(<\)10%.

\begin{table} \begin{tabular}{c|c c c c c|c} \hline \multirow{2}{*}{DNN} & \multicolumn{5}{c|}{Feature Selection Approaches} & \multirow{2}{*}{\(\aleph=784\)} \\ \cline{2-6} & \(\aleph\) & MILP-based & Random & Data-based & Weights-based & \\ \hline \multirow{2}{*}{\(2\times 20\)} & 50 & 86.6\% & \(77.4\pm 2.2\%\) & 81.3\% & 80.7\% & \multirow{2}{*}{95.7\%} \\ & 100 & 91.2\% & \(86.0\pm 2.5\%\) & 89.2\% & 89.0\% & \\ \hline \multirow{2}{*}{\(2\times 40\)} & 50 & 89.2\% & \(76.0\pm 3.9\%\) & 85.1\% & 83.3\% & \multirow{2}{*}{97.1\%} \\ & 100 & 93.4\% & \(90.6\pm 1.1\%\) & 92.4\% & 91.2\% & \\ \hline \end{tabular} \end{table} Table 3: Accuracy of DNNs with sparse input features selected by different methods on MNIST

\begin{table} \begin{tabular}{c|c c c c c|c} \hline \multirow{2}{*}{DNN} & \multicolumn{5}{c|}{Feature Selection Approaches} & \multirow{2}{*}{\(\aleph=784\)} \\ \cline{2-6} & \(\aleph\) & MILP-based & Random & Data-based & Weights-based & \\ \hline \multirow{2}{*}{\(2\times 20\)} & 50 & 77.9\% & \(77.2\pm 0.8\%\) & 49.6\% & 73.8\% & \multirow{2}{*}{86.3\%} \\ & 100 & 82.3\% & \(80.6\pm 1.1\%\) & 58.4\% & 80.3\% & \\ \hline \multirow{2}{*}{\(2\times 40\)} & 50 & 79.6\% & \(78.8\pm 0.4\%\) & 51.6\% & 74.6\% & \multirow{2}{*}{87.5\%} \\ & 100 & 84.0\% & \(82.8\pm 0.4\%\) & 62.3\% & 81.7\% & \\ \hline \end{tabular} \end{table} Table 4: Accuracy of DNNs with sparse input features selected by different methods on FashionMNIST

## 4 Conclusion In this paper, we have presented an MILP-based framework using trained DNNs to extract information about salient input features. The proposed algorithm is able to drastically reduce the size of the input by using the input features that are most important for each category according to the DNN, given a regularization on the input size and spread of selected features.
The numerical results show that the proposed algorithm is able to efficiently select a small set of features for which good prediction accuracy can be obtained. The results also show that the proposed input feature selection can improve robustness against adversarial attacks.
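For illustration, the big-M encoding behind problem (7) can be set up with an off-the-shelf MILP modeler. The following is a minimal sketch for a one-hidden-layer ReLU network with inputs in \([0,1]\), using PuLP; it is our simplified rendering of the standard encoding under these assumptions, not the exact formulation used in the experiments:

```python
import pulp

def minimal_adversary(W1, b1, W2, b2, x_image, true_i, adv_j, M=100.0):
    """Big-M MILP sketch of problem (7) for a one-hidden-layer ReLU net.
    Binary z[k] = 1 marks an inactive ReLU unit (output forced to 0)."""
    n, h = len(x_image), len(b1)
    prob = pulp.LpProblem("minimal_adversary", pulp.LpMinimize)
    eps = pulp.LpVariable("eps", lowBound=0)
    x = [pulp.LpVariable(f"x_{d}", lowBound=0, upBound=1) for d in range(n)]
    a = [pulp.LpVariable(f"a_{k}", lowBound=0) for k in range(h)]
    z = [pulp.LpVariable(f"z_{k}", cat="Binary") for k in range(h)]
    prob += eps                                   # objective: min epsilon
    for k in range(h):
        pre = pulp.lpSum(W1[k][d] * x[d] for d in range(n)) + b1[k]
        prob += a[k] >= pre                       # a = ReLU(pre) via big-M:
        prob += a[k] <= pre + M * z[k]            # z=0 -> a = pre (active)
        prob += a[k] <= M * (1 - z[k])            # z=1 -> a = 0 (inactive)
    def logit(j):
        return pulp.lpSum(W2[j][k] * a[k] for k in range(h)) + b2[j]
    prob += logit(true_i) <= logit(adv_j)         # force misclassification
    for d in range(n):                            # l_inf ball around x_image
        prob += x[d] - x_image[d] <= eps
        prob += x_image[d] - x[d] <= eps
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(eps)
```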
2310.04120
A General Approach to Dropout in Quantum Neural Networks
In classical Machine Learning, "overfitting" is the phenomenon occurring when a given model learns the training data excessively well, and it thus performs poorly on unseen data. A commonly employed technique in Machine Learning is the so-called "dropout", which prevents computational units from becoming too specialized, hence reducing the risk of overfitting. With the advent of Quantum Neural Networks as learning models, overfitting might soon become an issue, owing to the increasing depth of quantum circuits as well as the multiple embeddings of classical features, which are employed to provide the computational nonlinearity. Here we present a generalized approach to apply the dropout technique in Quantum Neural Network models, defining and analysing different quantum dropout strategies to avoid overfitting and achieve a high level of generalization. Our study allows us to envision the power of quantum dropout in enabling generalization, providing useful guidelines on determining the maximal dropout probability for a given model, based on overparametrization theory. It also highlights how quantum dropout does not impact the features of the Quantum Neural Network model, such as expressibility and entanglement. All these conclusions are supported by extensive numerical simulations, and may pave the way to efficiently employing deep Quantum Machine Learning models based on state-of-the-art Quantum Neural Networks.
Francesco Scala, Andrea Ceschini, Massimo Panella, Dario Gerace
2023-10-06T09:39:30Z
http://arxiv.org/abs/2310.04120v1
# A General Approach to Dropout in Quantum Neural Networks ###### Abstract In classical Machine Learning, "overfitting" is the phenomenon occurring when a given model learns the training data excessively well, and it thus performs poorly on unseen data. A commonly employed technique in Machine Learning is the so called "dropout", which prevents computational units from becoming too specialized, hence reducing the risk of overfitting. With the advent of Quantum Neural Networks as learning models, overfitting might soon become an issue, owing to the increasing depth of quantum circuits as well as multiple embedding of classical features, which are employed to give the computational nonlinearity. Here we present a generalized approach to apply the dropout technique in Quantum Neural Network models, defining and analysing different quantum dropout strategies to avoid overfitting and achieve a high level of generalization. Our study allows to envision the power of quantum dropout in enabling generalization, providing useful guidelines on determining the maximal dropout probability for a given model, based on overparametrization theory. It also highlights how quantum dropout does not impact the features of the Quantum Neural Networks model, such as expressibility and entanglement. All these conclusions are supported by extensive numerical simulations, and may pave the way to efficiently employing deep Quantum Machine Learning models based on state-of-the-art Quantum Neural Networks. ## I Introduction Machine Learning (ML) has been rapidly emerging in the last few years as a most promising computational model to analyze and extract insights from large and complex datasets, improve decision-making processes, and automate different tasks across a wide range of industrial processes [1; 2]. ML models generally require high flexibility, with lots of trainable parameters in order to learn the underlying function in a supervised fashion. However, being able to learn with low in-sample error is not enough: it is also desirable to have a model that is capable of high _generalization_, meaning that it is able to provide good predictions on previously unseen data [3]. Overfitting is a common issue in ML when dealing with highly expressive models [4; 5]. It is a phenomenon that occurs when a model is trained too well on the training data, and as a result, performs poorly on new, unseen data; if the model's performance on the testing set is significantly worse than on the training set, it probably indicates overfitting. This happens because the model has learnt the noise in the training data, rather than the underlying pattern that is generalizable to new data. When a learning model has a high number of parameters relative to the amount of training data, then it is likely to produce overfitting: the model becomes too complex for the amount of data it has been trained on, resulting in a lack of generalization. Deep Neural Networks (DNNs) are powerful neural network models that are able to employ a large number of parameters, which allows to well approximate complex functions and achieve high accuracy in training. However, the high complexity of these models can also lead to overfitting the training data. To mitigate overfitting, regularization methods such as the "dropout" are widely employed in the Deep Learning (DL) community. 
Dropout is a technique that acts by randomly dropping either neurons or connections within a DNN during the training phase, to block the information flow and prevent units from becoming too specialized, thus reducing the risk of overfitting [6]. Recently, a field combining ML with the principles of quantum computing has been emerging: Quantum Machine Learning (QML). The goal of QML is to leverage the unique properties of quantum systems, such as superposition and entanglement, to improve the performance of ML and DL algorithms. The potential advantages of QML include faster training times [7], improved generalization [8], and the ability to solve problems that are intractable for classical algorithms [9; 10]. Even though prototypes of real quantum computers are available nowadays, these are still of limited size and strongly affected by noise, posing an additional challenge to researchers in the field and giving rise to the so-called Noisy Intermediate Scale Quantum (NISQ) era [11; 12; 13]. Variational Quantum Algorithms (VQA) [14] are a particular QML approach that has been demonstrated to be very efficient in dealing with such noisy hardware by exploiting the combination of Parametrized Quantum Circuits (PQC), often also referred to as Quantum Neural Networks (QNNs) [15; 16; 17; 18], and an optimization process run on a classical computer [19; 20; 21; 22; 23; 24; 25; 26; 27]. The search for high generalization capability also applies to QNN models [28; 29; 8], in which a quantum model can be trained to analyse classical or quantum data. Since quantum mechanics is intrinsically linear, in order to nonlinearly analyse classical data and enhance the expressive power of QML models, the data re-uploading technique was introduced [30]. Further efforts on the relationship between classical data encoding and generalization performance suggest that overfitting may be caused by the repeated encoding of classical data [28; 29]. Although overfitting may also occur in deep QML models, little has been said so far about how to deal with it in practice. If, on the one hand, training deep overparametrized quantum models [31; 32; 33; 34] greatly helps to avoid barren plateaus [35; 36; 37; 38; 39; 40], on the other hand the presence of multiple data re-uploadings and redundancy in the parameters may lead to overfitting the data. Some bounds for the scaling of the generalization error are given in Ref. [8], but their application to deep QML models has not been explored yet. Moreover, a recent work [41] reports on the limitations of uniform generalization bounds applied to QML models. The seminal idea of implementing dropout to address the overfitting problem in QNNs is given in Ref. [42], where the presence or absence of a unitary operation is controlled by an ancillary register. More recently, it has been proposed to randomly select and measure one of the qubits and set it aside for a certain number of optimization steps during the training of PQCs [43], which is a sort of dropout technique. Nevertheless, this type of dropout regularization does not seem to solve the overfitting issue. Recently, entangling dropout was proposed to address the problem of overfitting in deep QNNs [44]: some entangling gates (specifically, controlled-NOT ones) are randomly removed during the training phase in order to decrease the expressibility of the QML model.
In order to clarify possible misunderstandings on the relationship between the present work with others already present in the literature, let us point out that the terminology 'quantum dropout' has been employed also in the context of Quantum Approximate Optimization Algorithms [45], which consists in a completely different technique. In addition, _classical_ dropout has been used to avoid overfitting in hybrid Quantum Convolutional NNs [46], which is not what we mean as quantum dropout. In this paper, we present the first extensive theoretical assessment of the dropout technique applied to QNNs (i.e., _quantum dropout_) inspired by the comparison between classical and quantum models, in which the role of artificial neurons is implemented by rotation gates, whereas entangling gates work as inter-neural connections. Taking into account this analogy, rotations are randomly removed, in addition to some entangling gates depending on the chosen dropout method. Additionally, starting from the overparametrization theory, we provide guidelines about how to determine the maximal dropout probability for a given model. Our results indicate that all the proposed quantum dropout strategies are effective in mitigating overfitting, especially when parametrized quantum gates are dropped. Moreover, we examine the use of parameter rescaling for quantum dropout strategies, a potentially useful strategy that has not been addressed in the previous literature. Interestingly, unlike classical dropout, quantum dropout does not benefit from parameter rescaling. As a consequence, we analyse the learning performances without it. Finally, we analyse the behaviour of genuine quantum features related to the QNN when quantum dropout is applied. The main conclusion is that quantum dropout does not reduce expressibility [22] or entanglement [47] when the QNN is overparametrized. These findings have significant implications for the effective implementation of deep QNN models and provide a promising foundation for the development of more efficient strategies for QNNs training. The paper is structured as follows. Section II provides a comprehensive overview of QNNs, overparametrization and dropout. Section III presents a general introduction to dropout for QNNs and examines the proposed strategies. Section IV delivers and discusses the main results of numerical experiments on quantum dropout, together with an analysis of entanglement, expressibility and parameters scaling with QNNs. Conclusions are drawn in Sec. V. Lastly, the employed Methods are illustrated in Section VI. ## II Technical background ### Quantum Neural Networks Due to their similarities with classical NNs, PQCs employed within VQAs are often referred to as QNNs [15]. They are composed of a data encoding stage, a parametrized ansatz, and a measurement operation at the end to retrieve the result followed by a classical update of the parameters. Following this scheme, QNNs can learn to perform various tasks, such as regression and classification. Classical patterns \(\mathbf{x}\in\mathbb{R}^{n}\) are encoded into a \(N\) qubit quantum circuit via a quantum feature map, which corresponds to a unitary matrix \(S(\mathbf{x})\) that maps \(\mathbf{x}\) to a \(2^{N}\)-dimensional Hilbert space. The embedding is then followed by the variational ansatz \(W(\mathbf{\theta})\). 
The latter is composed of layers of parametrized single-qubit rotation gates and two-qubit entangling gates: rotation gates are employed to manipulate the quantum state by addressing single qubits independently, while layers of entangling gates allow for the creation of multipartite entanglement [48] throughout the whole state vector (see, e.g., Fig. 8). This can be summarized as

\[W(\mathbf{\theta})S(\mathbf{x})|\mathbf{0}\rangle=W(\mathbf{\theta})|\phi(\mathbf{x})\rangle\,, \tag{1}\]

where \(|\mathbf{0}\rangle=|0\rangle^{\otimes N}\). Data encoding and ansatz unitaries may be applied repeatedly in a data re-uploading fashion to realise more expressive models [30]. In this regard, given \(L\) data re-uploading layers, the final unitary describing the evolution of the quantum state is formally expressed as

\[U_{L}(\mathbf{x};\mathbf{\theta})=\prod_{l=L}^{1}U_{l}(\mathbf{x};\mathbf{\theta})=\prod_{l=L}^{1}W(\mathbf{\theta}^{(l)})S(\mathbf{x})\,, \tag{2}\]

where \(\mathbf{\theta}^{(l)}\) refers to the parameters in the \(l\)-th layer of the quantum circuit. At the end of the quantum circuit, a measurement with respect to an observable is performed to retrieve the outcome of the algorithm:

\[f_{L}(\mathbf{x};\mathbf{\theta})=\langle\mathbf{0}|\,U_{L}^{\dagger}(\mathbf{x};\mathbf{\theta})\hat{O}U_{L}(\mathbf{x};\mathbf{\theta})|\mathbf{0}\rangle\,, \tag{3}\]

where \(\hat{O}\) is the measured operator. In this work, we measure one or more qubits in the Pauli-Z basis, depending on the task. When employed in VQAs, a classical optimizer estimates the cost function from \(f_{L}(\mathbf{x};\mathbf{\theta})\) in order to update the parameters. The data re-uploading strategy affects the ability of QNNs to approximate functions. In fact, as shown in Ref. [16], a quantum model \(f_{L}(\mathbf{x};\mathbf{\theta})\) can be expressed as a partial Fourier series in the data, and by repeating the data encoding stage quantum models can access a range of frequency spectra that grows with \(L\). On the other hand, if the QNN depth increases too much, this procedure will be prone to overfitting the training data due to the enhanced expressiveness [33]. The issue of overfitting is discussed in Sec. II.2, where we also discuss its link to overparametrization [32].

### Overparametrization and overfitting

In classical ML, a model is said to be overparametrized when the number of trainable parameters, \(M\), exceeds the number of training samples. This is often the case for deep learning models, where the number of trainable parameters is usually much larger than the size of the training dataset. In QML, by contrast, this term has taken slightly different meanings depending on the context in which it has been employed [31; 32; 33; 34]. In this work, we apply the definitions provided in Refs. [31; 32]: a QNN is said to be overparametrized when the _average parameter dimension_ (over the whole training set) saturates its maximum achievable value, \(D_{max}=2^{N+1}-2\). This implies that the QNN is fully capable of representing every possible quantum state on \(N\) qubits. As a consequence, adding more trainable parameters will only increase the _redundancy_. In App. B we provide the definitions of the mentioned quantities and we verify that we are indeed working in the overparametrized regime.

Figure 1: Illustration of the similarities between classical and quantum NNs. Quantum rotation gates take the role of artificial neurons, and entangling gates work as connections between them. Neurons process the incoming information (from a previous layer of neurons/quantum circuit) and then send it to all the neurons of the next layers through their connections; for the QNN, this is just a schematic representation; in fact, if the state is entangled, a rotation on one qubit will also affect others. In the classical NN, when one neuron is dropped, it is removed together with all its connections, whereas the canonical quantum dropout leaves some entangling gates unaltered, to avoid a strong impact on the quantum learning process.
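For concreteness, Eqs. (1)-(3) with data re-uploading can be written down in a few lines of PennyLane. This is a minimal sketch assuming a 4-qubit circuit, angle encoding for \(S(x)\), and an RY-plus-CNOT-ring ansatz for \(W(\mathbf{\theta}^{(l)})\); these choices are illustrative and not the exact architectures of Appendix A:

```python
import pennylane as qml
from pennylane import numpy as np

N, L = 4, 10                                    # qubits and re-uploading layers
dev = qml.device("default.qubit", wires=N)

@qml.qnode(dev)
def qnn(x, theta):
    """f_L(x; theta) = <0| U_L(x; theta)^dag O U_L(x; theta) |0>, cf. Eq. (3)."""
    for l in range(L):
        # data-encoding unitary S(x): the scalar input re-encoded on every qubit
        qml.AngleEmbedding([x] * N, wires=range(N), rotation="Y")
        # variational ansatz W(theta^(l)): single-qubit rotations ...
        for q in range(N):
            qml.RY(theta[l, q], wires=q)
        # ... followed by a ring of entangling CNOTs
        for q in range(N):
            qml.CNOT(wires=[q, (q + 1) % N])
    return qml.expval(qml.PauliZ(0))            # O = Pauli-Z on the first qubit

theta = np.random.uniform(0, 2 * np.pi, size=(L, N), requires_grad=True)
y_pred = qnn(0.3, theta)
```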
In order to strengthen the presence of overfitting, we also employ QNNs with multiple data encodings, both in parallel on different qubits and with the data re-uploading technique [28; 29; 8], respectively.

### Dropout

Dropout, which was first proposed in Ref. [6], is a widespread technique to prevent overfitting in classical DL models. It consists in temporarily removing neurons from the network, along with all their incoming and outgoing connections, during the training phase. Each unit is dropped randomly and independently, usually according to a fixed probability \(p\). Dropout is repeated at each iteration and corresponds to sampling a thinned network from the original one. Thus, training an \(n\)-unit neural network with dropout can be seen as training a collection of \(2^{n}\) thinned networks with extensive weight sharing, where each thinned network gets trained very rarely, namely with probability equal to \(1/2^{n}\). At test time, one employs the original neural network with weights rescaled with respect to the ones obtained from training. If a unit is dropped with probability \(p\) during training, the final weight of that unit is multiplied by \(1-p\) at test time. This amounts to averaging the behaviour of the \(2^{n}\) thinned neural networks when using the original one. In addition to the characterization of dropout as an ensemble technique, one can also try to capture its functioning by analysing what happens during training. When applying dropout, the information flow is hindered, hence each hidden neuron is forced to learn to work with a randomly chosen group of other units. This should make each unit more robust and help it develop useful features on its own, without relying on the surrounding units to correct its mistakes (co-adapting); on the other hand, it should also avoid the hyperspecialization of the units. However, the hidden units within a layer will still learn to do different things from each other.

## III Quantum dropout

### General approach

Since overfitting usually appears when a model is overparametrized, we present the quantum dropout technique in the context of overparametrized QNNs. By comparing classical and quantum NNs, one can associate quantum single-qubit rotation gates with neurons and entangling gates with connections between neurons, see Fig. 1. In this view, quantum dropout consists in randomly removing parametrized unitaries in an overparametrized QNN during the training phase. More precisely, herein dropping a gate means that we substitute it with an identity gate. We argue that it is crucial to drop parametrized gates (or groups of gates including some parametrized ones) in order to fully benefit from dropout, since this prevents the QNN from becoming too reliant on specific parameters, thereby reducing the risk of overfitting. By randomly removing these parametrized unitaries, the QNN is forced to rely on a wider range of parameters and learn more robust features.
The general scheme of quantum dropout at each training step, inspired by [44], can be roughly summarized as follows: 1. Randomly select QNN layers according to the layer dropout ratio (\(p_{L}\)), i.e., the probability of choosing a layer to which the dropout is applied. 2. Remove (groups of) gates in the selected layers based on a probability defined as the gate dropout ratio (\(p_{G}\)). This probability determines the likelihood of a quantum gate of being dropped. 3. Compute the gradient of the cost function for the dropout circuit and update the parameters accordingly. 4. Iterate the above procedure until the termination criterion is met. Usually, in classical ML one applies dropout only to certain selected layers with probability \(p\). The selection of the layers is often related to the particular design of the NN under usage, since it is very common to build part of the model with specific purposes. Conversely, in a quantum setting we have a QNN composed of identical repeated layers and this is why we randomly sample the layers where to apply quantum dropout. Given the layer dropout rate \(p_{L}\) and the gate dropout rate \(p_{G}\), the probability \(p\) that a (group of) gate(s) is dropped in a layer can be calculated with the conditioned probability law: \[p=p(A\cap B)=p(A|B)p(B)=p_{G}p_{L} \tag{4}\] where \(A\) is the selection of a specific (group of) gate(s) and \(B\) is the selection of a specific layer. Before applying quantum dropout, one must always consider how many parameters are needed to preserve the full capabilities of the QNN under usage, in order to choose appropriate values of \(p\). If the probability of dropout is set too high, the QNN will leave the overparametrization regime, and this will have direct consequences on its expressibility and entanglement produced. In this work, as a rule of thumb for checking the maximal allowed number of dropped parameters we employ the following inequality: \[M_{max}^{drop}\leq M-D_{max} \tag{5}\] where \(M\) is the total number of parameters, \(D_{max}\) is the maximal parameter dimension (setting the minimum number of parameters required to fully explore the Hilbert space as explained in Sec. II.2). One can then obtain \(p_{max}^{drop}=M_{max}^{drop}/M\). The reader can find more details in Appendix B; we show the effects of dropout on genuine quantum feature in Sec. IV.4 and in Appendices D and E. ### Proposed strategies The quantum dropout scheme described above can be applied in different fashions depending on the dropped unitary operations. We hereby present at first the quantum dropout approach that embodies the more intuitive and practical essence of dropout, then we define alternative quantum dropout techniques. All these are summarized in Tab. 1. Following the classical concept of dropout, a certain neuron (single-qubit rotation) is selected according to the drop probability \(p\) and it is removed together with all its connections (entangling gates). Hence, we define the _canonical quantum dropout_ where all the previous entangling gates, having the single-rotated qubit as a target, and all the next ones, having it as a control, are dropped. To exactly reproduce the classical dropout in a quantum circuit, one should remove all the entangling gates connecting the rotated qubit with the other qubits. However, this is not practical in a quantum setting, since it would deeply change the entire quantum state. 
For this reason, we preferred to define the canonical dropout rather than an exact quantum counterpart of classical dropout. Besides, quantum dropout with PQCs as QNNs will always be intrinsically different from its classical counterpart, because we are acting multiple times on the same qubits, evolving the quantum state. This implies a temporal connection that is never removed by the dropout, even if we were to drop all the entangling gates linked to a single rotation. Exploiting this fact, one can think of a purely quantum _partial dropout_, which has no classical analogue. More in detail, one may want to remove only the entangling gates previous (_canonical-backward_) or subsequent (_canonical-forward_) to the selected rotation gate.1

Footnote 1: Since these two are conceptually equivalent, in this study we will only work with canonical-forward.

It is also possible to drop only single-qubit rotations or only entangling gates: we call these approaches _rotation dropout_ and _entangling dropout_, respectively. The latter approach was previously defined in [44] and is included in our quantum dropout framework as a special case. In fact, it can be considered an improper dropout, since it better fits the definition of a randomized quantum version of the pruning technique [49] (applied only during training), which consists in removing connections in a NN according to their level of importance. In addition, the employment of CNOTs as entangling gates would result in equally-weighted connections, whereas controlled rotations would allow for different weightings of the connections. For this reason, in this work we employ two different QNN models: the first one, proposed in [44], has CNOTs as entangling gates and is utilised for regression, while for classification we choose a QNN with parametrized entangling gates. More details about the QNN structures can be found in Appendix A. Ultimately, one can remove both the rotation and the entangling gates independently after having selected the layer (_independent dropout_). In this last case, we will have two separate gate drop ratios, i.e. one for the rotation (\(p_{R}\)) and one for the entangling (\(p_{E}\)) gates; the dropout probability applied to each type of gate will be \(p=p_{L}p_{G}\), where \(p_{G}=p_{R}\) for the rotation gates and \(p_{G}=p_{E}\) for the entangling gates.

Table 1: Quantum dropout strategies. Rotation and entangling gates are denoted \(R_{G}\) and \(E_{G}\), respectively. \(p_{G}\) is employed together with \(p_{L}\) to obtain the dropping probability \(p=p_{G}p_{L}\).

| Name | Gates to drop | \(p_{G}\) | Rule |
| --- | --- | --- | --- |
| Canonical | \(R_{G}\), \(E_{G}\) | \(p_{R}\) | Drop a single \(R_{G}\), all previous \(E_{G}\)s with target on \(R_{G}\), and all next \(E_{G}\)s with control on \(R_{G}\). |
| Canonical-forward | \(R_{G}\), \(E_{G}\) | \(p_{R}\) | Drop a single \(R_{G}\) and all next \(E_{G}\)s with control on \(R_{G}\). |
| Rotation | \(R_{G}\) | \(p_{R}\) | Drop a single \(R_{G}\). |
| Entangling | \(E_{G}\) | \(p_{E}\) | Drop a single \(E_{G}\). |
| Independent | \(R_{G}\), \(E_{G}\) | \(p_{R}\), \(p_{E}\) | Drop a single \(R_{G}\) and a single \(E_{G}\). |
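As a concrete illustration of steps 1-2 of the general scheme above (layer selection with \(p_{L}\), gate dropout with \(p_{G}\)), the following minimal sketch samples a keep/drop mask; gates marked as dropped are replaced by identities when the circuit is built. The flat per-layer gate indexing is an assumption for illustration:

```python
import numpy as np

def sample_dropout_mask(n_layers, gates_per_layer, p_L, p_G, rng):
    """Steps 1-2 of the quantum dropout scheme: select layers with
    probability p_L, then drop gates inside selected layers with
    probability p_G, so a given gate is dropped with p = p_L * p_G.
    Keep p below p_max^drop = M_max^drop / M (Eq. (5)) so that the
    QNN stays overparametrized."""
    keep = np.ones((n_layers, gates_per_layer), dtype=bool)
    selected = rng.random(n_layers) < p_L            # step 1: layer selection
    for l in np.flatnonzero(selected):
        dropped = rng.random(gates_per_layer) < p_G  # step 2: gate dropout
        keep[l, dropped] = False                     # dropped gate -> identity
    return keep

rng = np.random.default_rng(42)
mask = sample_dropout_mask(n_layers=10, gates_per_layer=8,
                           p_L=0.3, p_G=0.2, rng=rng)
print(1.0 - mask.mean())   # empirical drop frequency approaches p = 0.06
```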
## IV Results

In this Section, we apply quantum dropout in all its different flavours to avoid overfitting in both regression and classification tasks. We performed three different experiments to evaluate the effectiveness of the proposed quantum dropout methods in a noiseless simulation setting. The first two experiments concern regression tasks, while the third addresses a binary classification problem. Interestingly, unlike classical dropout, quantum dropout does not benefit from parameter rescaling and, consequently, we analyse the learning performances without it. In fact, in Sec. IV.5 we show evidence that scaling the parameters at best gives the same performance as not rescaling at all. More in detail, we address the regression on a dataset generated by a sinusoidal function, which is a common and straightforward benchmark for neural models, and then we test the performance of the dropout on another synthetic dataset generated by the modulus function (reported in Appendix C). Since our general approach includes entangling dropout [44] as a subcase, a performance evaluation of general quantum dropout on the same datasets was appropriate. Finally, in the third experiment, we tackle a more challenging problem related to the binary classification of the Moons dataset retrieved from the scikit-learn library, which has a non-trivial geometric arrangement of data points. We want to investigate the behaviour of different dropout strategies in a non-linear classification setting to experimentally demonstrate the robustness of our proposed methodologies. Afterwards, we discuss the relationship between quantum dropout and features of the QNN circuit like expressibility and entanglement [22; 27].

### Regression of sinusoidal function

In this experiment, we tackle in detail the sin regression problem. The analytical expression of the function under exam is the following:

\[y=\sin(\pi x)+\epsilon\,, \tag{6}\]

where \(x\in[-1,1]\) and \(\epsilon\) is additive white Gaussian noise with amplitude equal to \(0.4\), zero mean, and a standard deviation of \(0.5\). An extensive analysis of the results is illustrated in Fig. 3; for each dropout model, the best configuration of \(p_{L}\), \(p_{R}\) and \(p_{E}\) is reported, where \(p_{L}\) refers to the layer dropout rate, \(p_{R}\) to the rotation dropout rate and \(p_{E}\) to the entangling dropout rate. The Mean Squared Error (MSE) obtained by the model without dropout is almost \(0\) on the training set but remarkably high on the test set, which is a clear indicator of overfitting. On the contrary, all quantum dropout techniques do not exhibit signs of overfitting in their best configurations: the errors on the training set and test set are comparable to each other and far lower than the test error of the overfitting model without dropout, highlighting the effectiveness of these techniques. Independent dropout on average (over \(10\) different runs) achieves the best MSE of \(0.032\) on the test set, followed by canonical-forward and rotation dropout, and is \(71.4\%\) lower than the one achieved by the model without dropout. It is not surprising that independent dropout obtains the best performance, since the search over three hyperparameters (\(p_{L}\), \(p_{R}\), \(p_{E}\)) is more exhaustive (and costly) than for the other methods, leading to a higher-quality dropout. The test error obtained by entangling dropout is \(25.3\%\) higher than that of independent dropout, although it is still comparable. However, the standard deviation of entangling dropout on the test set is large and suggests that this approach may be less robust to random initializations. This may be related to the fact that these gates are not parametrized; in fact, canonical dropout also has a considerable standard deviation.
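The noisy dataset of Eq. (6) can be generated in a few lines; note that combining the stated noise amplitude (0.4) and standard deviation (0.5) as a multiplicative factor is our reading of the text, so the factorization below is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 20
x = rng.uniform(-1.0, 1.0, n_samples)
# assumed noise model: amplitude 0.4 times a zero-mean Gaussian with std 0.5
eps = 0.4 * rng.normal(0.0, 0.5, n_samples)
y = np.sin(np.pi * x) + eps
# 75% / 25% train-test split, as described in Sec. VI
n_train = int(0.75 * n_samples)
x_tr, x_te = x[:n_train], x[n_train:]
y_tr, y_te = y[:n_train], y[n_train:]
```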
For the regression QNN eq. (5) allows to determine that about \(60\%\) of the parameters can be removed while still having an over-parametrized model. Considering the optimal drop ratios in Fig. 3, the best dropout probabilities are within the range of \(5\%\) to \(8\%\). Such a result indicates that small dropout ratios are enough for quantum circuits to avoid overfitting while maintaining the ability to effectively learn from the data. Taking as a reference the independent dropout, namely the best-performing approach among the ones proposed, a further analysis of the results is reported in Fig. 2. Fig. 2a shows how independent dropout actually mitigates overfitting and makes the prediction of the underlying sin function way smoother than the one performed by the overfitting QNN, which in contrast is highly affected by noise. The benefits of applying dropout are also demonstrated by the convergence of the train and test losses in Fig. 2b: the ones of the model with dropout almost converge to the same MSE value after \(1000\) epochs, whereas the train and test losses of the model without dropout exhibit clear signs of overfitting and have very distinct average MSE values of \(1.32\cdot 10^{-5}\) and \(1.11\cdot 10^{-1}\) respectively, which differs by four orders of magnitude. Moreover, the test loss of the model without dropout has also a higher standard deviation compared to the other cases, meaning that it is more sensitive to changes in the input data and may indicate instability in the model. ### Regression of the modulus function Here, we briefly discuss the module regression problem while relegating a more detailed analysis with the relative plots in Appendix C. The function under study is: \[y=|x|-\frac{1}{2}+\epsilon\,, \tag{7}\] where \(x\in\mathbb{R}\) and \(\epsilon\) is still an additive white Gaussian noise with amplitude equal to \(0.3\), zero mean and a standard deviation of \(0.5\). Using quantum dropout helps to achieve better generalization also for this kind of function. In this case, a pronounced standard deviation in the generalization performances is obtained, and this can be attributed to the Gibbs phenomenon [50]. The model without dropout overfits with almost \(0\) training MSE and high test error. Conversely, all quantum dropout techniques demonstrate low and comparable errors, both on the training and test sets, indicating their effectiveness in preventing overfitting compared to the model without dropout. The Independent dropout model still gets the best performance in terms of test error. The standard deviation of all quantum dropout models is lower than the standard deviation of the overfitted model after \(10\) runs, indicating that the quantum dropout models have better generalization performance. As in the sin function experiment, also here the gate drop ratios are very low, typically between \(6\%\) and \(16\%\). ### Classification We now address a more difficult problem, i.e., the binary classification task of the Moons dataset with the addition of a further Gaussian noise with amplitude equal to 1, zero mean and a standard deviation of 0.2. In order to show that quantum dropout actually works for different QNN models, we hereby make use of a different QNN architecture with respect to the one employed for the regression, described in detail in Appendix A. 
The main difference lies in the kind of entangling gates employed, which are parametrized; consequently, all the dropout strategies removing entangling gates will directly have an impact on trainable parameters. In addition, this architecture is composed of 20 layers where only one variational sublayer per data re-uploading is present. This last artifice helps achieve a high level of overfitting with a non-trivial dataset. Following the same steps as in the previous sections, we report in Fig. 5 a comparison of the generalisation capabilities of the model trained with all the dropout strategies in their optimal configuration (the best \(p_{L}\), \(p_{R}\), \(p_{E}\) are displayed) with respect to the standard training procedure. In addition, we show the different performances, expressed as Categorical Cross Entropy (CCE) and Accuracy, between the QNN with the best quantum dropout technique and the one without dropout in Fig. 4. Similarly to the regression case, Fig. 5 reveals that dropout succeeds in removing overfitting in all the employed fashions with comparable performances. Moreover, in this case, we notice good convergence for all the strategies, witnessed by small standard deviation values with respect to the mean value of the loss function (this holds also for accuracy, as a consequence). It has to be noticed that the canonical-forward technique coincides with the canonical one, since the QNN employed in this problem has only one variational sublayer per layer (see Appendix A); this is why in Fig. 5 we only show the canonical dropout performance. Interestingly, the optimal drop probabilities here are smaller than in the regression problems, ranging from 1% to 6%. Independent dropout again achieves the best performance, diminishing the test loss by about 32.5%, whereas the entangling dropout is slightly worse than the others, being the only one reducing the test CCE by less than 30%. The hierarchies are the same also for what concerns the standard deviation: independent dropout has the smallest one, whereas entangling presents the largest standard deviation.

Figure 2: **Regression** The plots illustrate the performances of an overparametrized model employed in a regression task of the sin function. **(a)** The model without dropout overfits the noisy data by fitting each point exactly, whereas with dropout the approximating function is smoother. **(b)** The trend of the loss function when training without dropout shows an increasing prediction error typical of an overfitting model, while with dropout the prediction error does not increase. The standard deviation over 10 different runs is shown as a shadow.

Figure 3: **Regression** Bar chart comparing the final average performances of all the dropout strategies on the sin dataset with their respective optimal hyperparameters. The standard deviation is taken over 10 different runs.

We stress that if, on the one hand, independent dropout is able to achieve the best performance, on the other hand it requires a much more intense parameter search than the other strategies. From Figs. 4a-b one can really understand the effectiveness of quantum dropout in removing overfitting: both the test loss and Accuracy smoothly follow the trend of their respective training counterparts with low standard deviation. Conversely, in the case without dropout, the test loss and Accuracy depart from the training ones after 1000 epochs with high standard deviation.
Here we report these trends only for the overall best technique, but all the other dropout configurations (in terms of \(p_{L}\) and \(p_{G}\)) displayed in Fig. 5 show completely analogous behaviours.

### Entanglement and expressibility

The expressibility and entangling capability [22; 27] of a QNN depend on the structure of the chosen ansatz; in particular, they are strongly related to the kind and number of rotation and entangling gates of which it is composed. The quantum dropout techniques proposed here randomly remove some gates in the circuit during the training process of overparametrized QNNs. In the following, differently from what was conjectured in [44], we show that expressibility and entangling capability are not affected by dropout even for high values of the dropout ratio (see Fig. 6) for the overparametrized QNN proposed in their work and employed in the regression tasks here. This ensures that expressibility and entanglement are also not lowered in the regime with small dropout rates, i.e. removing few gates, in which one usually operates. Expressibility measures the circuit's ability to produce states that are well representative of the Hilbert space, and to quantify it we follow the guidelines drawn in [22]. The entangling capability, instead, is quantified as the mean concentrable entanglement [47] produced, which is a measure of multipartite entanglement directly computed on a quantum computer. In addition, for the sake of completeness, we report in the appendix a similar analysis for the other QNN used in the classification problem, confirming the same findings also for another QNN architecture.

Figure 4: **Classification** The plots illustrate the performances of the second QNN employed in a classification task of the Moons dataset. **(a)** Standard training shows an increasing prediction error, expressed as CCE, typical of an overfitting model, while with dropout the prediction error keeps decreasing. **(b)** A similar behaviour can be found in the accuracy of the predictions. The standard deviation over 10 different runs is shown as a shadow.

Figure 5: **Classification** Bar chart comparing the final average performances of all the dropout strategies on the Moons dataset with their respective optimal hyperparameters. The standard deviation is taken over 10 different runs.

The QNN model we are working with has elevated expressibility (see Appendix D) and is in addition highly overparametrized (see Appendix B). In this regard, applying quantum dropout around the optimal operating regime is very unlikely to reduce the expressibility directly. In fact, the drop ratios in all the possible dropout strategies presented in Fig. 3 are not very high, implying that only a few gates are dropped in each epoch of the training phase. On the contrary, to decrease the expressibility one should affect the parameter dimension \(D\) (defined in Sec. II.2), making the QNN underparametrized; but using eq. (5) with \(M=150\), more than \(M-D_{max}=88\) rotation gates would have to be removed, recalling that \(D_{max}=2^{N+1}-2\) is the maximal parameter dimension and accounts for the minimal number of parameters required to explore the whole Hilbert space. This would correspond to about \(60\%\) of the parameters of the model with \(10\) layers. From Fig.
6a, one can capture that a \(50\%\) dropout does not have an impact on the expressibility of the model, with exception made for the cases with less than \(5\) layers; above this threshold the model is overparametrized and the expressibility has the same value as the model without dropout. In [44] it is argued that quantum dropout outperforms \(L_{1}\) and \(L_{2}\) regularization due to its ability to limit the expressibility of the model. They also suggest that regularization allows full model expressibility, consequently necessitating a higher number of training epochs to reach an optimal solution. Here we show that these assertions are false. To start with, our findings indicate that quantum dropout does not curtail the model's expressibility, as indicated in Fig. 6a. On the contrary, it introduces a form of structured randomness into the quantum model, effectively helping it avoid overfitting without limiting its ability to represent complex functions. Secondly, \(L_{1}\) and \(L_{2}\) regularization methods work by adding a penalty term to the loss function, which encourages the model to keep its parameters small, as highlighted in [44] Sec. 3.4. Therefore, adopting this approach during the training of a QNN may be very restrictive as it does not allow the full exploration of the Hilbert space [38]: by coercing parameter values to zero, regularization may limit the expressibility of the quantum model, which is exactly the opposite of what the authors in [44] suggested. The regression QNN architecture with \(10\) layers produces an average level of multipartite entanglement which is slightly above the Genuine Multipartite Entanglement (GME) threshold (the same holds for the classification QNN as reported in Appendix E), see Fig. 6b. Even in the case of entanglement applying dropout with a probability of about \(50\%\) does not reduce its average level at the depth utilised for the learning task. The final average value of entanglement reached with \(10\) layers is the one expected for random states (Haar states) and this witnesses again that the QNN has high expressibility and that quantum dropout does not hinder expressibility. For both expressibility and entanglement, we estimated average and standard deviation from \(15000\) states. See Appendices D and E for further details on these calculations. So, a natural question will be: why does the quantum dropout work if it does not modify the QNN features on average? To understand this we have to interpret the quantum dropout as sampling a different sub-architecture from the original QNN at each training step, leading to training a whole ensemble at the same time. Just like classical dropout, this prevents hidden units (rotations in the QNN) from relying too much on their neighbourhoods. However, the rotations belonging to the same layer (or sub-layer) still have different roles in changing the state. Consequently, quantum dropout does not work well with excessively high drop ratios because, in that case, we will have very different models producing different results at each epoch and this will definitely not help the optimization process. ### Parameters scaling To completely match classical and quantum dropout, one should scale the weights in the layers where dropout was applied by a quantity \(s=1-p\), i.e. the probability of the computational units being active during training [6]. 
This ensures that the expected output of a computational unit during inference is the same as its expected output during training, i.e., it guarantees that the average output is the same for both training and inference. In classical NNs, this scaling helps maintain the statistical properties of the network and ensures that the network produces consistent and reliable results during inference. We now show that for deep QNNs the scaling factor \(s\) is not the same as in the classical case. Interestingly, our experiments have shown that by _not rescaling_ the parameters one achieves (on average) better results compared to the ones obtained by scaling with the classical linear scaling factor. To illustrate this finding, we have included a comparison of the test errors obtained with and without rescaled parameters in our experiments, as shown in Fig. 7a, c, e. We find either negligible differences or no discernible change in the error rate across all quantum dropout strategies; the only exception is canonical-forward in the module regression task, which nevertheless shows a very small but not statistically significant improvement with classical rescaling. This general behaviour may be related to the fact that any change on one qubit could affect all the others, since we are working with qubits that in principle share multipartite entanglement, as explained above. Further details on the entanglement of the employed ansatz can be found in Appendix E. In addition, in a QNN we consider the rotations as neurons but, in practice, the units of computation are only the \(N\) qubits on which we act multiple times, and this is a strong difference from classical NNs. Taking these two facts into account, one intuitive explanation could be that if we scale a parameter, the scaling factor is propagated across the whole structure to some extent multiplicatively. In this picture, to obtain the same effect as the classical scaling one might apply a scaling factor which is the \(k\)-th root of the classical one, where \(k\) is some real number possibly dependent on the concentrable entanglement [47] in the QNN, which is directly related to the number of qubits \(N\). Since \(p\leq 1\), taking \(k\gg 1\) implies \(1/k\approx 0\), which leads to

\[s=(1-p)^{1/k}\approx 1\,, \tag{8}\]

corresponding to not scaling the parameters. We extended our analysis by exploring the use of a \(k\)-th root attenuating factor for rescaling, as shown in Fig. 7b, d, f. These results confirm our previous finding that rescaling does not provide any significant advantage in terms of generalization, because the general trend for all the quantum dropout strategies is that performance improves as \(k\) grows, i.e. as \(s\) tends to 1. The canonical-forward strategy in the regression tasks is the only exception to this trend: we observed a slight yet not significant improvement in performance at \(k=2\) for the sin dataset and at \(k=1\) for the module dataset. In order to fully comprehend the mechanism behind quantum dropout and to unlock its potential as in the classical case, future works should dive deeper into the relationship between parameter rescaling and entanglement within the QNN model.

## V Conclusions

Starting from the comparison between classical and quantum NNs, we have proposed a general approach to exploit the dropout technique to avoid overfitting in learning with QNNs. In this framework, depending on the dropped unitary operations, we define and apply different quantum dropout strategies.
Our results generalize and include special cases that are already present in the literature, such as entangling dropout [44]. In particular, our results highlight the importance of dropping parametrized operations, leading to improved generalization performance with respect to both the case without dropout and entangling dropout. In addition, aware of the importance of having an overparametrized QNN, we provide guidelines on how to determine the maximally allowed dropout probability. Moreover, we investigate for the first time the detailed behaviour of genuine QNN features upon the application of quantum dropout. Our analysis reveals that these techniques seem to best mitigate overfitting with moderate dropout probabilities, differently from the classical case where typical probability values are of the order of 50-80%. In this working regime, quantum dropout reduces neither expressibility nor entanglement, as the QNN is still highly overparametrized. These findings are in contrast with conjectures previously made in the literature about the effectiveness of entangling dropout [44]. Last but not least, we show that quantum dropout does not require parameter rescaling, differently from the classical dropout technique applied to NNs. Interestingly, the performance is often actually worse when the variational parameters are rescaled.

Figure 6: **(a)** Expressibility (in log scale) of the QNN w.r.t. the employed layers. After a few layers, the expressibility is approximately the same for all dropout strategies. **(b)** Degree of concentrable entanglement (CE) produced on average by the QNN w.r.t. the employed layers. The CE soon reaches a plateau, witnessing the presence of a non-trivial amount of GME in the QNN. Even in the case of high drop probability (\(\sim 50\%\)), the mean entanglement produced by the QNN with 10 layers is the same for different dropout strategies.

Figure 7: **Parameters scaling** Bar chart comparing the best final performances of the QNNs without and with rescaling of the trained parameters on the **(a)** sin dataset, **(c)** module dataset and **(e)** moon dataset. Standard deviation over 10 runs. **(b)**, **(d)** Average MSE and **(f)** average CCE trend when scaling the trained parameters with \(s=(1-p)^{1/k}\) as a function of \(k\).

In conclusion, our findings show that, once the QNN model is fixed, slight random modifications of the structure during the training procedure enable generalization when making inferences with the original QNN model. This work may pave the way for the efficient employment of deep Quantum Machine Learning models, thanks to a robust training methodology that encourages generalization. Future works may investigate the theoretical generalization properties of quantum dropout in more detail, as well as its effectiveness in experiments on real quantum hardware. Also, the scaling of the performance with an increasing number of qubits and the relationship between parameter rescaling and the entanglement produced by the QNN are worth further investigation.

## VI Methods

To assess the effectiveness of our approach, we conduct two regression tasks and one binary classification task. In every regression experiment, the corresponding dataset (with 20 data samples) was divided into 75% train samples and 25% test samples, whereas for classification we have 50 data samples split into 40% train and 60% test in order to emphasize the overfitting phenomenon.
In a data preprocessing phase, the raw data were scaled to the range \([-1,1]\) to best suit the inputs and outputs of the QNNs; the scaler was fitted using training data only. To ensure good and fast convergence, the models were trained with the Adam optimization algorithm [51], and a thorough grid search was conducted to tune the training hyperparameters. For the regression tasks, we observed that different hyperparameters led to similar results and did not substantially impact the outcomes of the experiments. As a consequence, and in order to make a fair comparison among all the QNN models, the hyperparameters remained the same for these experiments. The learning rate was set to 0.01, the number of training epochs was set to 1000, and the QNN was made up of 10 layers for regression. On the other hand, for the classification problem, the number of epochs was set to 5000, whereas the number of QNN layers was increased to 20 due to both the learning task and the different structure of the single layer, as already mentioned in Sec. IV.3. The space of drop rates was extensively analysed in the case of the sin dataset through a grid search with \(p_{L}\in[0.1,0.2,\ldots,0.7]\) and \(p_{G}\in[0.1,0.2,\ldots,0.9]\). This choice of hyperparameters was dictated by eq. (5) for the regression QNN, for which one can safely drop up to 60% of the parameters (see Appendix B). For the other two datasets, we conducted a more limited and coarse-grained analysis, since we noticed that close drop probabilities led to similar performance and that dropping many gates hindered trainability. The MSE was selected as the loss function to train the networks and as the error metric to evaluate the performance of the models in the regression tasks, as it is more sensitive to larger errors and is a standard error metric in supervised learning. CCE was used as the loss function for the classification task instead, while the Accuracy score was employed as the main metric to assess the goodness of the classification. In order to prove the robustness of our approach against random parameter initializations, we performed 10 runs of the algorithms for every test case. The QNN models were implemented in Python 3.8 with PennyLane [52], a framework that supports local simulations of quantum circuits and integration with NN optimization tools. To improve simulation times, we employed the JAX [53] linear algebra framework as the simulation backend. JAX is a software library for high-performance ML research which guarantees a fast and efficient way to execute quantum circuits through automatic differentiation and Just-in-time (JIT) circuit compilation. Due to the difficulty of simulating deep quantum circuits with a high degree of entanglement on local devices, JIT compilation has been particularly beneficial for the simulation of our experiments. A machine equipped with an AMD Ryzen 7 5800X 8-core CPU at 3.80 GHz and with 64 GB of RAM was used for the experiments.
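A minimal sketch of the data preparation described above (Moons data, 40%/60% split, inputs scaled to \([-1,1]\) with the scaler fitted on training data only); the seed is arbitrary, and folding the extra Gaussian noise into the `noise` argument of `make_moons` is an assumption:

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler

# Moons dataset with Gaussian noise (std 0.2), 50 samples in total
X, y = make_moons(n_samples=50, noise=0.2, random_state=0)

# 40% train / 60% test split to emphasize overfitting
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.4, random_state=0)

# scale to [-1, 1]; the scaler is fitted on the training data only
scaler = MinMaxScaler(feature_range=(-1, 1)).fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)
```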
2302.12553
Lower Bounds on the Depth of Integral ReLU Neural Networks via Lattice Polytopes
We prove that the set of functions representable by ReLU neural networks with integer weights strictly increases with the network depth while allowing arbitrary width. More precisely, we show that $\lceil\log_2(n)\rceil$ hidden layers are indeed necessary to compute the maximum of $n$ numbers, matching known upper bounds. Our results are based on the known duality between neural networks and Newton polytopes via tropical geometry. The integrality assumption implies that these Newton polytopes are lattice polytopes. Then, our depth lower bounds follow from a parity argument on the normalized volume of faces of such polytopes.
Christian Haase, Christoph Hertrich, Georg Loho
2023-02-24T10:14:53Z
http://arxiv.org/abs/2302.12553v1
# Lower Bounds on the Depth of Integral ReLU Neural Networks via Lattice Polytopes ###### Abstract We prove that the set of functions representable by ReLU neural networks with integer weights strictly increases with the network depth while allowing arbitrary width. More precisely, we show that \(\lceil\log_{2}(n)\rceil\) hidden layers are indeed necessary to compute the maximum of \(n\) numbers, matching known upper bounds. Our results are based on the known duality between neural networks and Newton polytopes via tropical geometry. The integrality assumption implies that these Newton polytopes are lattice polytopes. Then, our depth lower bounds follow from a parity argument on the normalized volume of faces of such polytopes. ## 1 Introduction Classical results in the area of understanding the expressivity of neural networks are so-called _universal approximation theorems_(Cybenko, 1989; Hornik, 1991). They state that shallow neural networks are already capable of approximately representing every continuous function on a bounded domain. However, in order to gain a complete understanding of what is going on in modern neural networks, we would also like to answer the following question: what is the precise set of functions we can compute _exactly_ with neural networks of a certain depth? For instance, insights about exact representability have recently boosted our understanding of the computational complexity to train neural networks in terms of both, algorithms (Arora et al., 2018; Khalife and Basu, 2022) and hardness results (Goel et al., 2021; Froese et al., 2022; Bertschinger et al., 2022). Arguably, the most prominent activation function nowadays is the _rectified linear unit_ (ReLU) (Glorot et al., 2011; Goodfellow et al., 2016). While its popularity is primarily fueled by intuition and empirical success, replacing previously used smooth activation functions like sigmoids with ReLUs has some interesting implications from a mathematical perspective: suddenly methods from discrete geometry studying piecewise linear functions and polytopes play a crucial role in understanding neural networks (Arora et al., 2018; Zhang et al., 2018; Hertrich et al., 2021) supplementing the traditionally dominant analytical point of view. A fundamental result in this direction is by Arora et al. (2018), who show that a function is representable by a ReLU neural network if and only if it is _continuous and piecewise linear_ (CPWL). Moreover, their proof implies that \(\lceil\log_{2}(n+1)\rceil\) many hidden layers are sufficient to represent every CPWL function with \(n\)-dimensional input. A natural follow-up question is the following: is this logarithmic number of layers actually necessary or can shallower neural networks already represent all CPWL functions? Hertrich et al. (2021) conjecture that the former alternative is true. More precisely, if \(\mathrm{ReLU}_{n}(k)\) denotes the set of CPWL functions defined on \(\mathbb{R}^{n}\) and computable with \(k\) hidden layers, the conjecture can be formulated as follows: **Conjecture 1** (Hertrich et al. (2021)).: \(\mathrm{ReLU}_{n}(k-1)\subsetneq\mathrm{ReLU}_{n}(k)\) _for all \(k\leq\lceil\log_{2}(n+1)\rceil\)._ Note that \(\mathrm{ReLU}_{n}(\lceil\log_{2}(n+1)\rceil)\) is the entire set of CPWL functions defined on \(\mathbb{R}^{n}\) by the result of Arora et al. (2018). While Hertrich et al. (2021) provide some evidence for their conjecture, it remains open for every input dimension \(n\geq 4\). 
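Before turning to the lower-bound side, it is worth recalling why the logarithmic upper bound is attainable even with integer weights: the identity \(\max\{a,b\}=\mathrm{ReLU}(b-a)+\mathrm{ReLU}(a)-\mathrm{ReLU}(-a)\) uses only weights in \(\{-1,0,1\}\), so a binary tree of such pairwise maxima computes \(\max\{0,x_{1},\ldots,x_{n}\}\) with \(\lceil\log_{2}(n+1)\rceil\) hidden layers. The following minimal sketch (our illustration of this construction, not code from the paper) evaluates such a network:

```python
import numpy as np

def relu(t):
    return np.maximum(t, 0.0)

def pairwise_max_layer(v):
    """One integer-weight hidden layer of ReLUs computing pairwise maxima:
    max(a, b) = ReLU(b - a) + ReLU(a) - ReLU(-a); the +/-1 combinations are
    absorbed into the next layer's affine maps."""
    a, b = v[0::2], v[1::2]
    return relu(b - a) + relu(a) - relu(-a)

def max_net(x):
    """max{0, x_1, ..., x_n} with ceil(log2(n+1)) integer-weight hidden layers."""
    v = np.concatenate(([0.0], np.asarray(x, dtype=float)))
    # pad with zeros (harmless, since 0 is already among the arguments)
    k = int(np.ceil(np.log2(len(v))))
    v = np.concatenate((v, np.zeros(2 ** k - len(v))))
    for _ in range(k):          # one tree level = one hidden layer
        v = pairwise_max_layer(v)
    return v[0]

print(max_net([0.5, -0.2, 0.3, -1.0]))   # 0.5, computed with k = 3... no: k = ceil(log2(5)) = 3 layers
```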
Even more drastically, there is not a single CPWL function known for which one can prove that two hidden layers are not sufficient to represent it. Even for a function as simple as \(\max\{0,x_{1},x_{2},x_{3},x_{4}\}\), it is unknown whether two hidden layers are sufficient. In fact, \(\max\{0,x_{1},x_{2},x_{3},x_{4}\}\) is not just an arbitrary example. Based on a result by Wang & Sun (2005), Hertrich et al. (2021) show that their conjecture is equivalent to the following statement. **Conjecture 2** (Hertrich et al. (2021)).: _For \(n=2^{k}\), the function \(\max\{0,x_{1},\ldots,x_{n}\}\) is not contained in \(\mathrm{ReLU}_{n}(k)\)._ This reformulation gives rise to interesting interpretations in terms of two elements commonly used in practical neural network architectures: _max-pooling_ and _maxout_. Max-pooling units are used between (ReLU or other) layers and simply output the maximum of several inputs (that is, "pool" them together). They do not contain trainable parameters themselves. In contrast, maxout networks are an alternative to (and in fact a generalization of) ReLU networks. Each neuron in a maxout network outputs the maximum of several (trainable) affine combinations of the outputs in the previous layer, in contrast to comparing a single affine combination with zero as in the ReLU case. Thus, the conjecture would imply that one needs in fact logarithmically many ReLU layers to replace a max-pooling unit or a maxout layer, providing a theoretical justification that these elements are indeed more powerful than pure ReLU networks. ### Our Results In this paper we prove that the conjecture by Hertrich et al. (2021) is true for all \(n\in\mathbb{N}\) under the additional assumption that all weights in the neural network are restricted to be integral. In other words, if \(\mathrm{ReLU}_{n}^{\mathbb{Z}}(k)\) is the set of functions defined on \(\mathbb{R}^{n}\) representable with \(k\) hidden layers and only integer weights, we show the following. **Theorem 3**.: _For \(n=2^{k}\), the function \(\max\{0,x_{1},\ldots,x_{n}\}\) is not contained in \(\mathrm{ReLU}_{n}^{\mathbb{Z}}(k)\)._ Proving Theorem 3 is our main contribution. The overall strategy is highlighted in Section 1.2. We put all puzzle pieces together and provide a formal proof in Section 4. The arguments in Hertrich et al. (2021) can be adapted to show that the equivalence between the two conjectures is also valid in the integer case. Thus, we obtain that adding more layers to an integral neural network indeed increases the set of representable functions, up to a logarithmic number of layers. A formal proof can be found in Section 4. **Corollary 4**.: \(\mathrm{ReLU}_{n}^{\mathbb{Z}}(k-1)\subsetneq\mathrm{ReLU}_{n}^{\mathbb{Z}}(k)\) _for all \(k\leq\lceil\log_{2}(n+1)\rceil\)._ To the best of our knowledge, our result is the first non-constant (namely logarithmic) lower bound on the depth of ReLU neural networks without any restriction on the width. Without the integrality assumption, the best known lower bound remains two hidden layers (Mukherjee & Basu, 2017), which is already valid for the simple function \(\max\{0,x_{1},x_{2}\}\). While the integrality assumption is rather implausible for practical neural network applications, where weights are usually tuned by gradient descent, from the perspective of analyzing the theoretical expressivity the assumption is arguably plausible. To see this, suppose a ReLU network represents the function \(\max\{0,x_{1},\ldots,x_{n}\}\). 
Then every fractional weight must either cancel out or add up to an integer together with other fractional weights, because every linear piece in the final function has only integer coefficients. Hence, it makes sense to assume that no fractional weights exist in the first place. However, unfortunately, this intuition cannot easily be turned into a proof, because it might happen that combinations of fractional weights yield integer coefficients which could not be achieved without fractional weights. ### Outline of the Argument The first ingredient of our proof is to apply previous work about connections between neural networks and tropical geometry, initiated by Zhang et al. (2018). Every CPWL function can be decomposed as a difference of two convex CPWL functions. Convex CPWL functions admit a neat duality to certain polytopes, known as _Newton polytopes_ in tropical geometry. Given any neural network, this duality makes it possible to construct a pair of Newton polytopes which uniquely determines the CPWL function represented by the neural network. These Newton polytopes are constructed, layer by layer, from points by alternatingly taking Minkowski sums and convex hulls. The number of alternations corresponds to the depth of the neural network. Thus, analyzing the set of functions representable by neural networks with a certain depth is equivalent to understanding which polytopes can be constructed in this manner with a certain number of alternations (Theorem 8). Having translated the problem into the world of polytopes, the second ingredient is to gain a better understanding of the two operations involved in the construction of these polytopes: Minkowski sums and convex hulls. We show for each of the two operations that the result can be subdivided into polytopes of simpler structure: For the Minkowski sum of two polytopes, each cell in the subdivision is an _affine product_ of faces of the original polytopes, that is, a polytope which is affinely equivalent to a Cartesian product (Proposition 9). For the convex hull of two polytopes, each cell is a _join_ of two faces of the original polytopes, that is, a convex hull of two faces whose affine hulls are _skew_ to each other, which means that they do not intersect and do not have parallel directions (Proposition 10). Finally, with these subdivisions at hand, our third ingredient is the volume of these polytopes. Thanks to our integrality assumption, all coordinates of all vertices of the relevant polytopes are integral, that is, these polytopes are _lattice polytopes_. For lattice polytopes, one can define the so-called _normalized volume_ (Section 2.3). This is a scaled version of the Euclidean volume, with a scaling factor depending on the affine hull of the polytope. It is constructed in such a way that it is integral for all lattice polytopes. Using the previously obtained subdivisions, we show that every face of dimension at least \(2^{k}\) of a polytope corresponding to a neural network with at most \(k\) hidden layers has even normalized volume (Theorem 16). This implies that not all lattice polytopes can be constructed this way. Using again the tropical geometry inspired duality between polytopes and functions (Theorem 8), we translate this result back to the world of CPWL functions computed by neural networks and obtain that \(k\) hidden layers are not sufficient to compute \(\max\{0,x_{1},\ldots,x_{2^{k}}\}\), proving Theorem 3. 
### Beyond Integral Weights Given the integrality assumption, a natural thought is whether one can simply generalize our results to rational weights by multiplying all weights of the neural network by the common denominator. Unfortunately, this does not work. Our proof excludes that an integral ReLU network with \(k\) hidden layers can compute \(\max\{0,x_{1},\ldots,x_{2^{k}}\}\), but it does not exclude the possibility to compute \(2\cdot\max\{0,x_{1},\ldots,x_{2^{k}}\}\) with integral weights. In particular, dividing the weights of the output layer by two might result in a half-integral neural network computing \(\max\{0,x_{1},\ldots,x_{2^{k}}\}\). In order to tackle the conjecture in full generality, that is, for arbitrary weights, arguing via the parity of volumes seems not to be sufficient. The parity argument is inherently discrete and suffers from the previously described issue. Nevertheless, we are convinced that the techniques of this paper do in fact pave the way towards resolving the conjecture in full generality. In particular, the subdivisions constructed in Section 3 are valid for arbitrary polytopes and not only for lattice polytopes. Hence, it is conceivable that one can replace the parity of normalized volumes with a different invariant which, applied to the subdivisions, yields a general depth separation for the non-integer case. ### Further Relations to Previous Work Similar to our integrality assumption, Hertrich et al. (2021) also use an additional assumption and prove their conjecture in a special case, namely for so-called _\(H\)-conforming_ neural networks. Let us note that the two assumptions are incomparable: there are \(H\)-conforming networks with non-integral weights as well as integral neural networks which are not \(H\)-conforming. Our results are related to (but conceptually different from) so-called _trade-off_ results between depth and width, showing that a slight decrease of depth can exponentially increase the required width to maintain the same (exact or approximate) expressive power. Telgarsky (2015, 2016) proved the first results of this type, and Eldan & Shamir (2016) even proved an exponential separation between two and three layers. Lots of other improvements and related results have been established (Arora et al., 2018; Daniely, 2017; Hanin, 2019; Hanin & Sellke, 2017; Liang & Srikant, 2017; Nguyen et al., 2018; Raghu et al., 2017; Safran & Shamir, 2017; Safran et al., 2019; Vardi et al., 2021; Yarotsky, 2017). In contrast, we focus on exact representations, where we show an even more drastic trade-off: decreasing the depth from logarithmic to sublogarithmic implies that no finite width is sufficient any more. The duality between CPWL functions computed by neural networks and Newton polytopes inspired by tropical geometry has been used in several other works about neural networks before (Maragos et al., 2021), for example to analyze the shape of decision boundaries (Alfarra et al., 2020; Zhang et al., 2018) or to count and bound the number of linear pieces (Charisopoulos & Maragos, 2018; Hertrich et al., 2021; Montufar et al., 2022). ## 2 Preliminaries We write \([n]\coloneqq\{1,2,\ldots,n\}\) for the set of natural numbers up to \(n\) (without zero). 
### ReLU Neural Networks For any \(n\in\mathbb{N}\), let \(\sigma\colon\mathbb{R}^{n}\to\mathbb{R}^{n}\) be the component-wise _rectifier_ function \[\sigma(x)=(\max\{0,x_{1}\},\max\{0,x_{2}\},\ldots,\max\{0,x_{n}\}).\] For any _number of hidden layers_\(k\in\mathbb{N}\), a \((k+1)\)_-layer feedforward neural network with rectified linear units_ (ReLU neural network) is given by \(k+1\) affine transformations \(T^{(\ell)}\colon\mathbb{R}^{n_{\ell-1}}\to\mathbb{R}^{n_{\ell}}\), \(x\mapsto A^{(\ell)}x+b^{(\ell)}\), for \(\ell\in[k+1]\). It is said to _compute_ or _represent_ the function \(f\colon\mathbb{R}^{n_{0}}\to\mathbb{R}^{n_{k+1}}\) given by \[f=T^{(k+1)}\circ\sigma\circ T^{(k)}\circ\sigma\circ\cdots\circ T^{(2)}\circ \sigma\circ T^{(1)}.\] The matrices \(A^{(\ell)}\in\mathbb{R}^{n_{\ell}\times n_{\ell-1}}\) are called the _weights_ and the vectors \(b^{(\ell)}\in\mathbb{R}^{n_{\ell}}\) are the _biases_ of the \(\ell\)-th layer. The number \(n_{\ell}\in\mathbb{N}\) is called the _width_ of the \(\ell\)-th layer. The maximum width of all hidden layers \(\max_{\ell\in[k]}n_{\ell}\) is called the _width_ of the neural network. Further, we say that the neural network has _depth_\(k+1\) and _size_\(\sum_{\ell=1}^{k}n_{\ell}\). Often, neural networks are represented as layered, directed, acyclic graphs where each dimension of each layer (including the _input layer_\(\ell=0\) and the _output layer_\(\ell=k+1\)) is one vertex, weights are arc labels, and biases are node labels. The vertices of this graph are called _neurons_. Each neuron computes an affine transformation of the outputs of its predecessors, applies the ReLU function on top of that, and outputs the result. ### Polytopes & Lattices We give a brief overview of necessary notions for polytopes and lattices and refer to (Schrijver, 1986, Ch. 4, 7-9) for further reading. For two arbitrary sets \(X,Y\subseteq\mathbb{R}^{n}\), one can define their _Minkowski sum_\(X+Y=\{x+y\mid x\in X,\,y\in Y\}\). By \(\mathrm{Span}(X)\) we denote the usual linear hull of the set \(X\), that is, the smallest linear subspace containing \(X\). The affine hull \(\mathrm{Aff}(X)\) is the smallest affine subspace containing \(X\). Finally, with a set \(X\) we associate a linear space \(\mathrm{Lin}(X)=\mathrm{Span}\left\{x-y\mid x,y\in X\right\}\), which is \(\mathrm{Aff}(X)\) shifted to the origin. Note that \(\mathrm{Lin}(X)\) is usually a strict subspace of \(\mathrm{Span}(X)\), unless \(\mathrm{Aff}(X)\) contains the origin, in which case all three notions coincide. For a set \(X\subset\mathbb{R}^{n}\), its _convex hull_ is \[\mathrm{conv}(X)=\left\{\sum_{s\in S}\lambda_{s}s\mid S\subseteq X\text{ finite},\lambda_{s}\geq 0\text{ for all }s\in S,\sum_{s\in S}\lambda_{s}=1\right\}\enspace.\] A _polytope_ is the convex hull of a finite set \(V\subset\mathbb{R}^{n}\). Given a polytope \(P\subset\mathbb{R}^{n}\), a _face_ of \(P\) is a set of the form \(\arg\min\left\{c^{\top}x\mid x\in P\right\}\) for some \(c\in\mathbb{R}^{n}\); a face of a polytope is again a polytope. The _dimension_\(\dim(P)\) of \(P\) is the dimension of \(\mathrm{Lin}(P)\). By convention, we also call the empty set a face of \(P\) of dimension \(-1\). A face of dimension \(0\) is a _vertex_ and a face of dimension \(\dim(P)-1\) is a _facet_. If all vertices belong to \(\mathbb{Q}^{n}\), we call \(P\)_rational_; even more, \(P\) is a _lattice polytope_ if all its vertices are _integral_, that is, they lie in \(\mathbb{Z}^{n}\). 
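As a quick illustration of this definition, a forward pass alternates affine maps with the component-wise rectifier; the sketch below (ours, with hypothetical parameter lists) evaluates \(f=T^{(k+1)}\circ\sigma\circ\cdots\circ\sigma\circ T^{(1)}\):

```python
import numpy as np

def relu_network(weights, biases, x):
    """Evaluate f = T(k+1) o sigma o T(k) o ... o sigma o T(1) at x,
    where T(l)(x) = A[l] x + b[l]."""
    for A, b in zip(weights[:-1], biases[:-1]):
        x = np.maximum(A @ x + b, 0.0)   # hidden layer: affine map + rectifier
    return weights[-1] @ x + biases[-1]  # output layer: affine map only
```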
An important example of a (lattice) polytope is the _simplex_\(\Delta_{0}^{n}=\operatorname{conv}\{0,e_{1},e_{2},\ldots,e_{n}\}\subseteq \mathbb{R}^{n}\) spanned by the origin and all \(n\) standard basis vectors. Finally, we define _(regular polyhedral) subdivisions_ of a polytope. Let \(\tilde{P}\subset\mathbb{R}^{n+1}\) be a polytope and let \(P\subset\mathbb{R}^{n}\) be its image under the projection \(\operatorname{proj}_{[n]}\colon\mathbb{R}^{n+1}\to\mathbb{R}^{n}\) forgetting the last coordinate. A _lower face_ of \(\tilde{P}\) is a set of minimizers with respect to a linear objective \((c^{\top},1)\in\mathbb{R}^{n}\times\mathbb{R}\). The projection of the lower faces of \(\tilde{P}\) to \(P\subset\mathbb{R}^{n}\) forms a _subdivision_ of \(P\), that is, a collection of polytopes which intersect in common faces and cover \(P\). An example of a subdivision of a polytope in the plane is given in Figure 1. A _lattice_ is a subset of \(\mathbb{R}^{n}\) of the form \[L=\{B\cdot z\mid z\in\mathbb{Z}^{p}\}\] for some matrix \(B\in\mathbb{R}^{n\times p}\) with linearly independent columns. In this case, we call the columns \(b^{(1)},\ldots,b^{(p)}\) of \(B\) a _lattice basis_. Every element in \(L\) can be written uniquely as a linear combination of \(b^{(1)},\ldots,b^{(p)}\) with integer coefficients. A choice of lattice basis identifies \(L\) with \(\mathbb{Z}^{p}\). The classic example of a lattice is \(\mathbb{Z}^{n}\) itself. In this paper, we will only work with lattices that arise as \[\{x\in\mathbb{Z}^{n}\mid A\cdot x=0\}\enspace,\] for some \(A\in\mathbb{Q}^{q\times n}\), that is, the intersection of \(\mathbb{Z}^{n}\) with a rational subspace of \(\mathbb{R}^{n}\). For example, the vectors \(\binom{2}{1},\binom{1}{2}\) form a basis of \(\mathbb{R}^{2}\) as a vector space, but they do not form a lattice basis of \(\mathbb{Z}^{2}\). Instead, they generate the smaller lattice \(\{x\in\mathbb{Z}^{2}\mid 3\text{ divides }x_{1}+x_{2}\}\). Choosing two bases for the same lattice yields a change-of-bases matrix with integral entries whose inverse is also integral. It must therefore have determinant \(\pm 1\). We will mainly deal with _affine lattices_, that is, sets of the form \(K=L+v\) where \(L\subset\mathbb{R}^{n}\) is a lattice and \(v\in\mathbb{R}^{n}\). Then, \(b_{0},\ldots,b_{r}\in K\) form an _affine lattice basis_ if \(b_{1}-b_{0},\ldots,b_{r}-b_{0}\) is a (linear) lattice basis of the linear lattice \(L=K-K\) parallel to \(K\), that is, every element of \(K\) is a unique affine combination of \(b_{0},\ldots,b_{r}\) with integral coefficients. ### Normalized volume The main tool in our proof is the _normalized volume_ of faces of (Newton) polytopes. We measure the volume of a (possibly lower-dimensional) rational polytope \(P\subset\mathbb{R}^{n}\) inside its affine hull \(\operatorname{Aff}(P)\). For this, we choose a lattice basis of \(\operatorname{Lin}(P)\cap\mathbb{Z}^{n}\) inside the linear space \(\operatorname{Lin}(P)\) parallel to \(\operatorname{Aff}(P)\). Since \(P\) is a rational polytope, this lattice basis gives rise to a linear transform from \(\operatorname{Lin}(P)\) onto \(\mathbb{R}^{r}\) (\(r=\dim P\)) by mapping the lattice basis vectors to the standard unit vectors. Now, the _normalized volume_\(\operatorname{Vol}(P)\) of \(P\) is \(r!\) times the Euclidean volume of its image under this transformation. The scaling factor \(r!\) is chosen so that, for each \(n\geq 1\), the normalized volume of the simplex \(\Delta_{0}^{n}\) is one. 
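Two easy special cases may help build intuition for the normalized volume; the sketch below is our illustration, not code from the paper. For a full-dimensional lattice simplex, \(\operatorname{Vol}\) equals the absolute determinant of the edge vectors at one vertex (the factor \(r!\) cancels the \(1/r!\) in the Euclidean simplex volume); for a lattice segment, it is the gcd of the coordinate differences. General lower-dimensional polytopes additionally require a lattice basis of \(\operatorname{Lin}(P)\cap\mathbb{Z}^{n}\) (e.g. via a Smith normal form), which we do not implement:

```python
import numpy as np

def normalized_volume_simplex(vertices):
    # Full-dimensional lattice simplex in R^n with n+1 vertices:
    # Vol = |det(v1 - v0, ..., vn - v0)|.
    V = np.asarray(vertices, dtype=np.int64)
    return abs(round(np.linalg.det((V[1:] - V[0]).T.astype(float))))

def normalized_volume_segment(a, b):
    # Lattice segment: Vol = gcd of the coordinate differences,
    # i.e. the number of lattice steps from a to b.
    d = np.abs(np.asarray(b, dtype=np.int64) - np.asarray(a, dtype=np.int64))
    return int(np.gcd.reduce(d))

assert normalized_volume_simplex([(0, 0), (1, 0), (0, 1)]) == 1  # Delta_0^2
assert normalized_volume_segment((0, 0), (2, 4)) == 2
```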
Figure 1: A regular subdivision: the quadrilateral in \(\mathbb{R}^{2}\) is subdivided into six full-dimensional cells which arise as projections of lower faces of the convex hull of the lifted points in \(\mathbb{R}^{3}\).

The fact that any two lattice bases differ by an integral matrix with integral inverse (and hence determinant \(\pm 1\)) ensures that this is well defined. For full-dimensional polytopes, this yields just a scaled version of the Euclidean volume. But for lower-dimensional polytopes, our normalization with respect to the lattice differs from the Euclidean normalization (cf. Figure 2). For us, the crucial property is that every lattice polytope has integral volume (see, e.g., (Beck & Robins, 2007, §3.5, §5.4)). This will allow us to argue using divisibility properties of volumes. We give visualizations of the following fundamental statements and defer the actual proofs to the appendix. **Lemma 5**.: _Let \(P,Q\subset\mathbb{R}^{n}\) be two lattice polytopes with \(i=\dim(P)\) and \(j=\dim(Q)\). If we have \(\dim(P+Q)=\dim(P)+\dim(Q)\), then_ \[\binom{i+j}{i}\cdot\operatorname{Vol}(P)\cdot\operatorname{Vol}(Q)\;\;\text{ divides}\;\;\operatorname{Vol}(P+Q)\,.\] If \(P\) and \(Q\) fulfill the assumptions of Lemma 5, then we call \(P+Q\) an _affine product_ of \(P\) and \(Q\). This is equivalent to the definition given in the introduction; we visualize the Cartesian product \(P\times Q\) for comparison in Figure 3. **Lemma 6**.: _Let \(P,Q\subset\mathbb{R}^{n}\) be two lattice polytopes with \(i=\dim(P)\) and \(j=\dim(Q)\). If \(\dim(\operatorname{conv}(P\cup Q))=\dim(P)+\dim(Q)+1\), then_ \[\operatorname{Vol}(P)\cdot\operatorname{Vol}(Q)\;\;\text{divides}\;\; \operatorname{Vol}(\operatorname{conv}(P\cup Q))\,.\] If \(P\) and \(Q\) fulfill the assumptions of Lemma 6, then we call \(\operatorname{conv}(P\cup Q)\) the _join_ of \(P\) and \(Q\), compare Figure 4. This definition is equivalent to the one given in the introduction. ### Newton Polytopes and Neural Networks The first ingredient to prove our main result is to use a previously discovered duality between CPWL functions and polytopes, inspired by tropical geometry, in order to translate the problem into the world of polytopes. As a first step, let us observe that we may restrict ourselves to neural networks without biases. For this, we say a function \(g\colon\mathbb{R}^{n}\to\mathbb{R}^{m}\) is _positively homogeneous_ if it satisfies \(g(\lambda x)=\lambda g(x)\) for all \(\lambda\geq 0\). **Lemma 7** (Hertrich et al. (2021)).: _If a neural network represents a positively homogeneous function, then the same neural network with all bias parameters \(b^{(\ell)}\) set to zero represents exactly the same function._ Since \(\max\{0,x_{1},\dots,x_{2^{k}}\}\) is positively homogeneous, this implies that we can focus on neural networks without biases in order to prove Theorem 3. Functions computed by such neural networks are always positively homogeneous.

Figure 2: Some lattice segments and lattice polygons with their normalized volumes. Note that the normalized volume of a lattice polytope differs from its Euclidean volume, as it is measured with respect to the induced lattice in its affine hull.

Figure 3: The normalized volume of an affine product of \(P\) and \(Q\) can be calculated as the volume of the Cartesian product times the (integral) absolute value of the determinant of the linear map sending \(P\times Q\) (left) to \(P+Q\) (right): \(\operatorname{Vol}(P+Q)=\binom{1+1}{1}\cdot\left|\det\left(\begin{smallmatrix} -1&2\\ 2&1\end{smallmatrix}\right)\right|\cdot\operatorname{Vol}(P)\cdot\operatorname{ Vol}(Q)\).

Figure 4: The normalized volume of a join of \(P\) and \(Q\) can be calculated as the product of the normalized volumes of \(P\) and \(Q\) times the (integral) absolute value of the determinant of the linear map sending \(P\star Q\) (left) to \(\mathrm{conv}(P\cup Q)\) (right): \(\mathrm{Vol}(\mathrm{conv}(P\cup Q))=\left|\det\left(\begin{smallmatrix}-2&0&0 \\ 2&1&-1\\ 0&1&1\end{smallmatrix}\right)\right|\cdot\mathrm{Vol}(P)\cdot\mathrm{Vol}(Q)\), where \(P\star Q\) denotes an embedding of \(P\) and \(Q\) in orthogonal subspaces with distance \(1\).
Let \(f\) be a positively homogeneous, convex CPWL function. Convexity implies that we can write \(f\) as the maximum of its linear pieces, that is, \(f(x)=\max\{a_{1}^{\top}x,\ldots,a_{p}^{\top}x\}\). The _Newton polytope_\(P_{f}\) corresponding to \(f\) is defined as the convex hull of the coefficient vectors, that is, \(P_{f}=\mathrm{conv}\{a_{1},\ldots,a_{p}\}\). It turns out that the two basic operations \(+\) and \(\max\) (applied pointwise) on functions translate nicely to Minkowski sum and convex hull for the corresponding Newton polytopes: \(P_{f+g}=P_{f}+P_{g}\) and \(P_{\max\{f,g\}}=\mathrm{conv}\{P_{f}\cup P_{g}\}\), compare Zhang et al. (2018). Since a neural network is basically an alternation of affine transformations (that is, weighted sums) and maxima computations, this motivates the following definition of classes \(\mathcal{P}_{k}\) of polytopes which intuitively correspond to integral neural networks with \(k\) hidden layers. Note that the class \(\mathcal{P}_{k}\) contains polytopes of different dimensions. We begin with \(\mathcal{P}_{0}\), the set of lattice points. For \(k\geq 0\), we define \[\mathcal{P}^{\prime}_{k+1} =\{\mathrm{conv}(Q_{1}\cup Q_{2})\mid Q_{1},Q_{2}\in\mathcal{P}_ {k}\}\enspace, \tag{1}\] \[\mathcal{P}_{k+1} =\left\{Q_{1}+\cdots+Q_{\ell}\mid Q_{1},\ldots,Q_{\ell}\in \mathcal{P}^{\prime}_{k+1}\right\}\enspace.\] The correspondence of \(\mathcal{P}_{k}\) to a neural network with \(k\) hidden layers can be made formal by the following theorem. The difference in the theorem accounts for the fact that \(f\) is not necessarily convex, and even if it is, intermediate functions computed by the neural network might be non-convex. **Theorem 8**.: _A positively homogeneous CPWL function \(f\) can be represented by an integral \(k\)-hidden-layer neural network if and only if \(f=g-h\) for two convex, positively homogeneous CPWL functions \(g\) and \(h\) with \(P_{g},P_{h}\in\mathcal{P}_{k}\)._ A proof of the same theorem for the non-integral case can be found in Hertrich (2022, Thm. 3.35). A careful inspection of the proof therein reveals that it carries over to the integral case. For the sake of completeness, we provide a proof in the appendix.
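The duality is easy to experiment with when a convex positively homogeneous CPWL function is represented by the finite set of coefficient vectors of its linear pieces. The sketch below (ours; taking convex hulls is left implicit, since a generating point set already determines the Newton polytope) implements the two translation rules:

```python
from itertools import product

def newton_max(*point_sets):
    # P_max{f,g} = conv(P_f u P_g): take the union of the generating points.
    return set().union(*point_sets)

def newton_sum(P, Q):
    # P_{f+g} = P_f + P_g: Minkowski sum of the generating point sets.
    return {tuple(p_i + q_i for p_i, q_i in zip(p, q))
            for p, q in product(P, Q)}

# Linear functions correspond to single points: x1 <-> (1,0), x2 <-> (0,1), 0 <-> (0,0).
P_x1, P_x2, P_0 = {(1, 0)}, {(0, 1)}, {(0, 0)}
# max{0, x1, x2} has Newton polytope conv{(0,0), (1,0), (0,1)} = Delta_0^2:
print(newton_max(P_0, P_x1, P_x2))
# x1 + max{0, x2} has Newton polytope conv{(1,0), (1,1)}:
print(newton_sum(P_x1, newton_max(P_0, P_x2)))
```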
## 3 Subdividing Minkowski Sums and Convex Hulls The purpose of this section is to develop the main geometric tool of our proof: subdividing Minkowski sums and convex hulls of unions into affine products and joins, respectively. More precisely, we show the following two statements. They are valid for _general_ polytopes and not only for lattice polytopes. **Proposition 9**.: _For two polytopes \(P\) and \(Q\) in \(\mathbb{R}^{n}\), there exists a subdivision of \(P+Q\) such that each full-dimensional cell is an affine product of a face of \(P\) and a face of \(Q\)._ **Proposition 10**.: _For two polytopes \(P\) and \(Q\) in \(\mathbb{R}^{n}\), there exists a subdivision of \(\mathrm{conv}\{P\cup Q\}\) such that each full-dimensional cell is a join of a face of \(P\) and a face of \(Q\)._ The strategy to prove these statements is to lift the polytopes \(P\) and \(Q\) by one dimension to \(\mathbb{R}^{n+1}\) in a _generic_ way, perform the respective operation (Minkowski sum or convex hull) in this space, and obtain the subdivision by projecting the _lower faces_ of the resulting polytope in \(\mathbb{R}^{n+1}\) back to \(\mathbb{R}^{n}\). More precisely, given an \(\alpha\in\mathbb{R}^{n}\) and \(\beta\in\mathbb{R}\), we define \[P^{\mathbf{0}}\coloneqq\left\{(p,0)\mid p\in P\right\}\quad\text{and}\quad Q^{ \alpha,\beta}\coloneqq\left\{(q,\alpha^{\top}q+\beta)\mid q\in Q\right\}\enspace.\] Note that the projections \(\text{proj}_{[n]}(P^{\mathbf{0}})\) and \(\text{proj}_{[n]}(Q^{\alpha,\beta})\) onto the first \(n\) coordinates result in \(P\) and \(Q\), respectively. Moreover, we obtain subdivisions of \(P+Q\) and \(\operatorname{conv}\{P\cup Q\}\) by projecting down the lower faces of \(P^{\mathbf{0}}+Q^{\alpha,\beta}\) and \(\operatorname{conv}\{P^{\mathbf{0}}\cup Q^{\alpha,\beta}\}\), respectively, to the first \(n\) coordinates. It remains to show that the parameters can be chosen so that the subdivisions have the desired properties. To this end, we introduce the following notation for faces of the respective polytopes. For each \(c\in\mathbb{R}^{n}\setminus\{0\}\), let \[F_{c}^{\mathbf{0}}=\arg\min\left\{(c^{\top},1)z\mid z\in P^{\mathbf{0}} \right\}\quad\text{and}\quad G_{c}^{\alpha,\beta}=\arg\min\left\{(c^{\top},1 )z\mid z\in Q^{\alpha,\beta}\right\}\enspace,\] as well as \[F_{c}=\arg\min\{c^{\top}x\mid x\in P\}\quad\text{and}\quad G_{c}=\arg\min\{(c+ \alpha)^{\top}x\mid x\in Q\}\enspace.\] Observe that \(F_{c}=\text{proj}_{[n]}(F_{c}^{\mathbf{0}})\) and \(G_{c}=\text{proj}_{[n]}(G_{c}^{\alpha,\beta})\). With this, we can finally specify what we mean by "choosing \(\alpha\) and \(\beta\)_generically_": it means that the linear and affine hulls of \(F_{c}\) and \(G_{c}\) intersect as little as possible. **Lemma 11**.: _One can choose \(\alpha\) and \(\beta\) such that \(\operatorname{Lin}F_{c}\cap\operatorname{Lin}G_{c}=\{\mathbf{0}\}\) and \(\operatorname{Aff}F_{c}^{\mathbf{0}}\cap\operatorname{Aff}G_{c}^{\alpha,\beta}=\emptyset\) for every \(c\in\mathbb{R}^{n}\setminus\{0\}\)._ Using this lemma about generic choices of \(\alpha\) and \(\beta\), we have the tool to prove the existence of the desired subdivisions. The actual proofs are given in the appendix.
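The lifting construction can also be reproduced numerically. The sketch below (our illustration) uses SciPy's qhull wrapper; note that qhull triangulates non-simplicial lower faces, so one polygonal cell may be reported as several simplices, and the heights are assumed to be generic:

```python
import numpy as np
from scipy.spatial import ConvexHull

def regular_subdivision(points, heights):
    """Project the lower faces of conv{(p_i, h_i)} back to the points p_i.

    Returns, for each lower facet, the indices of the lifted points on it.
    """
    lifted = np.column_stack([points, heights])
    hull = ConvexHull(lifted)
    # A facet is a lower face iff its outward normal points downward,
    # i.e. the last coordinate of the normal (equations[:, -2]) is negative.
    return [simplex for eq, simplex in zip(hull.equations, hull.simplices)
            if eq[-2] < 0]

# Four points of a quadrilateral in R^2, lifted generically to R^3:
pts = np.array([[0.0, 0.0], [2.0, 0.0], [2.0, 2.0], [0.0, 2.0]])
print(regular_subdivision(pts, np.array([0.1, 0.0, 0.3, 0.2])))
```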
## 4 Using Normalized Volume to Prove Depth Lower Bounds In this section, we complete our proof by applying a parity argument to the normalized volume of cells in the subdivisions constructed in the previous section. Let \(\mathcal{Q}_{k}\) be the set of lattice polytopes \(P\) with the following property: for every face \(F\) of \(P\) with \(\dim(F)\geq 2^{k}\), we have that \(\operatorname{Vol}(F)\) is even. Note that the class \(\mathcal{Q}_{k}\) contains polytopes of different dimensions and, in particular, all lattice polytopes of dimension smaller than \(2^{k}\). Our plan to prove Theorem 3 is as follows. We first show that the classes \(\mathcal{Q}_{k}\) are closed under Minkowski addition and that taking convex hulls of unions of polytopes in \(\mathcal{Q}_{k}\) always gives a polytope in \(\mathcal{Q}_{k+1}\). Using this, induction guarantees that \(\mathcal{P}_{k}\subseteq\mathcal{Q}_{k}\). We then show that adding the simplex \(\Delta_{0}^{2^{k}}\) to a polytope in \(\mathcal{Q}_{k}\) gives a polytope which is never contained in \(\mathcal{Q}_{k}\). Combining this separation result with \(\mathcal{P}_{k}\subseteq\mathcal{Q}_{k}\) and Theorem 8 allows us to prove Theorem 3. The general intuition behind most of the proofs in this section is to use the subdivisions constructed in the previous section and argue about the volume of each cell in the subdivision separately, using the lemmas of Section 2.3. ### Closedness of \(\mathcal{Q}\) **Proposition 12**.: _For \(P,Q\in\mathcal{Q}_{k}\), we have that \(P+Q\in\mathcal{Q}_{k}\)._ The proof of Proposition 12 is based on the following lemma. **Lemma 13**.: _Let \(P,Q\in\mathcal{Q}_{k}\). If \(d\coloneqq\dim(P+Q)\geq 2^{k}\), then \(\operatorname{Vol}(P+Q)\) is even._ Proof of Lemma 13.: By Proposition 9, it follows that \(P+Q\) can be subdivided such that each full-dimensional cell \(C\) in the subdivision is an affine product of two faces \(F\) and \(G\) of \(P\) and \(Q\), respectively. Let \(i\coloneqq\dim(F)\) and \(j\coloneqq\dim(G)\); then \(d=i+j\). By Lemma 5, it follows that \(\operatorname{Vol}(C)=z\cdot\binom{d}{i}\cdot\operatorname{Vol}(F)\cdot \operatorname{Vol}(G)\) for some \(z\in\mathbb{Z}\). We argue now that this quantity is always even. If either \(i\) or \(j\) is at least \(2^{k}\), then \(\operatorname{Vol}(F)\) or \(\operatorname{Vol}(G)\) is even, respectively, because \(P\) and \(Q\) are contained in \(\mathcal{Q}_{k}\). Otherwise, \(d=i+j\geq 2^{k}\), but \(i,j<2^{k}\). In this case, \(\binom{d}{i}\) is always even, which is a direct consequence of Lucas' theorem (Lucas, 1878). In any case, for every cell \(C\) in the subdivision, \(\operatorname{Vol}(C)\) is even. Therefore, also the total volume of \(P+Q\) is even. Proof of Proposition 12.: To prove the proposition, we need to ensure that not only \(P+Q\), but every face of \(P+Q\) with dimension at least \(2^{k}\) has even normalized volume. If \(F\) is such a face, then, by basic properties of the Minkowski sum, it is the Minkowski sum of two faces \(P^{\prime}\) and \(Q^{\prime}\) of \(P\) and \(Q\), respectively. Since \(\mathcal{Q}_{k}\) is closed under taking faces, it follows that \(P^{\prime},Q^{\prime}\in\mathcal{Q}_{k}\). Hence, applying Lemma 13 to \(P^{\prime}\) and \(Q^{\prime}\), it follows that \(F\) has even volume. Doing so for all faces \(F\) with dimension at least \(2^{k}\) completes the proof. **Proposition 14**.: _For \(P,Q\in\mathcal{Q}_{k}\), we have that \(\operatorname{conv}(P\cup Q)\in\mathcal{Q}_{k+1}\)._ Again, the heart of the proof lies in the following lemma. **Lemma 15**.: _Let \(P,Q\in\mathcal{Q}_{k}\). If \(d\coloneqq\dim(\operatorname{conv}(P\cup Q))\geq 2^{k+1}\), then \(\operatorname{Vol}(\operatorname{conv}(P\cup Q))\) is even._ We defer the proofs of Lemma 15 and Proposition 14 to the Appendix, as they are analogous to those of Lemma 13 and Proposition 12, building on the respective claims for convex hulls (Lemma 6 and Proposition 10) instead of Minkowski sums.
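The parity step in the proof of Lemma 13 can be verified by brute force: by Kummer's theorem, the exponent of \(2\) in \(\binom{i+j}{i}\) equals the number of carries when adding \(i\) and \(j\) in base 2, and \(i,j<2^{k}\leq i+j\) forces at least one carry. A small check (ours, not part of the paper):

```python
from itertools import product
from math import comb

# If i, j < 2**k but i + j >= 2**k, the bit at position >= k of the sum
# can only come from a carry, so C(i+j, i) is even (Kummer/Lucas).
for k in range(1, 6):
    n = 2 ** k
    assert all(comb(i + j, i) % 2 == 0
               for i, j in product(range(n), repeat=2) if i + j >= n)
```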
**Theorem 16**.: _For all \(k\in\mathbb{N}\) it is true that \(\mathcal{P}_{k}\subseteq\mathcal{Q}_{k}\)._ Proof.: We prove this statement by induction on \(k\). The class \(\mathcal{P}_{0}\) contains only points, so no polytope in \(\mathcal{P}_{0}\) has a face of dimension at least \(2^{0}=1\). Therefore, it trivially follows that \(\mathcal{P}_{0}\subseteq\mathcal{Q}_{0}\), settling the induction start. The induction step follows by applying Proposition 12 and Proposition 14 to the definition (1) of the classes \(\mathcal{P}_{k}\). ### The odd one out The final ingredient for Theorem 3 is to show how \(\Delta_{0}^{n}\) breaks the structure of \(\mathcal{Q}_{k}\). **Proposition 17**.: _If \(P\in\mathcal{Q}_{k}\) is a polytope in \(\mathbb{R}^{n}\) with \(n=2^{k}\), then \(P+\Delta_{0}^{n}\notin\mathcal{Q}_{k}\)._ Proof.: Applying Proposition 9 to \(P\) and \(Q=\Delta_{0}^{n}\), we obtain that \(P+\Delta_{0}^{n}\) can be subdivided such that each full-dimensional cell \(C\) is an affine product of a face \(F\) of \(P\) and a face \(G\) of \(\Delta_{0}^{n}\). As in the proof of Lemma 13, it follows that all these cells have even volume, with one exception: if \(\dim F=0\) and \(\dim G=2^{k}\). Revisiting the proof of Proposition 9 shows that there exists exactly one such cell \(C\) in the subdivision. This cell is a translated version of \(\Delta_{0}^{n}\), so it has volume \(\operatorname{Vol}(C)=1\). Since all cells in the subdivision have even volume except for one cell with odd volume, we obtain that \(\operatorname{Vol}(P+\Delta_{0}^{n})\) is odd. Hence, \(P+\Delta_{0}^{n}\) cannot be contained in \(\mathcal{Q}_{k}\). **Theorem 3**.: _For \(n=2^{k}\), the function \(\max\{0,x_{1},\ldots,x_{n}\}\) is not contained in \(\operatorname{ReLU}_{n}^{\mathbb{Z}}(k)\)._ Proof.: Suppose for the sake of a contradiction that there is a neural network with integer weights and \(k\) hidden layers computing \(f(x)=\max\{0,x_{1},\ldots,x_{n}\}\). Since the Newton polytope of \(f\) is \(P_{f}=\Delta_{0}^{n}\), Theorem 8 yields the existence of two polytopes \(P,Q\in\mathcal{P}_{k}\) in \(\mathbb{R}^{n}\) with \(P+\Delta_{0}^{n}=Q\). By Theorem 16 we obtain \(P,Q\in\mathcal{Q}_{k}\). This is a contradiction to Proposition 17. From this we conclude that the set of functions representable with integral ReLU neural networks strictly increases when adding more layers. **Corollary 4**.: \(\operatorname{ReLU}_{n}^{\mathbb{Z}}(k-1)\subsetneq\operatorname{ReLU}_{n}^{\mathbb{Z}}(k)\) _for all \(k\leq\lceil\log_{2}(n+1)\rceil\)._ Proof.: Note that \(k\leq\lceil\log_{2}(n+1)\rceil\) implies \(2^{k-1}\leq n\). Let \(f\colon\mathbb{R}^{n}\to\mathbb{R}\) be the function \(f(x)=\max\{0,x_{1},\ldots,x_{2^{k-1}}\}\). By Theorem 3 we get \(f\notin\operatorname{ReLU}_{n}^{\mathbb{Z}}(k-1)\). In contrast, it is not difficult to see that \(k-1\) hidden layers with integer weights are sufficient to compute \(\max\{x_{1},\ldots,x_{2^{k-1}}\}\), compare for example Zhang et al. (2018); Hertrich (2022). Appending a single ReLU neuron to the output of this network implies \(f\in\operatorname{ReLU}_{n}^{\mathbb{Z}}(k)\). #### Acknowledgments Christian Haase has been supported by the German Science Foundation (DFG) under grant HA 4383/8-1. Christoph Hertrich is supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement ScaleOpt-757481).
2308.08550
Recurrent Neural Networks with more flexible memory: better predictions than rough volatility
We extend recurrent neural networks to include several flexible timescales for each dimension of their output, which mechanically improves their abilities to account for processes with long memory or with highly disparate time scales. We compare the ability of vanilla and extended long short term memory networks (LSTMs) to predict asset price volatility, known to have a long memory. Generally, the number of epochs needed to train extended LSTMs is divided by two, while the variation of validation and test losses among models with the same hyperparameters is much smaller. We also show that the model with the smallest validation loss systematically outperforms rough volatility predictions by about 20% when trained and tested on a dataset with multiple time series.
Damien Challet, Vincent Ragel
2023-08-04T14:24:57Z
http://arxiv.org/abs/2308.08550v1
# Recurrent neural networks with more flexible memory: better predictions than rough volatility ###### Abstract We extend recurrent neural networks to include several flexible timescales for each dimension of their output, which mechanically improves their abilities to account for processes with long memory or with highly disparate time scales. We compare the ability of vanilla and extended long short term memory networks (LSTMs) to predict asset price volatility, known to have a long memory. Generally, the number of epochs needed to train extended LSTMs is divided by two, while the variation of validation and test losses among models with the same hyperparameters is much smaller. We also show that the model with the smallest validation loss systematically outperforms rough volatility predictions by about 20% when trained and tested on a dataset with multiple time series. Time series, Long memory, Recurrent Neural Networks, Rough Volatility, Volatility modelling ## 1 Introduction Some time series in Nature have a very long memory (Robinson, 2003): fluid turbulence (Resagk et al., 2006), asset price volatility (Cont, 2001) and tick-by-tick events in financial markets (Challet and Stinchcombe, 2001; Lillo and Farmer, 2004). From a modelling point of view, this means that the current value of an observable of interest depends on the past via a convolution of itself with a long-tailed kernel. Deep learning tackles past dependence in time series with recurrent neural networks (RNNs). These networks are in essence moving averages of nonlinear functions of the inputs and learn the parameters of these averages and functions. Provided that they are sufficiently large, these networks can approximate long-tailed kernels in a satisfactory way, and are of course able to account for more complex problems than a simple linear convolution. Yet, their flexibility may prevent them from learning quickly and efficiently the long memory of time series. Several solutions exist: either one pre-filters the data by computing statistics at various time scales and uses them as inputs to RNNs, in the same spirit as multi-time scale volatility modelling (Zumbach and Lynch, 2001; Corsi, 2009), see e.g. Kim and Won (2018), or one extends the neural networks so as to improve their abilities. For example, Zhao et al. (2020) add delay operators, taking inspiration from ARIMA processes, to the states of recurrent neural networks, while Ohno and Kumagai (2021) modify the update equation of the network output so that its dynamics mimics that of a variable with a long memory. In both cases, the time dependence structure is enforced by hand in the dynamics of such networks. Here, we propose a flexible and parsimonious way to extend the long-memory abilities of recurrent neural networks by using an old trick for approximating long-memory kernels with exponential functions, which helps recurrent neural networks learn time series with long memory faster and better. Our main contributions are: (i) we introduce RNNs with several flexible time scales for each dimension of their output; (ii) we show that learning to predict time series with long memory (asset price volatility) is faster and more reliably good with more flexible time scales; (iii) rough volatility predictions can be beaten by training a fair number of recurrent neural networks and only using the one with the best validation loss. 
## 2 Methods Let a time series \(y_{t}\) be of interest. Its moving average can be written \[\tilde{y}_{t}=\int_{-\infty}^{t}K(t-t^{\prime})\,y_{t^{\prime}}\,dt^{\prime}, \tag{1}\] where \(K\) is a kernel. In a discrete-time context, \[\tilde{y}_{t}=\sum_{t^{\prime}=-\infty}^{t}K(t-t^{\prime})\,y_{t^{\prime}}. \tag{2}\] When the process is Markovian, its kernel \(K(x)\simeq e^{-x/\tau_{0}}\) for large \(x\), where \(\tau_{0}\) is the slowest timescale at which the process forgets its past. In this case, one can write \(\tilde{y}_{t}\) in a recursive way, \[\tilde{y}_{t}=\tilde{y}_{t-1}(1-\lambda)+\lambda y_{t}, \tag{3}\] where \(\lambda\simeq 1/\tau_{0}\); \(\tilde{y}_{t}\) is then an exponentially moving average (EMA) of \(y_{t}\). Long-memory processes, however, have a kernel that decreases at least as slowly as a power law. In turn, power laws can be approximated by a sum of exponential functions: naively, if \(K(x)=x^{-\alpha}\), one writes \[K(x)\propto\sum_{i=1}^{\infty}w_{i}\exp(-x/\tau_{i}) \tag{4}\] with \(w_{i}\propto(1/c^{\alpha})^{i}\) and \(\tau_{i}=c^{i}\) for a well-chosen constant \(c\): one covers the \(x\) space in a geometric way, and the weights \(w_{i}\) account for the power-law decreasing nature of \(K(x)\). This rough approach works well and is widespread. Bochud and Challet (2007) propose a more refined method to determine how many exponential functions one needs to approximate \(K\) optimally and how to compute the \(w_{i}\) for a given \(\alpha\) and a given range of \(x\) over which the kernel has to be approximated by a sum of exponential functions (e.g. \(x\in[1,1000]\)); for example, one needs about 4 exponential functions to approximate 3 decades. Writing down the update equations of well-known recurrent neural network architectures makes it clear that they use exponentially moving averages with a single time scale for each output dimension. For example, Gated Recurrent Units (GRUs) (Cho et al., 2014) transform the input vector \(x_{t}\) into a vector of timescales \(\lambda_{t}\) defined as \[\lambda_{t}=\sigma(W_{\lambda}x_{t}+U_{\lambda}c_{t-1}+b_{\lambda}) \tag{5}\] which is then used in the update of the output \(c_{t}\), \[c_{t}=c_{t-1}\odot(1-\lambda_{t})+\lambda_{t}\odot\tilde{c}_{t}, \tag{6}\] where the update \(\tilde{c}_{t}\) is also computed from the input with learned weights, i.e., \[\tilde{c}_{t} =\sigma_{c}(W_{c}x_{t}+U_{c}(c_{t-1}\odot r_{t})+b_{c}) \tag{7}\] \[r_{t} =\sigma_{r}(W_{r}x_{t}+U_{r}c_{t-1}+b_{r}) \tag{8}\] for nonlinear functions \(\sigma_{c}\) and \(\sigma_{r}\); \(\odot\) is the element-wise (Hadamard) product and \(r_{t}\) the reset gate, which modifies the value of \(c_{t-1}\) when computing \(\tilde{c}_{t}\). By design, GRUs can only compute exponentially moving averages of \(\tilde{c}_{t}\), although they possess the interesting ability to learn both \(\lambda_{t}\) and the update \(\tilde{c}_{t}\) as functions of their inputs. It is straightforward to extend GRUs to an arbitrary number of timescales \(n\) by using \(n\) EMAs \(c_{t}^{(k)}\), \(k=1,\cdots,n\), with \[\lambda_{t}^{(k)} =\sigma(W_{\lambda}^{(k)}x_{t}+U_{\lambda}^{(k)}c_{t-1}^{(k)}+b_{ \lambda}^{(k)}) \tag{9}\] \[c_{t}^{(k)} =c_{t-1}^{(k)}\odot(1-\lambda_{t}^{(k)})+\lambda_{t}^{(k)}\odot\tilde {c}_{t}, \tag{10}\] where each \(c_{t}^{(k)}\) is an exponentially moving average at time scale \(\sim 1/\lambda_{t}^{(k)}\). 
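To make Eq. (4) concrete, the following sketch (ours) fits the weights \(w_{i}\) by plain least squares on geometrically spaced timescales; the values of \(\alpha\), \(c\) and the number of exponentials are arbitrary illustrative choices, and Bochud and Challet (2007) give a more principled procedure:

```python
import numpy as np

# Approximate K(x) = x**(-alpha) on x in [1, 1000] by a sum of
# exponentials with geometric timescales tau_i = c**i.
alpha, c, n_exp = 0.4, 5.0, 5
x = np.logspace(0, 3, 500)
taus = c ** np.arange(n_exp)                 # 1, 5, 25, 125, 625
E = np.exp(-x[:, None] / taus[None, :])      # basis of exponentials
w, *_ = np.linalg.lstsq(E, x ** -alpha, rcond=None)
rel_err = np.abs(E @ w - x ** -alpha) / x ** -alpha
print(f"max relative error with {n_exp} exponentials: {rel_err.max():.3f}")
```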
Finally, the output will be \[c_{t}=\sum_{k=1}^{n}w_{k}c_{t}^{(k)}. \tag{11}\] The simple \(\alpha\)-RNNs (Dixon and London, 2021), which are simplified GRUs, share the same assumption of a single time scale per output dimension and can thus be generalized in the same way. Let us now show how to extend LSTMs with a forget gate (Gers et al., 2000). Starting from their output \(h_{t}\), one has \[h_{t} =o_{t}\odot\sigma_{h}(c_{t}) \tag{12}\] \[c_{t} =f_{t}\odot c_{t-1}+i_{t}\odot\tilde{c}_{t}, \tag{13}\] where \(o_{t}\), \(i_{t}\), and \(\tilde{c}_{t}\) are determined from the input \(x_{t}\) and the previous output \(h_{t-1}\) with learned weights, and \(\sigma_{h}\) is a nonlinear function. Writing \(f_{t}=1-\lambda_{t}\) makes it obvious that the cell vector \(c_{t}\) evolves in the same way as the EMA in Eq. (6) if \(i_{t}\simeq\lambda_{t}\). Extending LSTMs to include \(n\) time scales per cell-state dimension is therefore straightforward: one needs to compute \(n\) EMAs and their associated \(\lambda\)s as follows \[f_{t}^{(k)} =\sigma(W_{f}^{(k)}x_{t}+U_{f}^{(k)}h_{t-1}+b_{f}^{(k)}) \tag{14}\] \[i_{t}^{(k)} =\sigma(W_{i}^{(k)}x_{t}+U_{i}^{(k)}h_{t-1}+b_{i}^{(k)})\] (15) \[c_{t}^{(k)} =f_{t}^{(k)}\odot c_{t-1}^{(k)}+i_{t}^{(k)}\odot\tilde{c}_{t}, \tag{16}\] where the aggregated cell state \(c_{t}\) again follows Eq. (11). Note that one could set \(i_{t}^{(k)}=1-f_{t}^{(k)}=\lambda_{t}^{(k)}\) and not learn the weights associated with \(i_{t}\). Learning \(i_{t}^{(k)}\) as well is equivalent to modulating the importance of the update, which is referred to as a cognitive bias (Palminteri et al., 2017): this is made clear by writing \(i_{t}^{(k)}=v_{t}^{(k)}(1-f_{t}^{(k)})=v_{t}^{(k)}\lambda_{t}^{(k)}\), where \(v_{t}^{(k)}\) is the modulation of the learning speed. We will focus on the case \(n=2\), for which Eq. (11) amounts to \[c_{t}=c_{t}^{(1)}\odot\alpha+(1-\alpha)\odot c_{t}^{(2)}, \tag{17}\] where the vector \(\alpha\) is learnable and its components are bounded to the \([0,1]\) interval. We call LSTMs with several time scales (\(n>1\)) per dimension VLSTMs, which stands for very long short-term memory. Note that LSTMs with a sufficiently large cell dimension \(N_{h}\) can in principle learn to superpose timescales in the same way as Eq. (11) by learning one time scale per dimension and using a final dense layer to learn how to combine them. However, imposing constraints (or equivalently, injecting some known structure) is known to lead to faster learning and better results (e.g. physics-guided deep learning; see Thuerey et al. (2021) for a review). Naively, when learning to predict a process that is not too noisy, we expect the difference between VLSTMs and LSTMs to be largest when \(N_{h}=1\), i.e. precisely when LSTMs do not have the possibility to compute long-term averages, and to decrease when \(N_{h}\) increases. ### Volatility prediction Given an asset price \(P_{t}\) and its log return \(r_{t}=\log P_{t}-\log P_{t-1}\), the asset price volatility \(\sigma\) is defined by \(\sigma^{2}=E(r^{2})\). The dynamics of financial markets is ever-changing, which results in a temporal dependence of \(\sigma\) with clear patterns of long-term dependence (Cont, 2001) (see Fig. 1). Risk management, portfolio optimization, and option pricing benefit from the ability to predict the evolution of \(\sigma_{t}\). Fortunately, \(\sigma_{t}\) is relatively easy to predict, owing to its long memory (Cont, 2001): for example, its auto-correlation decreases very slowly, presumably as a power law over more than a year. 
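For concreteness, a single VLSTM time step following Eqs. (12)-(17) can be sketched as follows (our NumPy illustration, not the Keras class released with the paper; parameter names and shapes are hypothetical, and the matrices act on the concatenation of \(x_{t}\) and \(h_{t-1}\), which is equivalent to separate \(W\) and \(U\) matrices):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def vlstm_step(x_t, h_prev, c_prev, p, alpha):
    """One VLSTM step with n = 2 timescales per cell dimension.

    c_prev: list [c1, c2] of the two cell states; p: dict of weight
    matrices and biases; alpha: learnable mixing vector in [0, 1].
    """
    z = np.concatenate([x_t, h_prev])
    c_tilde = np.tanh(p["W_c"] @ z + p["b_c"])         # shared candidate update
    o_t = sigmoid(p["W_o"] @ z + p["b_o"])             # output gate
    new_c = []
    for k in (0, 1):
        f_k = sigmoid(p["W_f"][k] @ z + p["b_f"][k])   # forget gate, Eq. (14)
        i_k = sigmoid(p["W_i"][k] @ z + p["b_i"][k])   # input gate,  Eq. (15)
        new_c.append(f_k * c_prev[k] + i_k * c_tilde)  # EMA update,  Eq. (16)
    c_t = alpha * new_c[0] + (1.0 - alpha) * new_c[1]  # mixing,      Eq. (17)
    h_t = o_t * np.tanh(c_t)                           # output,      Eq. (12)
    return h_t, new_c
```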
Econometric models include GARCH, whose simplest version involves only one timescale, while many variations use several time scales (Zumbach and Lynch, 2001; Corsi, 2009; Zumbach, 2015). Rough volatility (Gatheral et al., 2018), on the other hand, considers \(\log\sigma_{t}\) as a fractional Brownian motion and thus includes all the time scales. As can be expected, rough volatility models outperform GARCH-based models for volatility prediction. Using LSTMs for volatility prediction is found e.g. in Kim and Won (2018); Filipovic and Khalilzadeh (2021); Rosenbaum and Zhang (2022), which use various types of predictors (including GARCH models) and architectures. Notably, Rosenbaum and Zhang (2022) show that the average prediction of 10 stacked LSTMs with past volatility and price return as predictors matches the performance of rough volatility. ### Architecture and hyperparameters Our first aim is to characterise the effects of multiple time scales per cell dimension. Therefore, we compare simple non-stacked LSTMs with or without the proposed modification. Stacked LSTMs can learn additional time scales at the cost of doubling the number of parameters, which we precisely wish to avoid here. We pass the outputs \(h_{t}\) of the LSTMs and VLSTMs through a dense layer of size \(N_{h}\) with sigmoid activation functions, so as to combine the outputs in a non-trivial way, and a final dense layer with linear activation. Both final layers have a bias term, which allows the model to learn a baseline volatility level. We report a systematic study of the relative performance of LSTMs vs VLSTMs. We vary the sequence length \(T_{\text{seq}}\) from 10 to 100 by steps of 15, and the dimension of the hidden state \(N_{h}\in\{1,\cdots,5\}\). Finally, we train models with and without biases (except for the final two dense layers, which always have biases). There are thus 70 variations of hyperparameters per architecture choice. For each hyperparameter and architecture couple, we train 20 networks, which yields 2800 models altogether. We use a standard 60/20/20 train/validation/test split and apply an early-stopping criterion based on the minimum validation loss over the last 5 epochs, with a maximum of 1000 epochs. Batch size is set to 128. We train the networks to predict \(\log\sigma_{t+1}\) with an MSE loss function. Data comes from the realized volatility computed by the Oxford-Man Institute from intraday data with the two-scale realized kernel estimation method (Barndorff-Nielsen et al., 2008), which contains volatility time series for 31 indices and 2117 to 5385 data points per index (Heber et al., 2022). Since the individual volatility time series start and end at heterogeneous dates, we used the dates to define the train/validation/test splits: the train set ranges from 2000-01-04 to 2012-09-06, the validation set from 2012-09-07 to 2016-11-23, and the test set from 2016-11-24 to 2021-02-17. This is necessary as the time series are cross-correlated; hence, splitting them according to their respective lengths would cause information leakage from the future and thus overfitting. ## 3 Results ### Average loss Let us plot the average test loss of LSTMs and VLSTMs as a function of \(N_{h}\), the dimension of the memory cell, at fixed \(T_{\text{seq}}\), and as a function of \(T_{\text{seq}}\) at fixed \(N_{h}\). This approach is taken by Rosenbaum and Zhang (2022), who trained 10 LSTMs instead of the 20 used here. Figure 2 shows that VLSTMs enjoy a sizeable advantage on average. 
We note that when \(N_{h}=1\), our initial intuition was correct: VLSTMs have a smaller average test loss for all variations of hyperparameters (\(T_{\text{seq}}\) and bias). Large loss fluctuations among models are associated with large average test losses for both VLSTMs and LSTMs; however, test losses of VLSTMs are more likely to be small (and to have accordingly small fluctuations). This is explained by a large difference in training convergence time, as shown below. We also note that, at least for volatility prediction, keeping bias terms in the computation of \(i\), \(f\), \(\tilde{c}\), and \(c\) (referred to as internal biases henceforth) is manifestly problematic; it turns out to be the default option in both PyTorch and Keras and is probably implicitly used in other papers. On the whole, we note that a simple average of the outputs of an ensemble of models leads to quite large fluctuations; hence, the question of the convergence of the models must be investigated, and a way to select the good models would much improve the usefulness of LSTMs in this context.

Figure 1: Volatility of the SPX index (annualized) as a function of time, showing traces of long-term dependence. Data source: Oxford-Man Institute (Heber et al., 2022).

Training convergence, it turns out, is a hit-and-miss process: some models are stuck in a high-loss regime, while some models do learn a more realistic dynamical process and reach much lower losses. This yields a bimodal density of losses (see Fig. 3). It is noteworthy that the fraction of VLSTMs that learn better is much larger. This is linked to the fact that VLSTMs learn much faster (see below) and that VLSTMs without internal biases are less likely to be stuck in a high-loss regime. Since the volatility process is well approximated, e.g., by a rough volatility model (Gatheral et al., 2018), the test loss is commensurate with the validation loss as expected, itself commensurate with the train loss. We plot in Fig. 3 the test loss versus the validation loss, which shows that test losses are accordingly also bimodal, with a majority of models not stuck in the high-loss regime, some having a test loss smaller than that of rough volatility models. ### Keeping the better models This result suggests a way to select the good models, since the validation loss distribution is bimodal and the test losses are roughly proportional to validation losses. To select models whose validation loss belongs to the lower peak, we compute 9 quantiles \(q(p)\) with the regular sequence of probabilities \(p=0.1,\cdots,0.9\), and keep the models whose validation loss is smaller than the quantile corresponding to the maximum change between consecutive quantiles, a simple yet effective way to find well-separated peaks (see the sketch below). We call these models the better ones in the following. This procedure allows a fairer comparison between LSTMs and VLSTMs.
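One possible implementation of this selection rule reads as follows (our sketch; in particular, whether the cutoff is the quantile just below or just above the largest gap is our reading of the rule):

```python
import numpy as np

def keep_better(val_losses):
    """Keep the models whose validation loss lies in the lower mode.

    Computes the deciles q(0.1), ..., q(0.9) and thresholds at the
    quantile where the gap between consecutive quantiles is largest.
    Returns the indices of the retained models.
    """
    q = np.quantile(val_losses, np.arange(0.1, 1.0, 0.1))
    cutoff = q[np.argmax(np.diff(q))]
    return [i for i, v in enumerate(val_losses) if v < cutoff]
```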
VLSTMs are still better than LSTMs, even for larger \(N_{h}\). Figure 4 plots the average test loss of the models with below-average validation loss versus \(N_{h}\) and the sequence length. The test losses are now much closer, but VLSTMs still retain a sizable advantage: their test losses are both lower on average and their fluctuations are much smaller. Both the variability of the results and the strange results for \(N_{h}=1\) when biases are allowed in the computation of the internal states of (V)LSTMs can be traced back to training convergence problems. A simple way to ascertain the main difference between VLSTMs and LSTMs is to measure the time it takes for their training to converge, i.e., to reach the early-stopping criterion.

Figure 2: Volatility prediction. Upper plots: mean test loss vs the memory cell dimension \(N_{h}\) (\(T_{\text{seq}}=40\)); lower plots: mean test loss vs the sequence length \(T_{\text{seq}}\) (\(N_{h}=2\)). Left plots: (V)LSTMs with bias weights; right plots: (V)LSTMs with no bias weights. The dashed line is the average MSE of predictions made with rough volatility models.

Figure 3: Left plot: density of validation losses by architecture. Right plot: test loss vs validation loss. Multiple volatility time series prediction. The dashed line is the average MSE of predictions made with rough volatility models.

Figure 4: Multiple volatility time series prediction: test losses of the models with below-average validation losses. Upper plots: mean test loss vs the memory cell dimension \(N_{h}\) (\(T_{\text{seq}}=100\)); lower plots: mean test loss vs the sequence length \(T_{\text{seq}}\) (\(N_{h}=3\)). Left plots: (V)LSTMs with bias weights; right plots: (V)LSTMs with no bias weights. The dashed line is the average MSE of predictions made with rough volatility models.

Figure 5 reports the fraction of models that have converged as a function of the number of epochs (limited to 1000). LSTMs need more epochs to be trained. We also found that the case \(N_{h}=1\) and small \(T_{\text{seq}}\) is hard to learn for this kind of architecture, the training of many models requiring more than 1000 epochs to reach the early-stopping criterion.

Figure 5: Fraction of models having converged before a given number of epochs. Left plot: \(N_{h}=1\); right plot: all \(N_{h}\); multiple time series volatility prediction, \(T_{\text{seq}}=100\).

### Best model Finally, let us investigate the test loss of the model with the best validation loss among the 20 models trained for each of the 140 hyperparameter/architecture choices. It turns out that under these conditions VLSTMs and LSTMs have essentially the same performance. What differentiates them, however, is the speed at which they learn. Let us plot the test loss versus the time of convergence for LSTMs and VLSTMs with and without biases (left plot of Fig. 6): there is a slight negative dependence between test loss and convergence times; the longer one learns, the better. Notably, convergence times of LSTMs are spread over the whole \([1,1000]\) interval, while VLSTMs converge before 400 epochs. The right plot of Fig. 6 displays the ECDF of the convergence times, which shows a sizable difference between LSTMs and VLSTMs: whereas 20% of LSTM models do not manage to converge before 1000 epochs, all VLSTMs do before 400, except one, when biases are allowed. Thus, training a given number of models takes significantly less time with VLSTMs, because they do not need to learn how to approximate the kernel \(K(x)\). One also sees that models with internal biases converge more slowly than those without them. We also wish to point out that the fluctuation of validation losses among the trained models is much smaller for VLSTMs than for LSTMs; hence, in practice, one needs to train fewer VLSTMs than LSTMs before finding a good one. ## 4 Conclusion Adding an explicit but flexible kernel structure to LSTMs brings significant improvements in every metric: number of epochs needed to reach convergence, overall prediction accuracy, and accuracy variation between models at fixed parameters.
\begin{table} \begin{tabular}{l c c c} Architecture & bias & test loss & test loss \\ & & average & std dev. \\ \hline rough vol. & & 0.288 & \\ LSTM & yes & 0.241 & 0.032 \\ LSTM & no & 0.245 & 0.057 \\ VLSTM & yes & 0.232 & 0.017 \\ VLSTM & no & 0.230 & 0.015 \\ \end{tabular} \end{table} Table 1: Mean and standard deviation of the test losses of the better models, computed over all the values of \(N_{h}\) and \(T_{\text{seq}}\); multiple time series volatility prediction.

There is a cost, as the number of trainable parameters of VLSTMs is larger than that of LSTMs at fixed hyperparameters; however, doubling the number of time scales does not require doubling the number of trainable parameters, thanks to the explicit kernel approximation structure. Although this paper focuses on LSTMs, the same idea can be applied to GRUs and \(\alpha\)-RNNs in a straightforward way. Our results mirror those of Rosenbaum and Zhang (2022): we also succeeded in training a single model at a time on many volatility time series of various underlying asset types. This reflects the universality of volatility dynamics, a fact also hinted at by rough volatility and multi-scale GARCH-like models (Zumbach, 2015). While it is hard to beat rough volatility for volatility prediction, we found that even simple LSTMs can beat it, provided that one trains several models and selects the best one according to its validation loss. Using LSTMs for that purpose requires training more models over more epochs than using VLSTMs. Volatility prediction can be further improved by adding some more features, such as prior knowledge of predictable special events, and possibly by using more complex neural architectures. ## 5 Code and data availability Full code, including the Keras VLSTM class, and data are available at [https://github.com/damienchallet/VLSTM](https://github.com/damienchallet/VLSTM). ## Acknowledgments This work used HPC resources from the "Mesocentre" computing center of CentraleSupelec and Ecole Normale Superieure Paris-Saclay, supported by CNRS and Region Ile-de-France.
2304.01879
A neural network-based scale-adaptive cloud-fraction scheme for GCMs
Cloud fraction significantly affects the short- and long-wave radiation. Its realistic representation in general circulation models (GCMs) still poses great challenges in modeling the atmosphere. Here, we present a neural network-based diagnostic scheme that uses the grid-mean temperature, pressure, liquid and ice water mixing ratios, and relative humidity to simulate the sub-grid cloud fraction. The scheme, trained using CloudSat data with explicit consideration of grid sizes, realistically simulates the observed cloud fraction with a correlation coefficient (r) > 0.9 for liquid-, mixed-, and ice-phase clouds. The scheme also captures the observed non-monotonic relationship between cloud fraction and relative humidity, is computationally efficient, and is robust for GCMs with a variety of horizontal and vertical resolutions. For illustrative purposes, we conducted comparative analyses of the 2006-2019 climatological-mean cloud fractions among CloudSat, and simulations from the new scheme and the Xu-Randall scheme (optimized the same way as the new scheme). The network-based scheme improves not only the spatial distribution of the total cloud fraction but also the cloud vertical structure (r > 0.99). For example, the biases of too-many high-level clouds over the tropics and too-many low-level clouds over regions around 60°S and 60°N in the Xu-Randall scheme are significantly reduced. These improvements are also found to be insensitive to the spatio-temporal variability of large-scale meteorology conditions, implying that the scheme can be used in different climate regimes.
Guoxing Chen, Wei-Chyung Wang, Shixi Yang, Yixin Wang, Feng Zhang, Kun Wu
2023-04-04T15:33:39Z
http://arxiv.org/abs/2304.01879v2
# A Neural Network-Based Scale-Adaptive Cloud-Fraction Scheme for GCMs ###### Abstract Cloud fraction (CF) significantly affects the short- and long-wave radiation. Its realistic representation in general circulation models (GCMs) still poses great challenges in modeling the atmosphere. Here, we present a neural network-based (NN-based) diagnostic scheme that uses the grid-mean temperature, pressure, liquid and ice water mixing ratios, and relative humidity to simulate the sub-grid CF. The scheme, trained using CloudSat data with explicit consideration of grid sizes, realistically simulates the observed CF with a correlation coefficient >0.9 for liquid-, mixed-, and ice-phase clouds. The scheme also captures the observed non-monotonic relationship between CF and relative humidity and is computationally efficient, and robust for GCMs with a variety of horizontal and vertical resolutions. For illustrative purposes, we conducted comparative analyses of the 2006-2019 climatological-mean cloud fractions among CloudSat, and simulations from the NN-based scheme and the Xu-Randall scheme (optimized the same way as the NN-based scheme). The NN-based scheme improves not only the spatial distribution of the total CF but also the cloud vertical structure. For example, the biases of too-many high-level clouds over the tropics and too-many low-level clouds over regions around 60\({}^{\circ}\)S and 60\({}^{\circ}\)N in the Xu-Randall scheme are significantly reduced. These improvements are also found to be insensitive to the spatio-temporal variability of large-scale meteorology conditions, implying that the scheme can be used in different climate regimes. Cloud fraction (CF) is crucial to cloud climate effects of both reflecting shortwave radiation and absorbing/emitting longwave radiation. However, the simulation of CF in general circulation models (GCMs) has been difficult, because most clouds are smaller than the typical scales of GCM grids and cannot be resolved by the grid-scale physics, while the physical understanding of sub-grid processes is still inadequate. Thus, this study uses a data-driven approach, that is, a neural network, to parameterize the sub-grid CF in climate models. The database for training and evaluating this NN-based scheme is obtained by upscaling the CloudSat (quasi-) observational data and emulating the GCM "grid-mean" properties required for cloud-fraction parameterization, minimizing the data-oriented biases. Moreover, the effects of the GCM horizontal and vertical grid sizes are both considered in the network, increasing the scheme adaptivity for use in GCMs with different resolutions. Results show that the NN-based scheme correctly predicts observed features of cloud-fraction variation with cloud condensate content and relative humidity for clouds of different phases and better predicts total cloud-fraction spatial distribution and cloud vertical structure than the conventional Xu-Randall scheme. This suggests that the NN-based scheme has the potential to reduce the biases of cloud radiative effects existing in current GCMs. ## 1 Introduction Clouds play important roles in the Earth climate system. They dominate the energy budget by reflecting shortwave radiation and trapping longwave radiation (Wild et al., 2019), participate in the hydrological cycle via precipitation, and alter mass and energy vertical profiles by cloud venting (G. Chen et al., 2012; Yin et al., 2005) and latent-heat release. 
On the other hand, clouds are strongly coupled with aerosols and meteorology (Stevens & Feingold, 2009), involving complex feedbacks that span several temporal and spatial scales (e.g., G. Chen, Wang, et al., 2018; G. Chen, Yang, et al., 2018; Lau et al., 2006; Song et al., 2019; Xue et al., 2008). However, clouds are sub-grid scale in nature for current climate models, and cloud macro- and micro-physical properties in models have to be parameterized using the grid-mean atmospheric properties, making clouds the main source of most uncertainties in studies on climate change and climate modeling (e.g., Caldwell et al., 2016; Stevens et al., 2016).

Specifically, cloud-fraction parameterization poses a major challenge in climate modeling. Findings in the recent Coupled Model Intercomparison Project Phase 6 (CMIP6) revealed that current general circulation models (GCMs) have biases in both the daily-mean CF and the cloud diurnal variation. For example, for the former, the simulated CF is too small over the tropics, extratropics, and midlatitude regions (Li et al., 2021; Vignesh et al., 2020); and for the latter, the CF over land is too small during the daytime and too large during the nighttime (G. Chen et al., 2022; G. Chen and Wang, 2016). As the shortwave cloud radiative effect (SWCRE) occurs only during the daytime while the longwave cloud radiative effect (LWCRE) persists throughout the daytime and nighttime, the biases in CF can affect not only the energy balance but also the energy diurnal variation, which may have implications for atmospheric variations of longer time scales (e.g., Ruppert, 2016; Slingo et al., 2003).

Current parameterization schemes of CF can be divided into diagnostic and prognostic approaches. In the diagnostic approach, the sub-grid CF is usually a function of the instantaneous grid-mean properties such as atmospheric stability, relative humidity, and/or cloud water content (e.g., Shiu et al., 2021; Sundqvist et al., 1989; Xu and Randall, 1996). These schemes are easy to implement and computationally efficient but may be too simple. On the other hand, the prognostic approach, which explicitly simulates the temporal variance of CF using source and sink terms associated with advection, cumulus convection, stratiform condensation, evaporation, and precipitation (Park et al., 2016; Tiedtke, 1993; Tompkins, 2002; Wilson et al., 2008), seems more physical. However, all source/sink terms are empirically related to the grid-mean properties, which are difficult to verify with observations. In addition, climate models usually employ different horizontal and vertical resolutions, which may affect the sub-grid statistical characteristics and thus the cloud-fraction parameterization.

In recent years, deep-learning methods have been demonstrated to be an effective alternative approach in atmospheric modeling (Chantry et al., 2021; Schultz et al., 2021). They can help reveal associations among atmospheric parameters by learning directly from data, even when prior domain knowledge is inadequate or absent. For example, the feedforward neural network (also called multiple-layer perceptron, hereafter denoted as neural network for short) is good at emulating complex functions and can be used to replace certain modules by either accounting for poorly-understood processes or saving computational cost.
It has been used to parameterize processes such as cloud cover (Grundner et al., 2022), radiation (Krasnopolsky and Fox-Rabinovitz, 2006), convection (Han et al., 2020; Rasp et al., 2018; T. Zhang et al., 2021), and boundary-layer turbulence (J. Wang et al., 2019). All yield promising outcomes. For the same reasons, networks with complex architectures have shown superior performance in building tools for specific predictions such as ENSO (Han et al., 2019; Noetboom et al., 2018), precipitation (G. Chen and Wang, 2022; Ravuri et al., 2021; Shi et al., 2015) and clouds (J. Zhang et al., 2018), and for data processing (Kim et al., 2021; Leinonen et al., 2021; Pan et al., 2019, 2021, 2022; Rasp and Lerch, 2018; Y. Zhang et al., 2021).

In this study, we introduce a neural network-based (NN-based) diagnostic scheme for simulating CF in climate models. This scheme has advantages in at least three aspects. First, using the neural network avoids the non-physical assumption of closed-form expressions in the parameterization; second, the data for developing the scheme were obtained from CloudSat observations of cloud profiles and the associated large-scale meteorology conditions, minimizing the data-oriented scheme biases; and third, the scale adaptivity (both horizontal and vertical) is considered while developing the scheme, so that the scheme has higher robustness across models with different resolutions and in models with varying resolutions such as the Model for Prediction Across Scales-Atmosphere (MPAS; Skamarock et al., 2012) and the Global-to-Regional Integrated forecast SysTem atmosphere model (GRIST; Y. Zhou et al., 2020).

The rest of this manuscript is arranged as follows: Section 2 describes the data preparation; Section 3 introduces the NN-based scheme and the conventional Xu-Randall scheme (Xu and Randall, 1996), the latter of which is used as a baseline for evaluating the NN-based scheme; Section 4 tests the NN-based scheme in aspects of accuracy and scale adaptivity within the context of a machine-learning study, while Section 5 evaluates the NN-based scheme with multiple-year CloudSat data using an offline application; and lastly, the summary and discussion are given in Section 6.

## 2 Data

Figure 1 presents the availability of CloudSat data at the time of our analysis. As CloudSat completes around 14.6 revolutions around Earth per day, there are around 5,000 granules of observations annually in normal years like 2007-2010 and 2013-2016. In 2011-2012 and 2017-2019, the CloudSat observation was interrupted occasionally due to reasons such as battery anomalies or orbit maneuvers ([https://cloudsat.atmos.colostate.edu/news/CloudSat_status](https://cloudsat.atmos.colostate.edu/news/CloudSat_status)), yielding fewer data in these years. Meanwhile, the granule amount is relatively larger in the boreal summer and autumn than in the winter and spring. This may imply that the data is less representative of cloud climatology in the winter and spring seasons, but it does not affect the robustness of our scheme. It is shown below that the available data are more than enough for the scheme development and evaluation.

### Data Preprocessing

Typical GCM grids have horizontal sizes (\(\Delta x\)) of 10s-100s km (much larger than the CloudSat horizontal resolution) and vertical sizes (\(\Delta z\)) of 0.1s-1s km (mostly larger than the CloudSat vertical resolution).
Thus, the CloudSat clouds can be considered as sub-grid clouds at GCM grids, and the CloudSat data can be upscaled/aggregated to emulate the atmospheric properties and CF simulated at GCM grids for the input and output of a cloud-fraction parameterization scheme. During upscaling, the "grid-mean" temperature, specific humidity, atmospheric pressure, and liquid/ice water contents are calculated by averaging the raw CloudSat data within the range of \(\Delta x\times\Delta z\), while the sub-grid CF is calculated as the horizontal cloud area fraction (i.e., each profile in the cloudy subarea must have one or more of the 240-m layers with CPR cloud mask \(\geq\)30 within \(\Delta x\times\Delta z\)). The definition of this sub-grid CF is consistent with that in most radiation parameterizations, and the resulting values are usually larger than the cloud volume fraction required by certain microphysical parameterizations (Grundner et al., 2022). More details of the upscaling process can be found in Y. Wang et al. (2023). In addition, to account for the different horizontal and vertical resolutions of GCMs, the upscaling is carried out for 42 different \(\Delta x-\Delta z\) combinations. Therein, \(\Delta x\) spans 40-100 km with an increment of 10 km while \(\Delta z\) spans 10-60 hPa with an increment of 10 hPa, covering most grid sizes that may be seen in current GCMs.

It is noticed that the amount of available CloudSat data is far more than enough for the scheme development. For example, upscaling the data in 2015 to the resolution of 100 km \(\times\) 20 hPa (hereafter x100z20 for short) can yield more than 10 million cloudy samples, which is more than sufficient for training a neural network with a simple architecture and relatively few trainable parameters (see Section 3.1). Therefore, to save training time, we randomly draw 100 granules from the data in 2015 when upscaling to each set of resolutions. The choice of 2015 is arbitrary, and using the data in any other year or multiple years does not affect the training results. Upscaling to 42 sets of resolutions yields more than 10 million cloudy samples in total. These samples are randomly split into training, validation, and test datasets (detailed in Section 3).

Figure 1: Year-by-year variation of CloudSat data amount used in this study. The colored segments indicate the portions of winter (December-January-February), spring (March-April-May), summer (June-July-August), and autumn (September-October-November), respectively.
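As an illustration, the upscaling of one grid box can be sketched in a few lines of NumPy under simplified assumptions (the array names and shapes are hypothetical; the real granule format and unit handling are omitted):

```python
import numpy as np

def upscale_grid_box(temperature, humidity, pressure, cloud_mask):
    """Upscale raw CloudSat profiles falling inside one GCM grid box.

    All inputs are (n_profiles, n_layers) arrays restricted to one
    dx-by-dz box; cloud_mask is the CPR cloud mask (cloudy if >= 30).
    Liquid/ice water contents would be averaged the same way.
    """
    grid_mean = {
        "T": temperature.mean(),
        "Qv": humidity.mean(),
        "P": pressure.mean(),
    }
    # Horizontal cloud area fraction: a profile counts as cloudy if any
    # of its 240-m layers inside the box has cloud mask >= 30.
    cloudy_profile = (cloud_mask >= 30).any(axis=1)
    cf = cloudy_profile.mean()
    return grid_mean, cf

# The paper repeats this for 42 resolution pairs: dx in 40-100 km
# (step 10 km) times dz in 10-60 hPa (step 10 hPa), i.e., 7 x 6 = 42.
```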
### Data Screening

In this study, we train an isolation-forest model for clouds of each phase (i.e., ice, mixed-phase, and liquid) using 100,000 samples randomly drawn from the CloudSat data upscaled at the resolution of x100z20 for the whole year of 2015. The inputs for the models are the sub-grid CF and the relative humidity for the respective phases, where the saturated water vapor mixing ratio for the mixed phase is set to the weighted mean of those over ice and liquid surfaces. The contamination ratio, a parameter setting the proportion of anomalous samples in the dataset, is set to 0.01 for clouds of all three phases.

Figure 2 compares the sample number distributions of the non-screened and screened datasets. The anomalous samples exist in clouds of all three phases (Figures 2a, 2c, and 2e). The screening only removes samples with large CF but small relative humidity and does not affect the remaining samples (Figures 2b, 2d, and 2f). To further demonstrate the effect of screening, we examine an example of the upscaled CloudSat observation by the granule No. 03307. As shown in Figure 3, many samples in the upper portion of the cloud layer have a sub-grid CF (numbers in the figure) larger than 0.5 and a relative humidity (colored shadings in the figure) lower than 0.2, and the screening successfully distinguishes these anomalous samples (marked with red strikethroughs).

Note that the data screening is only to remove samples that we consider to be unrealistic. The data screening certainly does not affect the resulting scheme, simply because the anomalous samples take a very small fraction of the CloudSat data (Figure 2). Likewise, increasing the contamination ratio also has little effect on the scheme results over the normal samples that dominate the data population. For example, setting the contamination ratio to 0.05 further reduces the number of samples having large CF at small relative humidity but does not cause any evident changes to the variation of CF with either relative humidity or cloud condensate mixing ratio (shown in Figures S1-S2 in Supporting Information S1).

Figure 2: Effect of the isolation-forest screening on the data population. The shadings indicate sample numbers in the upscaled CloudSat data at the resolution of x100z20 for the whole year of 2015. An isolation-forest model is trained for clouds of each phase (i.e., ice, mixed, and liquid). The inputs of the models are the relative humidity of the respective phases and the sub-grid cloud fraction, and the contamination ratio is set to 0.01.

Figure 3: Effect of the data screening on data samples, demonstrated using the CloudSat observation at 12\({}^{\circ}\)-28\({}^{\circ}\)N by the granule No. 03307 on 11 December 2018, upscaled at x100z20. Numbers in the figure indicate the observed sub-grid cloud fraction, colored shadings indicate relative humidity over the liquid surface, and gray shadings indicate no valid observations. The detected anomalous samples are marked with red strikethroughs.
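This screening step maps directly onto scikit-learn's IsolationForest; the sketch below follows the stated settings (two-dimensional inputs per phase, 100,000 training samples, contamination ratio 0.01), with all array names as placeholders:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

def screen_phase(cf, rh, contamination=0.01, n_train=100_000, seed=0):
    """Flag anomalous (CF, relative-humidity) samples for one cloud phase."""
    x = np.column_stack([cf, rh])
    rng = np.random.default_rng(seed)
    train_idx = rng.choice(len(x), size=min(n_train, len(x)), replace=False)
    forest = IsolationForest(contamination=contamination, random_state=seed)
    forest.fit(x[train_idx])
    keep = forest.predict(x) == 1   # +1 = normal, -1 = anomalous
    return keep

# One model per phase (ice, mixed, liquid), each with its own relative
# humidity definition, e.g.:
# keep_ice = screen_phase(cf_ice, rh_ice)
```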
## 3 Scheme Description

This section describes the neural network architecture and the algorithm of the Xu-Randall scheme. The database is the upscaled CloudSat data for 42 sets of resolutions (Section 2.2). The training, validation, and test datasets include around 5.4, 1.8, and 1.8 million samples, respectively. The former two datasets are used for the network-based scale-adaptive (NSA) scheme development and the Xu-Randall scheme tuning, while the test dataset is used for the scheme testing in Section 4.

### Architecture of the Neural Network-Based Scale-Adaptive Cloud-Fraction Scheme

Figure 4 presents the architecture of the NN-based scale-adaptive (NSA for short) cloud-fraction scheme (Figure 4a). The network is composed of one input layer, a tunable number \(n\) of hidden layers, one output layer, and ReLU activation layers (not shown in the figure) following the input layer and the hidden layers. The neuron counts in the hidden layers (\(N_{hi}\), \(i=1,2,\ldots,n\)) are also tunable hyperparameters. The loss function is the mean square error.

The input layer consists of eight variables: air pressure (\(P\)), air temperature (\(T\)), liquid/ice water mixing ratios (\(Q_{L}/Q_{I}\)), relative humidity over liquid/ice surfaces (\(R_{L}/R_{I}\), calculated with the upscaled atmospheric pressure, temperature, and specific humidity), and the horizontal/vertical grid sizes (\(\Delta x/\Delta z\)) of the host GCM. Therein, the first six variables are chosen because of their close association with cloud formation based on domain knowledge. They are also used in the Xu-Randall scheme, making it fair to compare results from the NSA and the Xu-Randall schemes in the following sections. \(\Delta x\) and \(\Delta z\) are to implicitly include the effects of grid sizes on the sub-grid cloud 3D structure (e.g., cloud sizes and vertical overlap) and thus the sub-grid cloud cover. As shown below, this provides the scheme with the flexibility of use at a variety of GCM resolutions.

We are aware that the input size could be reduced by replacing \(R_{L}\) and \(R_{I}\) with the specific humidity (\(Q_{v}\)). However, using \(Q_{v}\) places a larger burden on the neural network, and the resulting network would need more hidden layers or more neurons to reach a training accuracy close to that of the network using \(R_{L}\) and \(R_{I}\). Thus, using \(R_{L}\) and \(R_{I}\) is a choice made for computational efficiency. The output layer has only one neuron, that is, the sub-grid cloud fraction (CF). We utilize one single network to parameterize the CF of different phases for simplicity. As the CF variation with \(Q_{C}\) and relative humidity is sensitive to the cloud phase (Y. Wang et al., 2023; also shown below in Figure 6), using separate networks for clouds of different phases may yield better results. However, using multiple networks increases the coding difficulty and the memory cost for storing the network parameters. Thus, the multiple-network approach is not taken in this study.

The hyperparameters are determined by minimizing the validation error. We have tested the neuron number in each hidden layer (8, 16, 32, 64, 128, 256, 512), the optimizer (Adam vs. SGD), the learning rate (0.1, 0.01, 0.001), the batch size (256, 512, 1,024) and the activation function (Sigmoid vs. ReLU) when preparing this manuscript. In the end, we set \(n\) to 2, and \(N_{h1}\) and \(N_{h2}\) both to 128. The network has 17,793 trainable parameters and yields a root-mean square error (RMSE) of 0.130 over the validation dataset. Below, the scheme testing and the offline application of the NSA scheme are both based on this setting.

Figure 4: Architecture of the neural network-based scale-adaptive cloud-fraction parameterization. The output of the network is bounded to 0-1 before the analysis.
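The final configuration is small enough to write down explicitly. The following PyTorch sketch is consistent with the description above (though it is not the released implementation) and reproduces the stated count of 17,793 trainable parameters:

```python
import torch
import torch.nn as nn

# Inputs: P, T, Q_L, Q_I, R_L, R_I, dx, dz  ->  output: sub-grid CF
nsa = nn.Sequential(
    nn.Linear(8, 128), nn.ReLU(),    # hidden layer 1
    nn.Linear(128, 128), nn.ReLU(),  # hidden layer 2
    nn.Linear(128, 1),               # one output neuron
)
loss_fn = nn.MSELoss()

n_params = sum(p.numel() for p in nsa.parameters())
print(n_params)  # 8*128+128 + 128*128+128 + 128*1+1 = 17793

# At analysis time the raw output is clipped to [0, 1], as noted in the
# Figure 4 caption:
# cf = nsa(x).clamp(0.0, 1.0)
```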
### Tuned Xu-Randall Scheme

The Xu-Randall scheme is a conventional diagnostic cloud-fraction parameterization scheme for use in climate models. It is the default cloud-fraction scheme in the Weather Research and Forecast (WRF) Model and has been used in many weather and climate studies (e.g., G. Chen et al., 2021; G. Chen, Yang, et al., 2018; Song et al., 2019; Yang et al., 2018). The scheme is also employed in the FGOALS-f3 GCM developed by the Institute of Atmospheric Physics, Chinese Academy of Sciences (L. Zhou et al., 2015). We obtained its code from the WRF Model Version 3.7.1 and use it as a baseline for evaluating the NSA scheme. Below we briefly introduce the Xu-Randall scheme, while more details can be found in the paper by Xu and Randall (1996).

The Xu-Randall scheme was built based on the simulations of a cloud ensemble model using a curve-fitting approach. It assumes the sub-grid CF is a function of the grid-mean relative humidity (\(R\)) and cloud condensate mixing ratio (\(Q_{C}\), i.e., \(Q_{I}\) for ice clouds, \(Q_{L}\) for liquid clouds, and \(Q_{I}+Q_{L}\) for mixed-phase clouds). The function formula is

\[\text{CF}=R^{\beta}\left[1-\exp\left(-\frac{\alpha Q_{C}}{\left[(1-R)Q^{*}\right]^{\gamma}}\right)\right],\quad\text{if }R<1\tag{1}\]

\[\text{CF}=1,\quad\text{if }R\geq 1\tag{2}\]

where \(Q^{*}\) is the grid-mean water vapor saturation mixing ratio, and \(\alpha=100\), \(\beta=0.25\), and \(\gamma=0.49\) are empirical parameters determined through curve-fitting based on the simulation dataset.

As the Xu-Randall scheme was built on a completely different dataset than the NSA scheme, it would be unfair to expect the scheme to produce results close to the CloudSat observations. Therefore, we have tuned the scheme using the same procedure and the same datasets (i.e., the aforementioned training and validation datasets) that are used for training the NSA scheme, yielding \(\alpha=70.3378\), \(\beta=0.0507\), and \(\gamma=0.6315\). Below we mainly present results from the tuned Xu-Randall scheme; results of the original Xu-Randall scheme are also included in Supporting Information S1 for interested readers.
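For reference, Equations 1 and 2 with the re-tuned coefficients translate directly into code; below is a minimal NumPy sketch assuming 1-D array inputs:

```python
import numpy as np

def xu_randall_cf(r, qc, qstar, alpha=70.3378, beta=0.0507, gamma=0.6315):
    """Tuned Xu-Randall diagnostic cloud fraction (Equations 1 and 2).

    r: grid-mean relative humidity; qc: cloud condensate mixing ratio
    (Q_I, Q_L, or Q_I + Q_L depending on the phase); qstar: grid-mean
    saturation water vapor mixing ratio (same units as qc). Defaults are
    the coefficients re-fitted to the CloudSat database; the original
    scheme uses alpha=100, beta=0.25, gamma=0.49.
    """
    r, qc, qstar = map(np.asarray, (r, qc, qstar))
    cf = np.ones_like(r, dtype=float)   # Equation 2: CF = 1 if R >= 1
    m = r < 1.0                         # Equation 1 otherwise
    cf[m] = r[m] ** beta * (
        1.0 - np.exp(-alpha * qc[m] / ((1.0 - r[m]) * qstar[m]) ** gamma)
    )
    return cf
```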
## 4 Scheme Testing

This section evaluates the NSA scheme using the test dataset within the context of a machine-learning study. Results of the cloud spatial and temporal characteristics predicted by the scheme are given in Section 5.

### Accuracy

Figure 5 presents the joint probability density distribution of CFs from the scheme predictions and the CloudSat observations for clouds of different phases, estimated based on the test dataset. The NSA scheme is shown to well predict CFs for clouds of all three phases and has relatively larger biases (underestimation) when the observed CF is close to 1 (Figures 5a, 5c, and 5e). This could suggest the effect of the data imbalance. However, we found that balancing the CF frequency distribution by undersampling the training data could not much improve the results. This implies that the large-CF samples may have larger uncertainties or larger inherent inconsistencies, making them difficult for the scheme to learn. It is noticed that the correlation coefficient (\(r\), larger is better) is the largest for the mixed-phase clouds, while the RMSE (smaller is better) is the smallest for the liquid clouds. The two metrics are not consistent for two reasons. First, the fraction of large-CF samples is much larger in ice and mixed-phase clouds than in liquid clouds. Large-CF samples contribute more to the RMSE than small-CF samples, causing the relatively larger RMSE in ice and mixed-phase clouds. Second, the CloudSat data for liquid clouds might have large uncertainties. Liquid clouds are mostly situated in the lower troposphere, where the atmospheric properties are dominated by turbulence and difficult to simulate by GCMs. Therefore, air temperature and humidity from the ECMWF-AUX may have large uncertainties in the lower troposphere, causing the smaller \(r\) for the CF prediction of liquid clouds.

In contrast, the tuned Xu-Randall scheme yields a larger RMSE and a smaller correlation coefficient for clouds of all three phases. The biases over ice clouds are the largest, where the RMSE is 0.366, more than twice that of the NSA scheme, and the correlation coefficient is less than 0.6. It is noticed that the original Xu-Randall scheme reaches better results over the liquid clouds but worse results over the mixed and ice clouds (shown in Figure S3 in Supporting Information S1) than the tuned one. This indicates the tuning process has opposite effects on the scheme performance over liquid versus mixed and ice clouds, and suggests that the sub-grid statistics could be sensitive to the cloud phases. Moreover, even when we tune the scheme individually for each upscaled resolution, the scheme still exhibits obvious inferiorities to the NSA scheme at all resolutions (shown in Figure S4 in Supporting Information S1).

Figure 5: Joint probability density distribution between cloud fractions from the scheme predictions and the CloudSat observations for clouds of different phases (ice-only, mixed, and liquid-only), estimated based on the test dataset: (left) results from the network-based scale-adaptive (NSA) scheme; and (right) results from the tuned Xu-Randall scheme. The lower-right numbers indicate the correlation coefficient (\(r\)) and root-mean square error (RMSE) of the predictions with respect to the observations for each type of clouds.

Figure 6 shows the variation of CF with relative humidity and cloud condensate mixing ratio from the CloudSat observations and the scheme predictions. In the observations, the ice CF is dominated by \(R\) (Figure 6a), the liquid CF is dominated by \(Q_{C}\) (Figure 6i), and the mixed CF is sensitive to both factors (Figure 6e). The NSA scheme reproduces these features, although it may underestimate CF under some conditions; with \(Q_{C}\) unchanged, the CF variation with \(R\) is non-monotonic in both the observations and the prediction (Figure 6e vs. Figure 6f). Nevertheless, the occurrence frequency of the associated samples (i.e., samples above the black contours in Figures 6d, 6h, and 6l) is very low. The tuned Xu-Randall scheme prediction presents similar patterns for clouds of the three phases, in line with the same formula (i.e., Equations 1 and 2) being used across different cloud phases. The patterns for liquid and mixed clouds (Figures 6g and 6k) are close to the CloudSat observations, consistent with the relatively larger correlation coefficients and smaller RMSEs in Figures 5d and 5f. However, the pattern for ice clouds (Figure 6c) is quite different from the observed one, showing the scheme underestimates CF for \(Q_{C}<50\) mg kg\({}^{-1}\) and overestimates CF for \(R>1\). Results for the original Xu-Randall scheme are similar to the tuned one (figure not shown). Neither the original nor the tuned Xu-Randall scheme can capture the non-monotonic variation of CF with \(R\), which is not included in the parameterization formula.

### Scale Adaptivity

To examine whether considering \(\Delta x\) and \(\Delta z\) improves the NSA scheme performance, we take two additional NN-based schemes as baselines: N and Nx100z20. Both schemes have the same network architecture as the NSA scheme except for excluding \(\Delta x\) and \(\Delta z\) from the input layer. The N scheme is trained based on the same dataset as the NSA scheme, while the Nx100z20 is trained based on the upscaled CloudSat data at the resolution of x100z20 for the whole year of 2015 (no random drawing), where the sample amount is similar to that of the database for training the NSA scheme. Below we compare the three NN-based schemes based on the same test dataset that is used above in Figures 5 and 6.

Figure 7 compares the RMSE of the three NN-based schemes for samples with different horizontal and vertical resolutions.
In the NSA scheme, the RMSE is not sensitive to \(\Delta x\) or \(\Delta z\) for ice clouds (Figure 7a), slightly sensitive to \(\Delta z\) for mixed clouds (Figure 7d), and sensitive to both resolutions for liquid clouds (Figure 7g), where the RMSE tends to increase with \(\Delta z\) and decrease with \(\Delta x\). When \(\Delta x\) and \(\Delta z\) are excluded from the input layer, the N scheme has larger RMSEs for clouds of all three phases, especially the ice and mixed clouds (Figures 7b and 7e). As the deficiency of the N scheme may be attributed to the inherent inconsistency of the training database associated with different resolutions, we further examine the results of the Nx100z20, where the inherent consistency of the database is warranted. However, it shows that the RMSEs of the Nx100z20 scheme are even larger (Figures 7c, 7f, and 7i), and close to those of the NSA scheme only when \(\Delta x\) is the largest and \(\Delta z\) is the smallest. Therefore, we can conclude that both horizontal and vertical resolutions affect the sub-grid statistics of CF, and that including \(\Delta x\) and \(\Delta z\) in the scheme greatly increases the scale adaptivity of the NSA scheme, leading to higher robustness for use in GCMs with different horizontal and vertical resolutions.

Figure 6: Comparisons of cloud fraction as a function of cloud condensate mixing ratio and relative humidity between the CloudSat observations (a, e, and i), the network-based scale-adaptive (NSA) scheme (b, f, and j), and the tuned Xu-Randall scheme (c, g, and k) for clouds of different phases (ice-only, mixed, and liquid-only), estimated based on the test dataset. The right column (d, h, and l) indicates the joint probability density distribution of respective cloud types in the test dataset. Black lines indicate contours of the probability density of 0.001 and 0.0001, below which lie more than 88% and 97% of the corresponding sample populations, respectively.

Figure 8 presents the mean CF at different horizontal and vertical resolutions from the CloudSat observations and the scheme predictions to show more details of how \(\Delta x\) and \(\Delta z\) may affect the CF parameterization. In the CloudSat observations (Figures 8a, 8e, and 8i), the CF tends to increase with \(\Delta z\) and decrease with \(\Delta x\). The NSA scheme well captures this feature, and the biases are less than 0.01 at all resolutions (Figures 8b, 8f, and 8j). In contrast, the biases of the N scheme are larger than 0.01 for most resolutions and can have values up to 0.09. Meanwhile, the biases are generally symmetric about the diagonal, that is, negative in the upper left and positive in the lower right. This is not a surprise. The scheme does not consider the resolution variability in the training database, and consequently the scheme biases tend to be the smallest in the middle of the resolution ranges and increase when the resolutions go toward either end. When we remove the inherent inconsistency associated with resolutions from the training database, the Nx100z20 scheme only shows better performance for samples with resolutions close to x100z20 (i.e., \(\Delta x\) in 80-100 km and \(\Delta z\) in 10-20 hPa). The bias increases rapidly with the decrease of \(\Delta x\) and the increase of \(\Delta z\), with the largest value of 0.14. Hence, it is further confirmed that the NSA scheme has good scale adaptivity.
Figure 7: Comparisons of root-mean square error (RMSE) at different horizontal and vertical resolutions between the predictions of three neural network-based schemes, estimated based on the test dataset. Both N and Nx100z20 schemes have the same architecture as the network-based scale-adaptive (NSA) scheme except for excluding \(\Delta x\) and \(\Delta z\) from the input layer. The N scheme is trained with the same database as the NSA scheme, while the Nx100z20 scheme is trained based on the upscaled CloudSat data at the resolution of x100z20 for the whole year of 2015.

Figure 8: Comparisons of mean cloud fraction at different horizontal (\(\Delta x\)) and vertical (\(\Delta z\)) resolutions between the CloudSat observations and the predictions of three neural network-based schemes, estimated based on the test dataset. The numbers indicate the prediction-observation differences (prediction minus observation) with absolute values larger than 0.01.

## 5 Offline Application

This section compares the climatology of cloud spatial and temporal variability simulated by the NSA scheme with those from the CloudSat observations and the tuned Xu-Randall scheme using an offline method. The NSA and the tuned Xu-Randall schemes both take the upscaled CloudSat data at the resolution of x100z20 for the period of 2006-2019 as inputs to make the respective predictions. We first get an intuitive understanding of the scheme biases (Figure 9), then examine the generalizability of the NSA scheme (Figure 10), and last assess the scheme performance based on 2006-2019 multiple-year mean results (Figures 11-13). We take the offline mode rather than incorporating the two schemes into a host model to exclude the distractions that could be caused by other model components (e.g., the simulated cloud condensate mixing ratios in the host model may deviate too much from the observations, Jiang et al., 2012) and possible feedbacks. Hence, the scheme-observation discrepancies can be mostly attributed to the scheme deficiencies, and the inter-scheme differences are not blurred by the possible compensating biases within the host model.

Figure 9 presents the observed and modeled cloud fractions in a CloudSat granule to get a better understanding of the data upscaling and an intuitive view of the scheme biases. In the raw CloudSat data (Figure 9a), a binary CF is determined by the CPR cloud mask: 1 for CPR cloud mask \(\geq\)30 and 0 otherwise. When upscaled (Figure 9b), the sub-grid CF has decimal values within 0-1. The upscaled CF may not have valid values at certain grids (indicated by light gray in Figure 9a) due to two reasons. First, the raw CloudSat does not have valid estimates for liquid or ice cloud water content; and second, none of the raw CloudSat data falls in the range of \(\Delta z\) because the pressure change in 240 m exceeds \(\Delta z\) (this situation occurs mainly in the lower atmosphere where the vertical gradient of atmospheric pressure is large). The NSA and the tuned Xu-Randall schemes both well predict the general distribution of the sub-grid CF (Figures 9c and 9d) but exhibit different characteristics in the scheme-observation discrepancies. The NSA scheme tends to underestimate CF when CF is close to 1 (consistent with results in Figures 5 and 6), while the Xu-Randall scheme overestimates CF at many grids (particularly the high-level CF that are mostly ice clouds) because of its treatment of CF at supersaturation. The Xu-Randall scheme assumes that CF equals 1 whenever the grid-mean relative humidity reaches saturation (Equation 2), whereas the observed sub-grid CF can remain below 1 at such grids.

Figure 9: CloudSat observed cloud fraction (CF) at 60°-79°N by the granule No. 66364 on 13 October 2018 (a and b) compared with those from the network-based scale-adaptive (NSA) (c) and the tuned Xu-Randall (d) schemes. The dark gray regions at lower levels indicate the surface terrain. In (a), the light blue indicates overcast cloud coverage (cloud profiling radar cloud mask equals 30 or 40), in which the light-gray regions are grids where the CloudSat does not have a valid observation for one or more of the grid-mean properties that are required for parameterizing the sub-grid CF.
Figure 10: Cloud-fraction root-mean square error (RMSE) year-by-year variation of the network-based scale-adaptive (NSA) and the tuned Xu-Randall scheme predictions with respect to the CloudSat observations. Only cloudy grids are included in the calculation.

Figure 12 presents the zonal-mean vertical structure of CF from the observations and the scheme predictions. In the tuned Xu-Randall scheme, the CF overestimation in the tropics is associated with the too-large high-level CF (clouds above 440 hPa), while the CF overestimation in regions around 60\({}^{\circ}\)S and 60\({}^{\circ}\)N is due to the too-large low-level CF (clouds below 680 hPa). The original Xu-Randall scheme overestimates the high-level CF in the tropics as well but underestimates the low-level CF in regions around 60\({}^{\circ}\)S and 60\({}^{\circ}\)N (Figure S5b in Supporting Information S1). The tuning causes larger CF at all levels and yields better mid-level CF but worse low-level CF, which is consistent with the results on mixed and liquid clouds shown in Figure 5 and Figure S3 in Supporting Information S1. The NSA scheme predicts smaller high-level CF in the tropics and smaller mid- and low-level CFs in regions around 60\({}^{\circ}\)S and 60\({}^{\circ}\)N, and reaches a larger spatial correlation (0.998) and a smaller RMSE (0.006) than both the original and tuned Xu-Randall schemes.

Figure 13 further examines the seasonal variations of the cloud vertical structure at different latitudes. In the observations, the cloud vertical structure varies with the seasonal migration at all latitudes. While both schemes can predict the general variation, the NSA scheme shows better results (i.e., larger correlation coefficient and smaller RMSE) than the original (Figure S6 in Supporting Information S1) and tuned Xu-Randall schemes in all seasons and all latitudes.
The utility of the scheme in GCMs shows promising features in accuracy, scale adaptivity, and computational efficiency. In the case of comparing with the Xu-Randall parameterization, the NN-based scheme simulates more-realistic total CF spatial distribution and cloud vertical structure. Particularly, the biases of too-large high-level CF over the tropics and too-small low-level CF over regions around 60\({}^{\circ}\)S and 60\({}^{\circ}\)N in the original Xu-Randall scheme, which agrees with the too-strong LWCRE and too-weak SWCRE over the respective regions as shown in current GCMs (G. Chen et al., 2022; Flato et al., 2013; Schuddeboom and McDonald, 2021), are much eased. We are aware that the grid-mean atmospheric properties in GCMs could differ markedly from the CloudSat data (e.g., Jiang et al., 2012), hindering the above findings from being fully justified in GCMs. Hence, one of our undergoing studies is to further evaluate the NSA scheme using the ERA-Interim reanalysis (Dee et al., 2011), where the grid-mean relative humidity is assimilated with observations while the cloud condensates are fully model simulated. The preliminary results show that although cloud condensate mixing ratios in the reanalysis are much smaller than the CloudSat observations, using the NSA scheme still yields better mid- and high-level cloud fractions than the original Xu-Randall scheme (figure not shown). Therefore, it is believed that incorporating the NN-based scheme has the potential to improve the cloud radiative effects and the energy budget in GCMs. One important aspect of CF is the vertical overlap, which has significant implications to cloud radiative effects (e.g., Liang and Wang, 1997; X. Wang et al., 2021; H. Zhang and Jing, 2016). As shown by F. Zhang et al. (2013), the inter-GCM spreads in cloud radiative effects can be largely attributed to the different treatments of cloud vertical overlap. In our scheme, this aspect, at least in the sub-grid scale, is implicitly considered, which reduces the simulation biases in CF (shown in Figures 7 and 8). Therefore, further investigation of the grid-scale cloud vertical overlap using similar approaches is plausible and warranted. Figure 11: 2006-2019 mean global distribution of total cloud fraction (CF) (assuming maximum-random overlap) from the CloudSat observation, the network-based scale-adaptive (NSA) scheme, and the tuned Xu-Randall scheme. Numbers at the upper right corners of (b) and (c) indicate the correlation coefficient (\(r\); before the (slash) and root-mean square error (after the slash) of the respective scheme predictions with respect to the CloudSat observations. The global-mean total CF is 0.46, 0.43, and 0.52 for the CloudSat observations, the NSA, and the tuned Xu-Randall scheme predictions, respectively. Figure 12: 2006–2019 averaged zonal-mean vertical distribution of cloud fraction from the CloudSat observations, the network-based scale-adaptive (NSA), and the tuned Xu-Randall scheme predictions. Numbers in (b) and (c) indicate the correlation coefficient (\(r\)) and root-mean square error of the respective scheme predictions with respect to the CloudSat observations. Dashed lines indicate the heights of 440 and 680 hPa, respectively. Figure 13: Annual variation of cloud vertical distribution averaged at different latitudes from the CloudSat observations (left), the network-based scale-adaptive (NSA) (middle), and the tuned Xu-Randall (right) scheme predictions. 
Numbers in the middle and right columns indicate the correlation coefficient (\(r\)) and root-mean square error (RMSE) of the respective scheme predictions with respect to the CloudSat observations. Dashed lines indicate the heights of 440 and 680 hPa, respectively.

## Appendix A Calculation of Root-Mean Square Error and Correlation Coefficient

The root-mean square error (RMSE) and correlation coefficient (\(r\)) of a prediction with respect to the observation are calculated as follows:

\[\text{RMSE}=\sqrt{\sum_{i=1}^{M}\left(\text{pred}_{i}-\text{obs}_{i}\right)^{2}/M}\tag{A1}\]

\[r=\frac{\sum_{i=1}^{M}\left(\text{pred}_{i}-\overline{\text{pred}}\right)\left(\text{obs}_{i}-\overline{\text{obs}}\right)}{\sqrt{\sum_{i=1}^{M}\left(\text{pred}_{i}-\overline{\text{pred}}\right)^{2}}\sqrt{\sum_{i=1}^{M}\left(\text{obs}_{i}-\overline{\text{obs}}\right)^{2}}}\tag{A2}\]

Therein, pred and obs stand for the prediction and observation datasets, respectively, \(i\) for the sample index, and \(M\) for the sample count.

## Data Availability Statement

The neural network was implemented using the PyTorch application programming interface (Paszke et al., 2019). The codes for upscaling CloudSat data, scheme training, and result analysis, together with the databases for training the NSA and Nx100z20 schemes, are preserved in G. Chen and Wang (2023).
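For completeness, the Appendix A metrics translate directly into NumPy (a minimal sketch):

```python
import numpy as np

def rmse(pred, obs):
    """Root-mean square error, Equation A1."""
    pred, obs = np.asarray(pred), np.asarray(obs)
    return np.sqrt(np.mean((pred - obs) ** 2))

def corr(pred, obs):
    """Pearson correlation coefficient, Equation A2."""
    pred, obs = np.asarray(pred), np.asarray(obs)
    dp, do = pred - pred.mean(), obs - obs.mean()
    return np.sum(dp * do) / np.sqrt(np.sum(dp ** 2) * np.sum(do ** 2))
```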
2301.05219
Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Pruning
The state of neural network pruning has been noticed to be unclear and even confusing for a while, largely due to "a lack of standardized benchmarks and metrics" [3]. To standardize benchmarks, first, we need to answer: what kind of comparison setup is considered fair? This basic yet crucial question has barely been clarified in the community, unfortunately. Meanwhile, we observe several papers have used (severely) sub-optimal hyper-parameters in pruning experiments, while the reason behind them is also elusive. These sub-optimal hyper-parameters further exacerbate the distorted benchmarks, rendering the state of neural network pruning even more obscure. Two mysteries in pruning represent such a confusing status: the performance-boosting effect of a larger finetuning learning rate, and the no-value argument of inheriting pretrained weights in filter pruning. In this work, we attempt to explain the confusing state of network pruning by demystifying the two mysteries. Specifically, (1) we first clarify the fairness principle in pruning experiments and summarize the widely-used comparison setups; (2) then we unveil the two pruning mysteries and point out the central role of network trainability, which has not been well recognized so far; (3) finally, we conclude the paper and give some concrete suggestions regarding how to calibrate the pruning benchmarks in the future. Code: https://github.com/mingsun-tse/why-the-state-of-pruning-so-confusing.
Huan Wang, Can Qin, Yue Bai, Yun Fu
2023-01-12T18:58:33Z
http://arxiv.org/abs/2301.05219v2
# Why is the State of Neural Network Pruning _so Confusing_? On the Fairness, Comparison Setup, and Trainability in Network Pruning

###### Abstract

The state of neural network pruning has been noticed to be unclear and even confusing for a while, largely due to "a lack of standardized benchmarks and metrics" [3]. To standardize benchmarks, first, we need to answer: _what kind of comparison setup is considered fair?_ This basic yet crucial question has barely been clarified in the community, unfortunately. Meanwhile, we observe several papers have used (severely) sub-optimal hyper-parameters in pruning experiments, while the reason behind them is also elusive. These sub-optimal hyper-parameters further exacerbate the distorted benchmarks, rendering the state of neural network pruning even more obscure. Two mysteries in pruning represent such a confusing status: the performance-boosting effect of a larger finetuning learning rate, and the no-value argument of inheriting pretrained weights in filter pruning. In this work, we attempt to explain the confusing state of network pruning by demystifying the two mysteries. Specifically, (1) we first clarify the fairness principle in pruning experiments and summarize the widely-used comparison setups; (2) then we unveil the two pruning mysteries and point out the central role of **network trainability**, which has not been well recognized so far; (3) finally, we conclude the paper and give some concrete suggestions regarding how to calibrate the pruning benchmarks in the future. Code: [https://github.com/mingsun-tse/why-the-state-of-pruning-so-confusing](https://github.com/mingsun-tse/why-the-state-of-pruning-so-confusing).

## 1 Introduction

The past decade has witnessed the great success of deep learning, empowered by deep neural networks [39, 70]. The success comes at the cost of over-parameterization [5, 12, 27, 32, 33, 37, 63, 64, 68, 71, 77], causing prohibitive model footprint, slow inference/training speed, and rapidly growing energy consumption. Neural network pruning, an old model parameter reduction technique that used to be employed for improving generalization [66, 2, 6], now is mostly used for model size compression and/or speed acceleration [7, 8, 11, 21, 75, 31].

The prevailing pipeline of pruning comprises three steps: 1) **pretraining**: train a dense model; 2) **pruning**: prune the dense model based on certain rules; 3) **finetuning**: retrain the pruned model to regain performance. Most existing research focuses on the second step, seeking better criteria to remove unimportant weights so as to incur as little performance degradation as possible. This 3-step pipeline has been practiced for more than 30 years [2, 6, 40, 58] and is still extensively adopted in today's pruning methods [31, 75].

Despite the long history of network pruning, recent progress seems a little bit unclear.
As we quote from the abstract of a recent paper [3]: _"After aggregating results across 81 papers and pruning hundreds of models in controlled conditions, our clearest finding is that the community suffers from a lack of standardized benchmarks and metrics. This deficiency is substantial enough that it is hard to compare pruning techniques to one another or determine how much progress the field has made over the past three decades"_. We also sense this kind of confusion. Particularly, two mysteries in the area represent such confusion:

* **Mystery 1 (M1): The performance-boosting effect of a larger finetuning learning rate (LR)**. It _was_ broadly believed that finetuning the pruned model should use a _small_ LR, _e.g._, in the famous \(L_{1}\)-norm pruning [43], finetuning LR 0.001 is used for the ImageNet experiments. However, many other papers choose a larger LR, _e.g._, 0.01, which delivers significantly better performance than 0.001. For a pretty long time, pruning papers did not officially investigate this critical performance-boosting effect of a larger finetuning LR (although they may have _already used_ it in their experiments; see Tab. 3), until this paper [38]. [38] formally studies how different finetuning LR schedules affect the final performance. They find that random pruning and magnitude pruning (the two most basic pruning methods), armed with a good finetuning LR schedule (CLR, or Cyclic Learning Rate Restarting), can counter-intuitively rival or even surpass other more sophisticated pruning algorithms. Unfortunately, they do not give explanations for this phenomenon except for calling upon comparing pruning algorithms under the same retraining configurations.

* **Mystery 2 (M2): The value of network pruning**. The value of network pruning seems unquestionable given the development history of over 30 years.
However, two papers [9, 51] empirically find that training the pruned model from scratch can match pruning a pre-trained model, thus radically challenging the necessity of pretraining a big model first in the conventional 3-step pruning pipeline.

\begin{table}
\begin{tabular}{l c c}
\hline \hline
Method & Pruned acc. (\%) & Speedup \\
\hline
SFP [29], IJCAI'18 & 74.61 & 1.72\(\times\) \\
DCP [90], NeurIPS'18 & 74.95 & 2.25\(\times\) \\
GAL-0.5 [48], CVPR'19 & 71.95 & 1.76\(\times\) \\
Taylor-FO [56], CVPR'19 & 74.50 & 1.82\(\times\) \\
CCP-AC [62], ICML'19 & 75.32 & 2.18\(\times\) \\
ProvableFP [46], ICLR'20 & 75.21 & 1.43\(\times\) \\
HRank [47], CVPR'20 & 74.98 & 1.78\(\times\) \\
GReg-1 [81], ICLR'21 & 75.16 & 2.31\(\times\) \\
GReg-2 [81], ICLR'21 & 75.36 & 2.31\(\times\) \\
CC [45], CVPR'21 & **75.59** & 2.12\(\times\) \\
\(L_{1}\)-norm [43], ICLR'17 (**our reimpl.**) & 75.24 & **2.31\(\times\)** \\
\hline
GAL-1 [48], CVPR'19 & 69.88 & 2.59\(\times\) \\
Factorized [44], CVPR'21 & 74.55 & 2.33\(\times\) \\
LFPC [28], CVPR'20 & 74.46 & 2.55\(\times\) \\
GReg-1 [81], ICLR'21 & 74.85 & 2.56\(\times\) \\
GReg-2 [81], ICLR'21 & **74.93** & 2.56\(\times\) \\
CC [45], CVPR'21 & 74.54 & **2.68\(\times\)** \\
\(L_{1}\)-norm [43], ICLR'17 (**our reimpl.**) & 74.77 & 2.56\(\times\) \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Top-1 accuracy (%) benchmark of filter pruning with **ResNet50** [27] on **ImageNet** [10]. Simply by using a better finetuning LR schedule, we manage to revive a _6-year-old_ baseline filter pruning method, \(L_{1}\)_-norm pruning_ [43], making it _match or beat_ many filter pruning papers published in recent top-tier venues. Note, we achieve this simply by using the common step-decay LR schedule, 90-epoch finetuning, and standard data augmentation, with _no_ advanced training recipe (like cosine annealing LR) used. This paper studies the reasons and lessons behind this pretty confounding benchmark situation in filter pruning.

Tab. 1 shows a concrete example that M1 makes the pruning benchmarks unclear. After using an improved finetuning LR schedule (see Tab. 3), we make \(L_{1}\)-norm pruning [43], which is broadly considered the _most basic baseline_ approach, _match or beat_ many top-performing pruning methods published in top-tier conferences in the past several years. Such a situation really bewilders us, especially those outside this area trying to borrow the most advanced pruning methods for their projects - _"Has the area really developed in the past several years?"_ they might question.

This paper is meant to unveil these mysteries. Over this process, we hope to offer some helpful thoughts about why we have run into the current chaotic benchmark situation and how we can avoid it in the future. Specifically, when we try to unveil the mysteries, we find there are various comparison setups enforced in the area, and many of the conclusions actually _hinge on which comparison setup is used_. To decide which comparison setup is more trustworthy, we have to be clear with the _fairness principle_ in pruning experiments first (_i.e._, what kind of comparison setup is considered fair?). After we sort out the fairness principle and comparison setups, we empirically examine the two mysteries and find M2 reduces to M1. When examining M1, we find the role of _network trainability_ in network pruning, through which we can easily explain M1. Therefore, our investigation path and the contributions in this paper can be summarized as follows.

* We first clarify the fairness principle in pruning experiments and summarize the outstanding comparison setups (Sec. 3), which have been unclear in the literature for a long time.

* Then, we start to unveil M1 [38] and M2 [51]. As we shall show (Sec. 4), the conclusion of M2 actually varies with which comparison setup is used: if a larger finetuning LR is allowed to be used, the no-value-of-pruning argument cannot hold; otherwise, it mostly holds. Thus, to unveil M2, we have to unveil M1 first.

* Next, we unveil M1 by introducing network trainability as the key analysis perspective - to our best knowledge, we are the _first_ to clarify this mystery in the area.

* Finally, we summarize the major reasons that have led to the confusing benchmark situation of network pruning up to now, and give some concrete suggestions about how to avoid it in the future (Sec. 6).

## 2 Prerequisites

### Taxonomy of Pruning and Related Work

A pruning algorithm _has and only has_ five exclusive aspects to be specified: (1) Base model (when to prune): is pruning conducted on a pretrained model or a random model? (2) Sparsity granularity (what to prune): what is the smallest weight group in pruning? (3) Pruning ratio (how many to prune): how many weights are to be pruned? (4) Pruning criterion (by what to prune): what measure is used to select the important weights (_i.e._, those to be kept) _vs._ the unimportant weights (_i.e._, those to be pruned)? (5) Pruning schedule (how to schedule): how is the sparsity scheduled over the pruning process? These five mutually orthogonal questions can categorize most (if not all) pruning papers in the literature. Researchers typically use (1), (2), and (4) to classify different pruning methods. We give the major backgrounds in these three axes.

**Pruning after training _vs._ pruning at initialization**.
Pruning has been mostly conducted on a _pretrained_ model over the past 30 years, which is thus called _pruning after training_ or _post-training pruning_. This fashion had been the unquestioned norm until (at least) two papers in 2019, SNIP [42] and LTH [18]. They argue pruning can be conducted on _randomly initialized_ models and can achieve promising performance (allegedly matching the dense counterpart), too. This new fashion of pruning is called _pruning at initialization_ (PaI). Existing PaI approaches mainly include [20, 41, 42, 65, 79] and the series of lottery ticket hypothesis works [18, 19]. PaI is not very relevant to this work because the benchmarking chaos and the mysteries are mostly discussed in the PaT context, so here we would not discuss the specific PaI techniques at length. Interested readers may refer to [82] for a comprehensive summary.

**Sparsity structure: structured pruning _vs._ unstructured pruning.** If the smallest weight group in pruning is a single weight element, this kind of pruning is called unstructured (or fine-grained) pruning [23, 18, 24], because the resulting zero-weight (_i.e._, pruned-weight) locations are typically irregular (if no extra regularization is enforced). If the smallest weight group in pruning presents some structure, this kind of pruning is called structured (or structural/coarse-grained) pruning [30, 43, 54, 84]. In the area, structured pruning typically narrowly refers to filter pruning or channel pruning if not explained otherwise. Structured pruning benefits acceleration more because the regular sparsity pattern is more hardware-friendly; meanwhile, the regularity imposes more constraints on the network, so given the same sparsity level, structured pruning typically underperforms unstructured pruning. Note, the definition of "structured" sparsity is severely hardware-dependent and thus can vary as the hardware condition changes. _E.g._, the _N:M sparsity_1 pioneered by the NVIDIA Ampere architecture was considered unstructured sparsity, but since NVIDIA has launched new library support to exploit such kind of sparsity for acceleration, now it can be considered structured sparsity (called fine-grained structured sparsity [59, 88]), too.

Footnote 1: [https://developer.nvidia.com/blog/accelerating-inference-with-sparsity-using-ampere-and-tensorrt/](https://developer.nvidia.com/blog/accelerating-inference-with-sparsity-using-ampere-and-tensorrt/)

This paper focuses on _filter pruning_ for now because the two aforementioned mysteries (the effect of finetuning LR and the value of network pruning) are mainly discussed in this context.

**Pruning criterion: importance-based _vs._ regularization-based**. The former prunes weights based on some established importance criteria, such as magnitude (for unstructured pruning) [23, 24] or \(L_{1}\)-norm (for filter pruning) [43], or saliency based on 2nd-order gradients (_e.g._, Hessian or Fisher) [76, 72, 40, 78, 25]. The latter adds a penalty term to the objective function, drives unimportant weights toward zero, then removes those with the smallest magnitude. Two notable points: (1) Even in a regularization-based pruning method, after the regularization process, the weights are still removed by a certain importance (typically magnitude). Namely, regularization-based pruning inherently embeds importance-based pruning. (2) The two paradigms are not exclusive; they can be employed simultaneously. _E.g._, [83, 13, 81] select unimportant weights by magnitude while also employing regularization to penalize weights.
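To make the importance-based recipe concrete, below is a minimal PyTorch sketch of the \(L_{1}\)-norm filter criterion [43] applied to a single convolution layer (selection and weight inheritance only; a real implementation must also rebuild the downstream layers and then finetune):

```python
import torch
import torch.nn as nn

def l1_norm_keep_indices(conv: nn.Conv2d, keep_ratio: float) -> torch.Tensor:
    """Rank the filters of a Conv2d by L1 norm and return indices to keep."""
    # conv.weight: (out_channels, in_channels, kH, kW); one score per filter.
    scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    n_keep = max(1, int(keep_ratio * scores.numel()))
    return torch.topk(scores, n_keep).indices.sort().values

conv = nn.Conv2d(64, 128, kernel_size=3)
keep = l1_norm_keep_indices(conv, keep_ratio=0.5)    # 64 surviving filters
pruned = nn.Conv2d(conv.in_channels, len(keep), kernel_size=3)
pruned.weight.data = conv.weight.data[keep].clone()  # inherit kept filters
pruned.bias.data = conv.bias.data[keep].clone()
```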
_E.g._, [83, 13, 81] select unimportant weights by magnitude while also employing regularization to penalize weights. Finally, for more comprehensive literature on network pruning, we refer interested readers to several surveys: an early one [66], some recent surveys of pruning alone [31, 3, 21], or pruning as a sub-topic under the general umbrella of model compression and acceleration [7, 8, 11, 75]. **Most relevant papers**. The most relevant works to this paper are * [3], which is the first to systematically report the frustrating state of network pruning benchmarks, identify some causes (such as the lack of standard benchmarks and metrics in pruning), and give concrete remedies. However, they do not go deeper and analyze why the standard benchmarks are hard to achieve. Our paper succeeds [3] and will elaborate more on this aspect and point out the central role of trainability within. * [38], which officially reports the performance-boosting effect of a larger finetuning LR and calls upon comparing pruning algorithms under the same retraining configurations. [38] is actually motivated by [67], which proposes _LR rewinding_ as an alternative to the _weight rewinding_ [19] proposed for finding winning tickets on large-scale networks and datasets. [67] virtually takes advantage of the performance-boosting effect of a larger finetuning LR, yet we do not know if they were aware at that point that the performance boosting is not because of the magic rewound LR schedule but simply because of a larger finetuning LR (_i.e._, any appropriately larger LR would do, even a non-rewound one), as later clarified by [38]; so we tentatively consider [38] as the _first_ work to systematically report the performance-boosting effect of a larger finetuning LR. * [51], which brings forward the argument regarding the value of network filter pruning against scratch training. We will re-evaluate the major claim (scratch training can match filter pruning) of this paper under our more strictly controlled comparison setups. ### Terminology Clarification First, we make some critical concepts clear to lay down the common ground for discussion. Although they are pretty simple concepts, misinterpreting them will twist our discussions. * _Pruning pipeline_ **vs.**_pruning method_ **vs.**_sparsifying action_. Some papers refer to _pruning_ as the whole pruning algorithm, _i.e_., the 2nd step in the pruning pipeline; others may mean a pruning paper or the instant sparsifying action. We realize such vague definitions are one reason causing confusion, so we make them exact here. We use _pruning pipeline_ to mean all the three steps in a pruning paper. We can consider _pruning pipeline_ to be interchangeable with a _pruning paper_. Then, we use _pruning method = pruning algorithm_ to mean the 2nd step of the pruning pipeline. Finally, the instant pruning action (_i.e_., zeroing out weights or physically taking away weights from a network) is referred to as _sparsifying action_. To summarize, in terms of concept scope, pruning paper = pruning pipeline = pruning \(>\) pruning method = pruning algorithm \(>\) sparsifying action2. Footnote 2: The notation “pruning paper = pruning pipeline \(>\) pruning method” means, in this paper we consider _pruning paper_ interchangeable with _pruning pipeline_, which includes _pruning method_ as one part. * _Training from scratch_. _Training from scratch_ (= _scratch training_) means training a randomly initialized model to convergence.
Scratch training of a _pruned model_ means: we already know the network architecture of the pruned model; the weights are randomly initialized; we train this network from scratch using _the same_ training recipe as training the dense model. Notably, for filter pruning, when the architecture of the pruned model is known, the model should be implemented as a small-dense model, _not a large-sparse model_ (with structural masks). The reason is, the widely-used parameter initialization schemes (_e.g_., He initialization [26], the default initialization scheme for CONV and Linear layers in PyTorch [61]) depend on the parameter shape. The large-sparse implementation is _not_ equivalent to (and often underperforms) the small-dense implementation. For unstructured pruning, the standard implementation scheme is large-sparse weights (with unstructured masks) [51, 81]. * _Finetuning_. After the sparsifying action, the subsequent training process is called _finetuning_ or _retraining_. We notice the community seems to have different interpretations of these two terms. _E.g_., in [38], finetuning is a _sub-concept_ of retraining, specifically meaning retraining with the last (smallest) learning rate of original training3; while many more papers [51, 57, 78, 84, 81] consider finetuning _the same as_ retraining, meaning the 3rd step of the pruning pipeline. In this paper, we take the more common stance: considering finetuning and retraining interchangeable, and at the end of this paper, we will show the term "finetuning" should be deprecated in favor of "retraining". Figure 1: Overview of this paper. We are motivated by unveiling two mysteries (M1, M2) in filter pruning, which represent the confusing pruning benchmark situation. We first clarify the _fairness principle_ and summarize outstanding _comparison setups_ to lay down the discussion foundation (the notation “S1 \(<\) S2” means S1 is _less strict_ than S2; others can be inferred likewise). Then we start to unveil M1 and M2. M2 will be shown to reduce to M1, actually. To unveil M1, we introduce _network trainability_ as an effective perspective to demystify M1. The unawareness of the role of network trainability in pruning has actually led to several sub-optimal hyper-parameter settings, which exacerbates the chaotic benchmark status. We finally give some concrete suggestions to calibrate the pruning benchmarks. * _Scratch-E, Scratch-B_. These two terms are from [51], denoting two scratch training schemes. "E" is short for epochs, "B" short for (computation) budget. In practice, [51] uses FLOPs as an approximation for the computation budget. In Scratch-E, the point is to maintain the same _total epochs_ when comparing scratch training to pruning. In Scratch-B, the point is to maintain the same _total FLOPs_ when comparing scratch training to pruning. Here is a concrete example of Scratch-B: a dense model has FLOPs \(F_{1}\); pretraining the dense model takes \(K_{1}\) epochs; it is pruned by \(L_{1}\)-norm pruning, giving a pruned model with FLOPs \(F_{2}\); the finetuning takes another \(K_{2}\) epochs; then the scratch training should take \((K_{1}F_{1}+K_{2}F_{2})/F_{2}=K_{1}(F_{1}/F_{2})+K_{2}\) epochs. The ratio \(F_{1}/F_{2}\) is typically called _speedup_ in network pruning. * _Value of network pruning_. This term comes from [51]. The conclusion of [51] is that scratch training can match the performance of the 3-step pruning pipeline if Scratch-B is adopted, for filter pruning.
Therefore, they argue there is no value in filter pruning algorithms that _use predefined layerwise pruning ratios_. For the filter pruning algorithms that do not use predefined layerwise pruning ratios, their role is to decide the favorable network architectures, akin to NAS [91, 15]. As for unstructured pruning, [43] shows scratch training _cannot_ match pruning. Therefore, rigorously, the argument about the value of network pruning means the value of inheriting pretrained weights in filter pruning with predefined layerwise pruning ratios. We will use the short notion, _value of network pruning_, without mentioning its much richer context. Footnote 3: [https://github.com](https://github.com) ## 3 Fairness Principle and Comparison Setups S1. In the early days, there was no well-accepted model zoo, so different papers often trained their own base models, whose accuracies can differ considerably; _e.g._, the base model reported by ThiNet [53] has top-5 accuracy 88.44% while CP [30], a concurrent work with ThiNet, reported 89.9% (for those who are not familiar with these numbers, 1.5% top-5 accuracy is a _very significant_ gap for ImageNet-1K classification). As a remedy, to make the results comparable, many papers report the relative _performance drop_, namely, base model accuracy minus final model accuracy. Such an idea is still broadly practiced at present [51, 81], _esp._ when comparing methods that are implemented under quite different conditions. S2. Later, as the DL community developed, DL platforms such as PyTorch [61] and TensorFlow [1] matured. There is usually a well-accepted model zoo (such as torchvision models5) for others to use. As a result, more and more pruning papers adopt them as the base models, such as [43, 81], which has become the mainstream practice at present. Thus, the S2 comparison setup arises. Footnote 5: [https://pytorch.org/vision/stable/models.html](https://pytorch.org/vision/stable/models.html) At this stage, few researchers had noticed the importance of finetuning. This makes sense since, in the pruning pipeline, only the pruning method part (_i.e._, the 2nd step) is regarded as the central one. The finetuning process is often so _downplayed_ that many papers do not even clearly report the hyper-parameters, as also noted by [3]. S3. Later, in the endless pursuit of higher and higher performance, there is a clear trend that the finetuning epochs become longer and longer (see Tab. 3 for an incomplete summary). This in effect renders the comparison more and more unfair. Besides, the finetuning LR has been noticed to have a significant impact on the final performance, as formally studied by [38] (although [38] is the first one to formally study this phenomenon, a larger finetuning LR had been employed by many papers even before). Because of these, the finetuning process must be taken into account to maintain fairness. The most exact way to rule out the impact of finetuning is to use exactly the same finetuning process - the same LR schedule (including the same epochs; we omit the hyper-parameters, like weight decay, momentum, _etc._, and assume they are maintained the same), _i.e._, the S3.2 in Tab. 2. However, due to various objective or subjective reasons (_e.g._, prior papers may not release their finetuning details, making the follow-ups unable to reproduce the same finetuning), S3.2 is often impractical. Thence comes a weaker setup, S3.1, which only keeps the same epochs of finetuning. It is allowed to use different finetuning LR schedules (_e.g._, different initial LR) - this is where M1, the mystery of the finetuning LR effect, arises.
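To make the S3.1 _vs._ S3.2 distinction concrete, below is a minimal PyTorch sketch of two finetuning setups that keep the same epochs (S3.1 holds) but differ in the LR schedule (S3.2 violated). It is our own illustration; the milestones and LR values are placeholders, not taken from any cited paper.

```python
import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import MultiStepLR

model = torch.nn.Linear(8, 8)  # stand-in for a pruned network

def make_finetune_schedule(params, init_lr, epochs=90):
    # Step-decay schedule over a fixed 90-epoch budget (milestones illustrative).
    opt = SGD(params, lr=init_lr, momentum=0.9, weight_decay=1e-4)
    sched = MultiStepLR(opt, milestones=[epochs // 3, 2 * epochs // 3], gamma=0.1)
    return opt, sched

# Same finetuning epochs, different initial LR:
opt_a, sched_a = make_finetune_schedule(model.parameters(), init_lr=1e-2)
opt_b, sched_b = make_finetune_schedule(model.parameters(), init_lr=1e-3)
```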
Several papers (such as [80, 81, 86, 87]) conduct ablative analysis experiments on small-scale datasets (_e.g._, CIFAR10 [36]) using the setup S3.2, while their main benchmark experiments (_e.g._, with ResNet-50 on ImageNet) use S2. The primary reason is that following the setup S3.2 means re-running the experiments of the other comparison methods in the large-scale benchmarks, which is usually impractical (too costly) or even impossible (_e.g._, the comparison methods do not release usable code). S4. The S3.2 is still not the most strictly fair setup since it does not consider the cost (measured by the number of epochs) of the pruning method. For one-shot pruning (such as \(L_{1}\)-norm pruning [43]), the cost of pruning is zero; while for a regularization-based method (such as GReg [81]), it may take another few epochs for regularized training. Considering these cases, S4.2 comes out: it builds upon S3.2 and demands the same LR schedule for the pruning algorithm - _as far as we know, this is the most strict comparison setup_. \begin{table} \begin{tabular}{l l} \hline \hline **No.** & **Comparison setup** \\ \hline S1 & Compare performance or performance drop on the same dataset and network at the same compression or speedup rate \\ S2 & + Same base model \\ S3.1 & + Same finetuning epochs \\ S3.2 & + Same finetuning LR schedule \\ S4.1 & + Same epochs of “pruning + finetuning” \\ S4.2 & + Same LR schedule of “pruning + finetuning” \\ \hline SX-A & + Same epochs of “pretraining + pruning + finetuning” \\ SX-B & + Same FLOPs of “pretraining + pruning + finetuning” \\ \hline \hline \end{tabular} \end{table} Table 2: Summary of popular comparison setups in pruning papers (each row adds a constraint on top of the previous ones). It is helpful to review them along with the 3-step pruning pipeline: pretraining (output: base model) \(\Rightarrow\) pruning (output: pruned model) \(\Rightarrow\) finetuning (output: final small model). In terms of strictness, S1 \(<\) S2 \(<\) S3.1 \(<\) S3.2 \(<\) S4.1 \(<\) S4.2 (the notation “S1 \(<\) S2” means S1 is _less strict_ than S2; others can be inferred likewise). Most existing pruning papers follow the S2 comparison setup. [Table 3: finetuning #Epochs and LR schedule used by different pruning methods (header: Method, #Epochs, LR schedule; first entry SSL [84]); the remaining rows are not recoverable here.] In practice, again, for various reasons, we may not know the LR schedule of a pruning
Advocates of the older practice may list reasons, _e.g_., pretrained models often exist already (like those pretrained on ImageNet [10] and shared on HuggingFace6), so we do not need to consider the cost of pretraining. However, advocates of SX may argue that not all pretrained models are available; for many tasks, we still need to train the pretrained models first, so the cost of scratch training should be considered. Footnote 6: [https://huggingface.co/](https://huggingface.co/) We have no inclination here regarding which one is more correct. We make two points that we are fairly certain about: (1) In the pruning literature, most papers still follow the _older_ practice when reporting the scratch training results of the pruned model. (2) Given the recent rise of foundation models [4] (_e.g_., Bert [12], GPT-3 [5], CLIP [63], diffusion models [68, 73]), common researchers barely have the resources to train a model from scratch, so pruning would inevitably be conducted on the pretrained model, probably, for those big models. **What comparison setup is mostly used now?** Unfortunately, S2 is the most prevailing comparison setup at present [3]. This setup ignores at least one important factor that, we now know [38], has a significant impact on the final performance: the finetuning LR schedule. In the following sections, we start our empirical investigation of unveiling M1 and M2. We study M2 first and then M1, because the conclusion about M2 actually depends on M1, as we are about to show. ## 4 Reexamining the Value of Pruning The rethinking paper [51] presents many valuable thoughts regarding the value of the 3-step pruning pipeline against scratch training. However, there are a few potential concerns in their experiments that may shake the validity of their conclusion. _First_, they directly cite the results of a few pruning papers and compare the relative performance drop. Because of the stark differences between different DL platforms, such a comparison (_e.g_., comparing methods that use different base models) may not be convincing enough for rigorous analysis. _Second_, when reproducing the \(L_{1}\)-norm pruning [43], they use fixed LR 0.001 and 20 epochs, following [43], for the finetuning stage, which is now known to be severely sub-optimal (see Tab. 4, a larger finetuning LR 0.01 can significantly boost performance). It is thereby of interest whether the no-value-of-pruning argument would change if the comparison is conducted under a strictly controlled condition and a better finetuning LR is employed. This section attempts to answer this question. Three comparison setups (SX-A, SX-B, and S4.2) are considered since they are the _most strict_ setups up to date. **Pruning method**. We choose \(L_{1}\)_-norm pruning_[43] because it is the most representative pruning method and easy to control at a strict comparison setup. Specifically, \(L_{1}\)-norm pruning prunes the filters of a pretrained model with the smallest \(L_{1}\)-norms to a predefined pruning ratio. After pruning, the pruned model is finetuned for a few epochs to regain performance. Other pruning methods, such as regularization-based methods (_e.g_., [81, 50, 84]), introduce many factors that are hard to control for rigorous analysis, so we do not adopt them here. We will discuss how the findings can generalize to those cases later. **Networks and datasets**. The network used for analysis is ResNet34 [27], following [43]. For standard benchmarks (_e.g_., Tab. 
**Networks and datasets**. The network used for analysis is ResNet34 [27], following [43]. For standard benchmarks (_e.g_., Tab. 1), we use ResNet50 [27] because it is one of the most representative benchmark networks in filter pruning. The datasets are ImageNet100 and the full ImageNet [10]. ImageNet100 is a randomly drawn 100-class subset of ImageNet. We use it for _faster_ analysis given our limited resource budget. The full ImageNet is used for benchmarks. **Implementation details of pruning**. For analysis, pruning is conducted on the 1st CONV layer (the 2nd CONVs are not pruned, following \(L_{1}\)-norm pruning [43]) in all residual blocks of ResNet34. The first CONV and all FC layers are spared, also following the common practice [81, 89, 21]. A uniform layerwise pruning ratio is employed (which usually under-performs a tuned non-uniform layerwise pruning ratio scheme; but since this paper does not target the best performance but explanation, we adopt it for easy analysis). We conduct pruning over a wide sparsity spectrum (10% to 95%) in the hopes of comprehensive coverage. **One table to show them all**. The results are presented in Tab. 4. Before we present the analyses, we introduce a notion, _pruning epoch_, which is defined as the epoch at which the sparsifying action is physically enforced. _E.g_., if a model is trained for 30 epochs and then the sparsifying action is enforced, the pruning epoch is 30. We observe: **(1)** For the S4.2 setup (the S4.2 rows in Tab. 4), we are not allowed to change the LR schedule. The only thing we can change is the pruning epoch. As seen, the best pruning epoch varies _w.r.t._ the sparsity level - at a small pruning ratio, different pruning epochs give a similar performance; while as the pruning ratio rises, the performance becomes more sensitive to the pruning epoch, _e.g._, for pruning ratio 95%, P90F30, 1e-4 severely underperforms P30F90, 1e-2. Notably, a clear trend is, when the pruning ratio is large (70% to 95%), it is better to have a smaller pruning epoch. Under this setup, pruning surpasses scratch training by a statistically significant gap only at pruning ratios of 30%-70%. Therefore, we can only say pruning has a marginal advantage over scratch training here. **(2)** Then we look at the setup SX-A (the SX-A rows). Under this setup, we are allowed to adjust the finetuning LR as long as the total epochs are kept the same. We increase the initial finetuning LR. As seen, it significantly improves the accuracies, _e.g._, (P30F90, 1e-1) improves the accuracy by nearly 2% at pruning ratio 95%, against (P30F90, 1e-2). This is the aforementioned performance-boosting effect [38]. We also apply the larger-LR trick to another two settings, P60F60 and P90F30. In all of them, we see a larger finetuning LR improves performance by an obvious margin. Now, the gap between pruning and scratch training becomes much more significant. Pruning is more surely valuable under this setup. **(3)** Next, we use the comparison setup SX-B (the SX-B rows), which maintains the total FLOPs. We apply this scheme to the best pruning setup of SX-A, P30F90, 1e-1, in the hopes of better performance. Since the dense model is trained for 30 epochs, to compensate for the FLOPs, the pruning epoch should be squeezed by the speedup ratio \(k\). _E.g._, for pruning ratio 10%, the speedup ratio is 1.11, so the pruning epoch should be adjusted to \(30/k\approx 27\).
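A minimal sketch of the epoch-budget arithmetic used above (the function names are ours):

```python
def scratch_b_epochs(k1: float, f1: float, k2: float, f2: float) -> float:
    """Scratch-B budget: epochs for scratch training to match the FLOPs of
    'pretraining (k1 epochs at f1 FLOPs) + finetuning (k2 epochs at f2 FLOPs)'."""
    return (k1 * f1 + k2 * f2) / f2  # = k1 * (f1 / f2) + k2

def sxb_pruning_epoch(pretrain_epochs: float, speedup: float) -> float:
    """SX-B: squeeze the pruning epoch by the speedup ratio k = F1 / F2."""
    return pretrain_epochs / speedup

print(sxb_pruning_epoch(30, 1.11))  # ~27.0, the 10%-pruning-ratio example
```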
As seen, the squeezing of the pruning epoch does close the gap between pruning and scratch training: At pruning ratios of 10% to 70%, pruning is still better; while for 90% and 95%, pruning only matches or underperforms scratch training - this is a concrete example that we do _not_ have a once-for-all answer to questions like "_is pruning better than scratch training?_" We also try a smaller finetuning LR in this setup, as shown in the row (P30/\(k\)F90, 1e-2). The LR effect also translates to this case - a smaller finetuning LR degrades the performance. **Short summary**. As seen, the argument about the value of network pruning severely hinges on which comparison setup is employed and on the pruning ratio. For the setup SX-A, where pruning outperforms scratch training obviously, the advantage comes from a better finetuning LR. Yet, we are not sure if such better LR schedules also exist for the scratch training; if so, scratch training can be further boosted, too - as such, this kind of "competition" can be _endless_. There are two kinds of attitudes toward this situation: _(1)_ Do not consider the performance improvement from a better finetuning LR as a fair/valid performance advantage as it is _not_ from the pruning algorithm. _(2)_ Still consider it as a valid performance advantage but accept the "endless competition" challenge we just mentioned. The community now is mostly using _(2)_. We suggest using _(1)_, following our fairness definition clarified in Sec. 3. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Pruning ratio & 10\% & 30\% & 50\% & 70\% & 90\% & 95\% \\ FLOPs (G, speedup: \(k\times\)) & 3.30 (1.11\(\times\)) & 2.59 (1.41\(\times\)) & 1.90 (1.93\(\times\)) & 1.19 (3.09\(\times\)) & 0.48 (7.68\(\times\)) & 0.30 (12.06\(\times\)) \\ \hline Scratch training & 83.68\({}_{\pm 0.38}\) & 83.31\({}_{\pm 0.13}\) & 82.90\({}_{\pm 0.16}\) & 82.45\({}_{\pm 0.13}\) & 79.37\({}_{\pm 0.76}\) & 76.67\({}_{\pm 0.90}\) \\ \hline & 83.95\({}_{\pm 0.17}\) & **84.01\({}_{\pm 0.23}\)** & **83.87\({}_{\pm 0.44}\)** & **82.93\({}_{\pm 0.10}\)** & 79.86\({}_{\pm 0.11}\) & 77.41\({}_{\pm 0.11}\) \\ & 83.88\({}_{\pm 0.07}\) & 84.00\({}_{\pm 0.22}\) & 83.29\({}_{\pm 0.14}\) & 82.61\({}_{\pm 0.07}\) & **80.41\({}_{\pm 0.32}\)** & **77.64\({}_{\pm 0.39}\)** \\ & 83.56\({}_{\pm 0.03}\) & 83.95\({}_{\pm 0.14}\) & 83.28\({}_{\pm 0.08}\) & 82.47\({}_{\pm 0.12}\) & 79.88\({}_{\pm 0.10}\) & 76.17\({}_{\pm 0.21}\) \\ & 84.21\({}_{\pm 0.07}\) & 83.87\({}_{\pm 0.09}\) & 82.90\({}_{\pm 0.10}\) & 81.24\({}_{\pm 0.17}\) & 77.29\({}_{\pm 0.05}\) & 70.53\({}_{\pm 0.37}\) \\ & 84.24\({}_{\pm 0.04}\) & 83.47\({}_{\pm 0.12}\) & 82.45\({}_{\pm 0.14}\) & 80.81\({}_{\pm 0.09}\) & 73.94\({}_{\pm 0.24}\) & 64.98\({}_{\pm 0.31}\) \\ & 84.09\({}_{\pm 0.07}\) & 82.47\({}_{\pm 0.02}\) & 79.70\({}_{\pm 0.00}\) & 74.87\({}_{\pm 0.19}\) & 49.23\({}_{\pm 0.21}\) & 29.89\({}_{\pm 0.26}\) \\ \hline & 85.27\({}_{\pm 0.13}\) & **85.37\({}_{\pm 0.19}\)** & **85.48\({}_{\pm 0.18}\)** & **83.83\({}_{\pm 0.17}\)** & **81.56\({}_{\pm 0.29}\)** & **79.57\({}_{\pm 0.15}\)** \\ & 83.72\({}_{\pm 0.14}\) & 83.88\({}_{\pm 0.07}\) & 83.67\({}_{\pm 0.11}\) & 82.96\({}_{\pm 0.23}\) & 80.78\({}_{\pm 0.23}\) & 77.81\({}_{\pm 0.25}\) \\ & 83.91\({}_{\pm 0.08}\) & 84.02\({}_{\pm 0.20}\) & 83.41\({}_{\pm 0.15}\) & 82.91\({}_{\pm 0.12}\) & 79.43\({}_{\pm 0.07}\) & 75.20\({}_{\pm 0.23}\) \\ \hline & 83.41\({}_{\pm 0.09}\) & 84.02\({}_{\pm 0.20}\) & 84.85\({}_{\pm 0.31}\) & **83.64\({}_{\pm 0.09}\)** & **79.65\({}_{\pm 0.31}\)** & **75.79\({}_{\pm 0.28}\)** \\ & 83.40\({}_{\pm 0.04}\) & 82.69\({}_{\pm 0.27}\) & 82.16\({}_{\pm 0.03}\) & 79.97\({}_{\pm 0.16}\) & 74.76\({}_{\pm 0.24}\) & 70.61\({}_{\pm 0.52}\) \\ \hline \hline \end{tabular} * Row groups (top to bottom): comparison setup S4.2 (same overall LR schedule), comparison setup SX-A (same total epochs; finetuning LR increased), comparison setup SX-B (same total FLOPs). \end{table} Table 4: Top-1 accuracy (%) comparison between \(L_{1}\)-norm pruning [43] and training from scratch with **ResNet34** on **ImageNet100**. Each result is averaged over at least three random runs. The learning rate (LR) schedule of scratch training is: initial LR 0.1, decayed at epoch 30/60/90/105 by multiplier 0.1, total: 120 epochs (top-1 accuracy of dense ResNet34: 84.56%). _"P30F90, 1e-1" means the model is pruned at epoch 30 and finetuned for another 90 epochs with initial finetuning LR 1e-1_ (please refer to our supplementary material for the detailed LR schedule); the others can be inferred likewise. The **best** result within each comparison setup is highlighted in **bold**. Despite many uncertainties, we are certain about one thing from Tab. 4: whichever setup is favored, the finetuning LR plays a critical role in performance. Even for the comparison setup S4.2, where the finetuning LR does not change, by changing the pruning epoch we implicitly change the finetuning LR, and this has been shown to be very pertinent to the final performance as well. In this sense, the two mysteries of pruning actually boil down to one (M1): _Why does the finetuning LR have such a great impact on the performance?_ This is the next question we would like to answer. LR, (arguably) the most influential hyper-parameter in training neural networks, has a significant impact on performance - this is definitely not surprising; what is really surprising is why prior pruning works (_e.g_., the original \(L_{1}\)-norm pruning [43] adopts LR 0.001 in finetuning for their ImageNet experiments) did not realize that such a simple "trick" is so important to performance. This question is also worth our thinking. ## 5 Trainability in Network Pruning ### Background of Trainability Trainability, by its name, means the ability (easiness) of training a neural network, _i.e_., the optimization speed (note, speed is not equal to quality, so a network with good trainability may still turn out to have bad generalization ability eventually). Notably, the role of a pruning method is essentially to provide the initial weights for the later finetuning process; that is, pruning is a kind of _initialization_. In stark contrast to the broad awareness that initialization is very critical to neural network training [26, 35, 55, 74, 22], the initialization role of pruning has received negligible research attention, however. Trainability is also mostly studied for random initialization [85, 69]. A few recent works marry it with network pruning in some similar forms like signal propagation [41] and gradient flow [79] (a good signal propagation or a strong gradient flow usually suggests a good trainability). These works are inspiring, while they mostly stay in the domain of pruning at initialization (PaI). Few attempts before, to our best knowledge, tried to utilize the notion of trainability to examine pruning after training (PaT), at least for the two mysteries we study here. This paper is meant to bridge this gap. The major difference between PaI and PaT is whether a pretrained model is used as the base.
Such a context is essential to this paper since the above two mysteries are both brought forward in the context of PaT. **Trainability accuracy**. Literally, a bad trainability implies the training is hard and the training performance will rise slowly. Per this idea, there is a straightforward metric to measure trainability - we introduce _trainability accuracy_, the average accuracy of the first \(N\) epochs, \[T=\frac{1}{N}\sum_{i=1}^{N}\mathrm{Acc}_{i}. \tag{1}\] Since the optimization speed depends on the LR used, when we calculate trainability accuracy, we must ensure the compared models are under the same LR schedule. In this paper, we choose \(N\) as the number of epochs of the 1st LR stage, which characterizes the optimization speed in the early phase. Next, we utilize trainability to explain the mysterious effect of the finetuning LR. ### Examining the Effect of Finetuning LR **Two facts as foundation**. We first lay down two facts as the common ground for the discussion of this section. We will show the mystery about the finetuning LR effect boils down to these two simple facts. _First, pruning damages trainability_. This is an intuitively straightforward fact since pruning removes connections or neurons, which virtually makes the network harder to train. This fact holds not only for pruning a random network [41], but also for pruning a pretrained model here. Moreover, notably, more aggressive pruning leads to more damaged trainability. _Second, a model of worse trainability will need more effective updates to reach convergence_. More effective updates mean two cases: if the LR is not changed, more epochs are needed; if the number of epochs does not change, a larger LR is needed. This is also easy to understand since trainability measures the easiness of optimization; a bad trainability literally implies harder optimization, hence the need for more effective updates. Such an observation has been made by some sparse training papers, _e.g._, RigL [16] notes that "_sparse training methods benefit significantly from increased training steps_". \begin{table} \begin{tabular}{l c c} \hline \hline Finetuning setup & Top-1 acc. (\%) & TA (\%) \\ \hline P30F90, 1e-1 & 79.57\({}_{\pm 0.15}\) & 88.00 \\ P30F90, 1e-2 & 77.64\({}_{\pm 0.39}\) & 77.45 \\ P30F90, 1e-2 (+30 epochs) & 79.12\({}_{\pm 0.19}\) & / \\ P30F90, 1e-2 (+60 epochs) & **79.59\({}_{\pm 0.25}\)** & / \\ \hline P60F60, 1e-2 & **77.81\({}_{\pm 0.25}\)** & 87.39 \\ P60F60, 1e-3 & 70.53\({}_{\pm 0.37}\) & 68.19 \\ P60F60, 1e-3 (+60 epochs) & 75.71\({}_{\pm 0.09}\) & / \\ P60F60, 1e-3 (+120 epochs) & 77.17\({}_{\pm 0.13}\) & / \\ P60F60, 1e-3 (+180 epochs) & 77.33\({}_{\pm 0.09}\) & / \\ \hline P90F30, 1e-2 & 75.20\({}_{\pm 0.23}\) & 84.83 \\ P90F30, 1e-4 & 29.89\({}_{\pm 0.26}\) & 37.93 \\ P90F30, 1e-4 (+60 epochs) & 60.69\({}_{\pm 0.17}\) & / \\ P90F30, 1e-4 (+270 epochs) & 70.78\({}_{\pm 0.16}\) & / \\ P90F30, 1e-4 (+1485 epochs) & **78.18** & / \\ \hline \hline \end{tabular} \end{table} Table 5: Top-1 accuracy (%) comparison of different setups of \(L_{1}\)-norm pruning [43] with **ResNet34** on **ImageNet100**. Pruning ratio: 95%. TA: trainability accuracy (the metric used to measure trainability; see Eq. (1)). This table shows that the performance gap between a smaller LR and a larger LR is not fundamental: it can be closed simply by training for more epochs. The root cause that a smaller LR _appears_ to under-perform a larger LR is simply that the model trained by the smaller LR does _not_ fully converge.
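A minimal sketch of the trainability-accuracy metric of Eq. (1); `epoch_accs` is an assumed per-epoch accuracy log recorded under the same LR schedule for all compared models:

```python
def trainability_accuracy(epoch_accs, n_first_stage):
    """Mean top-1 accuracy over the first N epochs (N = length of the 1st LR stage)."""
    accs = epoch_accs[:n_first_stage]
    return sum(accs) / len(accs)

# e.g., trainability_accuracy(acc_log, n_first_stage=30) for a 30-epoch 1st LR stage
```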
When we observe that a larger LR improves the final test accuracy of the pruned model (_e.g._, Row P30F90, 1e-1 _vs_. Row P30F90, 1e-2 in Tab. 4), it is worthwhile to differentiate two subtle yet distinct possibilities: * A larger LR helps the pruned model reach a solution that the smaller LR _cannot_ reach, _i.e._, a larger LR helps the model land in a better local minimum basin. * The smaller LR can also help the model reach the solution the larger LR does; the larger LR just helps the model get there faster. The former implies the performance-boosting effect of a larger LR is _fundamental_; while the latter implies there is no fundamental gap between the two solutions; it is only an issue of optimization speed. Let's analyze a concrete example, P60F60 in Tab. 5. For pruning ratio 95% (we use this example because the performance-boosting effect is most pronounced at larger sparsity), using 1e-2 _vs_. 1e-3 improves the test accuracy from 70.53 to 77.81, a very significant jump. This improvement also translates to the rows of P30F90 and P90F30. However, in Tab. 5, we note the performance improvement coincides with a trainability accuracy improvement. We wonder whether the performance improvement is actually due to a better trainability. Fig. 2(a) plots the test accuracy during the finetuning of P60F60, 1e-3. We notice that before the 1st LR decay at epoch 30, the accuracy keeps rising even at epoch 30. This triggers a question: usually, we decay the LR when the accuracy _saturates_; now, when the accuracy is still steadily rising, _is the LR decayed too early?_ This question matters because if the LR decays too early, the model is _forced_ to stabilize due to the small step size and insufficient updates, not because it gets close to the local minimum, _i.e._, the model may _not converge at all_. To verify this, we extend the epochs of the LR 0.001 phase by 60/120/180 epochs. See the results in Tab. 5 (note the rows P60F60, 1e-3 (+60/120/180 epochs)) and Fig. 2(b). Now, the model finetuned by LR 1e-3 can reach 77.33, very close to the 77.81 reached by LR 1e-2. The test accuracy plot in Fig. 2(b) also confirms that the seeming underperformance of LR 1e-3 is due to insufficient epochs - namely, **the advantage of the larger LR 0.01 is not some magic fundamental advantage, but a simple consequence of faster optimization.** We also verify this on the other cases (P30F90 and P90F30) where the smaller LR "underperforms" the larger LR. The results are also presented in Tab. 5. In _all_ of these cases, given abundant epochs, the gap between the larger LR and the smaller LR can be closed. Especially for P90F30, the smaller LR 1e-4 can even achieve a much better result than LR 1e-2 (78.18 _vs_. 75.20). This strongly demonstrates that the smaller LR can also achieve what the larger LR can. To summarize, our results suggest **a larger LR does not really "improve" the performance. What really happens is, a larger LR accelerates the optimization process, making the higher performance _observed earlier_**. In practice, when researchers tune different LRs, they usually keep the total epochs fixed (for the sake of fairness). Given the same total epochs, the pruned model using the smaller finetuning LR does not fully converge, making the performance _appear_ "worse". **Further remarks.** It is worthwhile to note that such an experimental trap is very _covert_ if we are unaware of the damaged-trainability issue in pruning.
We may never realize that the epochs should be increased properly if a smaller finetuning LR is used. What's even trickier, we do not know how many more epochs count as _proper_ - Tab. 5 is a living example: for some cases (_e.g._, P30F90), 60 more epochs are enough, while for others (_e.g._, P60F60, P90F30), 180 more epochs or even more are not enough to bridge the performance gap. Clearly, there is still much work to be done here toward a more rigorous understanding of the influence of damaged trainability on pruning. Figure 2: Test accuracy _vs_. epoch during finetuning of the setting P60F60, 1e-3 at pruning ratio 95% in Tab. 5. Red vertical lines mark the epoch of decaying LR by 0.1. Particularly note that before the 1st LR decay, the accuracy keeps rising in (a), implying the 1st LR decay may be too early – this is confirmed in (b), where the red cross marker (\(\times\)) indicates the time point of the 1st LR decay in (a). See more similar plots in our supplementary material. **Retrospective remarks and the lessons.** It is worthwhile to ponder why [43] employed a _seriously sub-optimal_ finetuning LR scheme. This, we conceive, may originate from a long-standing _misunderstanding_ in the area of network pruning - many have believed that because pruning is conducted on a _converged_ model, the retraining of the pruned model need not be long and the LR should be _small_ to avoid destroying the knowledge the model has acquired. _E.g._, in [67], the authors mention in their abstract "_The standard retraining technique, fine-tuning, trains the unpruned weights from their final trained values using a small fixed learning rate_", implying that such a misconception spreads so widely that it is taken for "standard".7 Footnote 7: Actually, the 3rd step of the pruning pipeline is broadly referred to as _finetuning_ – this term per se already implies the inclination of using a small LR. To rule out such conceptual bias, a more accurate way to phrase the 3rd step in the pruning pipeline may be _retraining_ the pruned model. However, the results in Tab. 4 suggest such a thought only holds for the cases of low pruning ratios. For a moderate or large pruning ratio, this thought _hardly_ holds. What was neglected is that the sparsifying action damages network trainability, slowing down the optimization. As compensation, a larger LR is supposed to be used to accelerate the optimization, not a smaller LR; similarly, more epochs are needed to compensate for the slow optimization. However, the original \(L_{1}\)-norm pruning [43] chose LR 0.001 and only 20 epochs for their ImageNet experiments, exactly the opposite of what is called for. This, we conceive, is the reason that \(L_{1}\)-norm pruning has been underrated for a long time. Its real performance is actually pretty strong even compared with recent top-performing approaches (see Tab. 1). Similarly, based on what we just learned about the truth of M1, if we examine the other filter pruning methods, _e.g._, GAL [48] (see Tab. 3), their reported results are probably also underrated, because GAL uses only 30 epochs for finetuning and the model may well not expose its full potential, as a result of the immature convergence. This implies a pretty disturbing concern - for many filter pruning papers, we have to calibrate their results for fairness. Directly citing the numbers may (well) not show the real performance comparison.
**How to make comparisons in a pruning paper?** As seen, different finetuning hyper-parameters can lead to very different conclusions, which do not reflect the true merit of the pruning-algorithm part. The ideal solution to this problem is to re-implement the existing pruning algorithms _under the same configuration_. This counts on community efforts (_e.g._, ShrinkBench8 [3] and Torch-Pruning9 [17]) to standardize the benchmarks. Meanwhile, even with these efforts, we have to recognize that re-implementing all or most of the past pruning algorithms under the same condition is unrealistic, and sometimes unnecessary10. Footnote 8: [https://github.com/JJGO/shrinkbench](https://github.com/JJGO/shrinkbench) Footnote 9: [https://github.com/VainF/Torch-Pruning](https://github.com/VainF/Torch-Pruning) Footnote 10: _Rigidly_ demanding _exactly the same_ condition in defining fairness may be of little meaning for scientific development. Think about the surge of deep learning – compared to the past SVM era, deep learning nowadays definitely takes (way) more resources than SVM, _i.e._, it is unfair, while few have questioned the unprecedented value of deep learning as large language models like ChatGPT [60] surprise us on a daily basis. We suggest a more practical solution here, highlighting two rules of thumb: _(1)_ cross-validating different hyper-parameters; _(2)_ better performance still weighs more when fairness is hard to figure out. For a concrete example, given two pruning algorithms A (with finetuning setup \(FT_{A}\)) and B (with finetuning setup \(FT_{B}\)), we can conduct two more experiments: A + \(FT_{B}\) and B + \(FT_{A}\); then compare A + \(FT_{A}\) _vs._ B + \(FT_{A}\) and A + \(FT_{B}\) _vs._ B + \(FT_{B}\). * If one method consistently wins in both cases, we are more confident that the winner is really better due to its effective pruning design. * If there is inconsistency between the two comparisons, _e.g._, A + \(FT_{A}>\) B + \(FT_{A}\), while A + \(FT_{B}<\) B + \(FT_{B}\), namely, different pruning methods favor different finetuning recipes, this means synergistic interaction exists between the pruning and finetuning stages and we cannot single out the finetuning when evaluating the pruning algorithms. In this case, we suggest that **the case delivering higher performance weighs more** as it advances the state-of-the-art. ## 6 Conclusion and Discussion This paper attempts to sort out the confounding benchmark situation in filter pruning. Two particular mysteries are explored: the performance-boosting effect of a larger finetuning LR, and the no-value-of-pruning argument. We present a clear fairness principle and sort out four groups of popular comparison setups used by many pruning papers. Under a strictly controlled condition, we examine the two mysteries and find they both boil down to the issue of damaged network trainability. This issue was not well recognized by prior works, leading to (severely) sub-optimal hyper-parameters, which ultimately exacerbated the confounding benchmark situation in filter pruning. We hope this paper helps the community toward a clearer understanding of pruning and more reliable benchmarks for it. **Takeaways and suggestions from this paper**: * _Why is the state of neural network pruning so confusing?_ Non-standard comparison setups (and their fundamental reason: an unclear fairness principle) and the unawareness of the role of trainability are the two major reasons.
The latter further leads to sub-optimal hyper-parameter settings, inherited by many follow-up papers, exacerbating the messy benchmark situation. * A higher comparison setup means stricter experiment control, but also more resources and effort; so there is inevitably a _trade-off_ between how fair we want to be and how much we can invest. * Reporting all the finetuning details (_esp._ the LR schedule) is necessary and should be standardized. * Cross-validating finetuning setups is a practical suggestion to decide the true winner between two pruning algorithms with different finetuning recipes. * Whether filter pruning can beat scratch training depends on the specific comparison setup and pruning ratio under consideration. Given the recent rise of large foundation models, pruning will likely still follow the conventional 3-step pipeline. * About the performance-boosting effect of a larger finetuning LR: the performance is not "improved"; what really happens is that the good performance is observed earlier because the larger LR accelerates the optimization. The fundamental factor at play under the hood is the network trainability damaged by the sparsifying (or zeroing-out) action in pruning. * The damaged network trainability was not well recognized by prior pruning works, resulting in severely sub-optimal hyper-parameters, rendering the potential of a baseline method, \(L_{1}\)-norm pruning [43], _underestimated_ for a long time. This fact may spur us to _reevaluate_ the actual efficacy of (so) many sophisticated pruning methods against the simple \(L_{1}\)-norm pruning. * The term "finetuning" misleadingly implies a small LR; we should firmly use _retraining_ instead.
2310.16530
Toward Practical Privacy-Preserving Convolutional Neural Networks Exploiting Fully Homomorphic Encryption
Incorporating fully homomorphic encryption (FHE) into the inference process of a convolutional neural network (CNN) draws enormous attention as a viable approach for achieving private inference (PI). FHE allows delegating the entire computation process to the server while ensuring the confidentiality of sensitive client-side data. However, practical FHE implementation of a CNN faces significant hurdles, primarily due to FHE's substantial computational and memory overhead. To address these challenges, we propose a set of optimizations, which includes GPU/ASIC acceleration, an efficient activation function, and an optimized packing scheme. We evaluate our method using the ResNet models on the CIFAR-10 and ImageNet datasets, achieving several orders of magnitude improvement compared to prior work and reducing the latency of the encrypted CNN inference to 1.4 seconds on an NVIDIA A100 GPU. We also show that the latency drops to a mere 0.03 seconds with a custom hardware design.
Jaiyoung Park, Donghwan Kim, Jongmin Kim, Sangpyo Kim, Wonkyung Jung, Jung Hee Cheon, Jung Ho Ahn
2023-10-25T10:24:35Z
http://arxiv.org/abs/2310.16530v1
# Toward Practical Privacy-Preserving Convolutional Neural Networks ###### Abstract Incorporating fully homomorphic encryption (FHE) into the inference process of a convolutional neural network (CNN) draws enormous attention as a viable approach for achieving private inference (PI). FHE allows delegating the entire computation process to the server while ensuring the confidentiality of sensitive client-side data. However, practical FHE implementation of a CNN faces significant hurdles, primarily due to FHE's substantial computational and memory overhead. To address these challenges, we propose a set of optimizations, which includes GPU/ASIC acceleration, an efficient activation function, and an optimized packing scheme. We evaluate our method using the ResNet models on the CIFAR-10 and ImageNet datasets, achieving several orders of magnitude improvement compared to prior work and reducing the latency of the encrypted CNN inference to 1.4 seconds on an NVIDIA A100 GPU. We also show that the latency drops to a mere 0.03 seconds with a custom hardware design. ## 1 Introduction Privacy regulations such as GDPR [22] have propelled data confidentiality to the forefront of concerns for cloud companies offering machine learning (ML) as a service. Fully homomorphic encryption (FHE) [7], which can evaluate arbitrary functions on encrypted data called ciphertext, has garnered attention as a promising solution with robust security assurance. However, FHE-based computation exhibits distinct characteristics from their unencrypted counterpart, and naively applying FHE to ML inference results in significant inefficiencies. These disparities pose challenges in developing an end-to-end ML framework with FHE, which has been perceived to be costlier than alternative security solutions [9, 19]. We unlock the potential of FHE-based solutions in the context of _private inference_ (PI). Our focus lies on the framework for PI of convolutional neural networks (CNNs). We employ the RNS-CKKS FHE scheme [5], which supports handling real and complex numbers. The PI framework involves two parties: a client requesting CNN inference on encrypted data and a server conducting the evaluation using FHE operations. The robust security offered by RNS-CKKS guarantees the non-disclosure of sensitive information, encompassing both the client's input data and the server's CNN model while securely delivering the inference results back to the client. Our contributions span various layers of computer systems, and each contribution can be utilized separately to accelerate FHE-based CNN inference in other systems. With all techniques combined, we achieve orders of magnitude higher performance for FHE-based CNN inference, reducing the encrypted inference latency of ResNet20 to a mere 1.4 seconds on a GPU and 0.03 seconds on a custom hardware design. The following list summarizes the key contributions: * We are unlocking new horizons in FHE-based PI by harnessing the latest GPU advancements and incorporating techniques derived from accelerator-based research, thereby introducing a new achievable milestone in this field. * We combine AESPA [18], a novel activation function tailored to FHE-based PI, which converts ReLU into a simple quadratic function during inference through training-time specialization. * We leverage an efficient CNN library with novel packing methods named HyPHEN [11], which reduces the computation and memory requirements of FHE-based PI of CNNs. 
## 2 A GPU Library Supporting RNS-CKKS Due to the high degree of parallelism in RNS-CKKS, GPUs have great potential for performance enhancement, as their massive computing resources can be utilized through parallelized computation. In RNS-CKKS, the unit of computation is a polynomial in \(\mathbb{Z}_{Q}[X]/(X^{N}+1)\), which is an \(L\times N\) integer matrix, where \(N\) is the degree of the polynomial and \(L\) is the maximum multiplicative level of a ciphertext. A polynomial is a huge matrix; typical values for \(L\) and \(N\) are respectively around 1-60 and \(2^{15}\)-\(2^{16}\). By using GPUs, the various computations on polynomials required for RNS-CKKS can mostly be performed in parallel. For example, two polynomials can be added by \(L\times N\) parallel element-wise additions. However, the large size of the computational granularity also results in a memory bottleneck because faster memory units (e.g., the L2 cache) in GPUs do not have enough capacity to accommodate it. To overcome the bottleneck, we applied memory-centric optimizations to major computation kernels, such as NTT and base conversion, in prior work [10, 14]. We have fused multiple GPU kernels to perform multiple jobs at once for each memory load and identified eligible RNS-CKKS parameter sets for GPUs (P2 and P3 in Table 1 vs. P1). Combining the latest GPU advancements and recent algorithmic optimizations, our GPU library outperforms state-of-the-art RNS-CKKS GPU studies, 100\(\times\)[10] and TensorFHE [6], in major HE operations (see Table 1). Leveraging the A100 leads to improved performance in homomorphic multiplication (HMult) compared to the previous use of the V100 in 100\(\times\). Algorithmic optimizations in bootstrapping (Boot), including techniques in [3], further widen the performance gap between our solution and the 100\(\times\) bootstrapping. ## 3 Replacing Activation Functions with AESPA FHE-based PI implementations are often bottlenecked by activation functions such as ReLU. For example, [16, 17] utilize a high-degree polynomial approximation of ReLU to implement RNS-CKKS CNNs, but their implementation is severely bottlenecked by the high cost of evaluating a high-degree polynomial and the frequent bootstrapping it incurs; e.g., bootstrapping and ReLU account for 84% of the total execution time in [16]. To remedy this problem, we exploit AESPA, a novel low-degree polynomial activation function that enables replacing ReLU with a quadratic function. AESPA utilizes the orthogonal polynomial expansion of ReLU as an activation function and performs basis-wise normalization during training. For brevity, we focus on the Hermite expansion of ReLU. Let \(\hat{f}_{i}\), \(h_{i}\), and \(d\) each denote the \(i\)-th coefficient of the Hermite expansion, the \(i\)-th Hermite polynomial, and the degree of the Hermite expansion; our activation function is defined as follows: \[ReLU(x)\approx\gamma\sum_{i=0}^{d}\hat{f}_{i}\frac{h_{i}(x)-\mu}{\sqrt{\sigma^{2}+\epsilon}}+\beta=\gamma\sum_{i=0}^{d}\hat{f}_{i}\overline{h}_{i}(x)+\beta \tag{1}\] AESPA replaces both the ReLU and batch normalization layers in a CNN. \(\gamma\) and \(\beta\) are learnable parameters and \(\overline{h}_{i}(x)\) is the batch-normalized value of \(h_{i}(x)\). Please refer to [18] for a more detailed description of AESPA.
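To make Eq. (1) concrete, below is a minimal PyTorch sketch of a basis-wise-normalized Hermite activation. It is our own illustration under a degree-2 assumption, not the reference AESPA implementation [18].

```python
import math
import torch
import torch.nn as nn

class HermiteAct(nn.Module):
    """Sketch of an AESPA-style activation with probabilists' Hermite bases up to
    degree 2; the constant basis h0 is absorbed into the learnable shift beta."""
    def __init__(self, channels: int):
        super().__init__()
        # Hermite coefficients of ReLU under the normalized basis h_i = He_i / sqrt(i!):
        # f0 = 1/sqrt(2*pi), f1 = 1/2, f2 = 1/(2*sqrt(pi))
        self.register_buffer("f", torch.tensor(
            [1 / math.sqrt(2 * math.pi), 0.5, 1 / (2 * math.sqrt(math.pi))]))
        self.bn1 = nn.BatchNorm2d(channels, affine=False)  # normalizes h1
        self.bn2 = nn.BatchNorm2d(channels, affine=False)  # normalizes h2
        self.gamma = nn.Parameter(torch.ones(1, channels, 1, 1))
        self.beta = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x):
        h1 = x                               # He_1(x) / sqrt(1!)
        h2 = (x * x - 1.0) / math.sqrt(2.0)  # He_2(x) / sqrt(2!)
        out = self.f[1] * self.bn1(h1) + self.f[2] * self.bn2(h2)
        # With BN statistics frozen at inference, this folds into a plain
        # quadratic polynomial in x, which is cheap to evaluate under RNS-CKKS.
        return self.gamma * out + self.beta
```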
## 4 HyPHEN: An Efficient Packing Method for FHE-based CNN ML frameworks such as PyTorch offer multiple memory formats dictating the data order of tensors (e.g., the channel-last memory format). Each format requires a distinct kernel to process the same operation. This concept applies similarly in the context of FHE, where the memory format is analogous to _packing_, although packing has more significant performance implications. Numerous studies [2, 4, 8] have proposed different packing methods for FHE-based CNNs, but they did not account for bootstrapping. More recently, [16] demonstrated an FHE-based CNN implementation with bootstrapping operations. The packing method introduced in [16] utilizes a single dense packing, which can effectively minimize the number of bootstrapping operations. However, their method suffers from the high cost of frequent homomorphic rotation operations, which are required to maintain the fixed packing of [16] because input and output ciphertexts have different data orientations in FHE-based convolution layers; rotations account for more than 83% of the total convolution time in [16]. To mitigate this inefficiency, we designed HyPHEN, an FHE-based CNN library incorporating multiple packing methods. By offering flexibility in the packing methods, we minimize the rotation cost by choosing different packing methods before and after each convolution layer. Figure 1 depicts our construction of ResNet20. Each basic block consists of two different convolution layers (each consuming two multiplicative levels) arranged in an interleaved manner with AESPA (each consuming one multiplicative level) activation functions. We could minimize the cost of bootstrapping by placing the operation prior to the second activation function in a basic block, where the number of ciphertexts is small. Please refer to [11] for a more detailed description of HyPHEN. \begin{table} \begin{tabular}{c|r r r} \hline \hline Impl. & 100\(\times\) & TensorFHE\({}^{*}\) & Ours \\ \hline GPU & V100 & A100-SXM & A100-PCIe \\ Word size & 64 bits & 32 bits\({}^{\dagger}\) & 64 bits \\ \hline HMult (P1) & 17.40 & 6.65 & 11.30 \\ HMult (P2) & 2.96 & - & 2.59 \\ HMult (P3) & 7.96 & - & 5.47 \\ Boot (P4) & 328.25 & 250.45 & 171.27 \\ Boot (P5) & 526.96 & - & 355.84 \\ \hline \hline \end{tabular} \({}^{*}\) TensorFHE execution time is divided by 128 as it batches 128 operations. \({}^{\dagger}\) We were unable to reproduce a working FHE implementation with support for bootstrapping using the 32-bit word size due to the negative impact of small word sizes on the precision of RNS-CKKS. Recently proposed composite scaling [1] can offset the precision loss, but it requires using a 2\(\times\) larger \(L\) than that of 64-bit implementations for the same precision. Therefore, using the same parameter set greatly favors TensorFHE. \end{table} Table 1: Execution time (ms) comparison of our GPU library vs. state-of-the-art RNS-CKKS GPU acceleration studies, 100\(\times\) [10] and TensorFHE [6]. P1, P2, P3: fused, fused, and fused\({}_{\text{L}}\) parameters in Table 2 of [10]. P4, P5: parameters in Table 3 of [10]. Figure 1: ResNet20 model built with the HyPHEN basic blocks. Each layer’s multiplicative level consumption is also shown.
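To illustrate the packing-as-memory-format analogy, here is a toy sketch of two slot orders for one ciphertext. It is our own illustration; HyPHEN's actual layouts are more involved (see [11]).

```python
import numpy as np

x = np.arange(2 * 4 * 4).reshape(2, 4, 4)     # toy (C, H, W) activation

chw_slots = x.reshape(-1)                     # channel-first slot order
hwc_slots = x.transpose(1, 2, 0).reshape(-1)  # channel-last slot order

# A homomorphic convolution written for one slot order needs extra rotations
# to consume a ciphertext packed in the other order; choosing different
# packings before and after each convolution removes most of those rotations.
print(chw_slots[:5], hwc_slots[:5])
```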
## 5 Evaluation and Discussion We trained ResNet20 on the CIFAR-10 dataset and ResNet18 on the ImageNet dataset, using AESPA within PyTorch for both models. CNN models trained with AESPA show classification accuracies comparable to the original networks with ReLU: ResNet20 achieves an accuracy of 92.18%, whereas the original accuracy of ResNet20 is 92.15%. Similarly, ResNet18 achieves an accuracy of 69.78%, whereas the original model's accuracy is 69.75%. The latency of CNN inference for the two models is shown in Table 2. We reduce the inference latency of ResNet20 to a mere 1.4 seconds, 1,622\(\times\) faster than reported in [16]. Our GPU library reduces the execution time by 21.0\(\times\) compared to that of our 64-thread CPU implementation. Applying AESPA and HyPHEN results in an additional 6.12\(\times\) speedup. Meanwhile, our optimized ResNet18 inference takes 14.7 seconds. There still remains vast room for further improvement in FHE-based CNN performance. In particular, numerous specialized hardware accelerator designs [12, 13, 15, 20, 21] have been proposed recently. We estimated the performance of our CNN implementation on a state-of-the-art hardware accelerator proposal, SHARP [12], through simulation (see Table 2). Simulation results show that specialized hardware enables real-time FHE-based CNN inference by reducing the inference time to as low as 30 milliseconds, 75,700\(\times\) faster than the original single-threaded CPU implementation.
2304.09376
Physical Knowledge Enhanced Deep Neural Network for Sea Surface Temperature Prediction
Traditionally, numerical models have been deployed in oceanography studies to simulate ocean dynamics by representing physical equations. However, many factors pertaining to ocean dynamics seem to be ill-defined. We argue that transferring physical knowledge from observed data could further improve the accuracy of numerical models when predicting Sea Surface Temperature (SST). Recently, the advances in earth observation technologies have yielded a monumental growth of data. Consequently, it is imperative to explore ways in which to improve and supplement numerical models utilizing the ever-increasing amounts of historical observational data. To this end, we introduce a method for SST prediction that transfers physical knowledge from historical observations to numerical models. Specifically, we use a combination of an encoder and a generative adversarial network (GAN) to capture physical knowledge from the observed data. The numerical model data is then fed into the pre-trained model to generate physics-enhanced data, which can then be used for SST prediction. Experimental results demonstrate that the proposed method considerably enhances SST prediction performance when compared to several state-of-the-art baselines.
Yuxin Meng, Feng Gao, Eric Rigall, Ran Dong, Junyu Dong, Qian Du
2023-04-19T02:08:54Z
http://arxiv.org/abs/2304.09376v1
# Physical Knowledge Enhanced Deep Neural Network for Sea Surface Temperature Prediction ###### Abstract Traditionally, numerical models have been deployed in oceanography studies to simulate ocean dynamics by representing physical equations. However, many factors pertaining to ocean dynamics seem to be ill-defined. We argue that transferring physical knowledge from observed data could further improve the accuracy of numerical models when predicting Sea Surface Temperature (SST). Recently, the advances in earth observation technologies have yielded a monumental growth of data. Consequently, it is imperative to explore ways in which to improve and supplement numerical models utilizing the ever-increasing amounts of historical observational data. To this end, we introduce a method for SST prediction that transfers physical knowledge from historical observations to numerical models. Specifically, we use a combination of an encoder and a generative adversarial network (GAN) to capture physical knowledge from the observed data. The numerical model data is then fed into the pre-trained model to generate physics-enhanced data, which can then be used for SST prediction. Experimental results demonstrate that the proposed method considerably enhances SST prediction performance when compared to several state-of-the-art baselines. Sea surface temperature, physical knowledge, generative adversarial network, numerical model ## I Introduction Numerical models have been the traditional mathematical computation method for the prediction of ocean dynamics. According to statistics from the World Climate Research Program (WCRP), the research community has developed more than 40 ocean numerical models, each of which has its own advantages and characteristics. For instance, the regional ocean model system (ROMS) [1] has a powerful ecological adjoint module; the fast ocean atmosphere model (FOAM) [2] is highly effective in global coupled ocean-atmosphere studies; the finite-volume coastal ocean model (FVCOM) [3] can accurately fit the coastline boundary and the submarine topography; and the hybrid coordinate ocean model (HYCOM) [4] can implement three varieties of self-adaptive coordinates. These numerical models are not interchangeable, and their use depends on the specific application. It should be noted that the various processes of ocean dynamics described in numerical models are based on simplified equations and parameters due to our limited understanding of the ocean. The movements and changes in the real ocean are so diverse and complex that identifying the sources of a certain phenomenon becomes a real challenge. Therefore, searching for new relations or knowledge in historical data is of critical importance to improve the performance of numerical models in the study of ocean dynamics. In this paper, we refer to the capacity that can improve the numerical model as physical knowledge, and we assume that the historical data may possess physical knowledge hitherto undiscovered. Deep learning has the remarkable ability to learn highly complex functions, transforming the original data into a much higher level of abstraction. In [5], LeCun _et al._ described the fundamental principles and the key benefits of deep learning. Recently, deep learning has been applied to a variety of tasks, such as monitoring marine biodiversity [6, 7], target identification in sonar images [8, 9], and sea ice concentration forecasting [10]. 
For example, Berman _et al._[6] employed convolutional neural networks (CNNs) to classify spectrograms generated from sperm whale acoustic data. Allken _et al._[7] developed a CNN model for fish species classification, leveraging synthetic data for training data augmentation. Lima _et al._[8] proposed a deep transfer learning method for automatic ocean front recognition, extracting knowledge from deep CNN models trained on historical data. Xu _et al._[9] presented an approach combining deep generative networks and transfer learning for sonar shipwreck detection. Ren _et al._[10] proposed an encoder-decoder framework with fully convolutional networks that can predict sea ice concentration one week in advance with high accuracy. Through the application of deep learning-based methods to ocean research, significant improvements have been achieved in terms of classification and prediction performance. Fig. 1: Conceptual comparison of the numerical model and the proposed method on sea surface temperature (SST) prediction. (a) Numerical model. (b) Proposed method for SST prediction. A generative adversarial network is used to transfer the physical knowledge from the historical observed data to the numerical model, thereby improving the SST prediction performance. Due to the incomplete physical knowledge in numerical models and the weak generalization performance of neural networks, there have been some efforts to improve prediction performance by combining the advantages of numerical models and neural networks. In geographical science, this can be achieved in three different ways [11]: _1) Learning the parameters of the numerical model through neural networks._ Neural networks can optimally describe the observed scene from the detailed high-resolution model, but many parameters are difficult to deduce, making their estimation challenging. Brenowitz _et al._[12] trained a deep neural network based on unified physics parameterization and explained the influence of radiation and cumulus convection. _2) Replacing the numerical model with a neural network._ In this way, the deep neural network architecture can capture the specified physical consistency. Pannekoucke _et al._[13] translated physical equations into neural network architectures using a plug-and-play tool. _3) Analyzing the output mismatch between the numerical model and observation data._ Neural networks can be used to identify, visualize, and understand the patterns of the model inaccuracies, and to dynamically correct the deviation of the model. Patil _et al._[14] applied the discrepancy between the results of the numerical model and the observational data to train a neural network to predict the sea surface temperature (SST). Ham _et al._[15] trained a convolutional neural network based on transfer learning: they first trained their model on the numerical model data, and then used reanalysis data to calibrate it. However, the third approach has been found to suffer from a long-term bias problem, where the prediction performance deteriorates as the number of prediction days increases. To address the above issues, in this study we use generative adversarial networks (GANs) to transfer the physical knowledge from the historical observed data to the numerical model data, as illustrated in Fig. 1. Different from a traditional numerical model, the proposed method can correct the physical part of the numerical model data to improve the prediction performance. To be specific, as illustrated in Fig. 2, 
we first acquire the physical features from the observed data by using a prior network composed of an encoder and a GAN. Thereafter, we obtain the physics-enhanced SST by feeding the numerical model data into the pretrained model. Following that, the physics-enhanced SST data are adopted to train a spatial-temporal model for predicting SST. Meanwhile, we perform ablation experiments to take full advantage of the newly generated data. The main contributions of this paper are threefold: * To the best of our knowledge, we are the first to transfer physical knowledge from the historical observed data to the numerical model data by using GANs for SST prediction. * The difference between the physics-enhanced data and the predicted results is exploited to adjust the weights of the model during training. * The experimental results indicate that our proposed method can compensate for the shortage of physical knowledge in the numerical model and improve the prediction accuracy. The rest of the paper is organized as follows. Section II reviews the literature related to our method, and our method is detailed in Section III. The experimental results are presented in Section IV. Section V concludes this paper. ## II Background ### _Generative Adversarial Network_ In 2014, Goodfellow _et al._[16] put forward a novel framework of generative models trained in an adversarial manner. In their method, a generative model G and a discriminative model D are trained simultaneously. The model G indirectly captures the distribution of the input data through model D and generates similar data, while model D estimates the probability that its input samples came from the training data rather than from G. The training process of G is driven by the probability errors of D. In this adversarial process, G and D gradually strengthen each other's ability, leading to outstanding performance. GANs have been applied to physics-relevant tasks. For example, Yang _et al._[17] applied physics-informed GANs to deal with high-dimensional problems and solved stochastic differential equations, Litjens _et al._[18] produced more realistic coastal flood data by using GANs to learn the features in the numerical model data, and Zheng _et al._[19] inferred unknown spatial data with the potential physical law learned by GANs. However, these works used GANs to replace the entire numerical model, which is quite different from our work. In this paper, we adopt a GAN model to transfer the physical knowledge from the observed data to the numerical model data, in order to correct and improve the physical features in the numerical model. In addition, existing methods only learn a deterministic model without considering whether the code generated by the encoder is in accordance with the semantic knowledge learned by the GAN. ### _Convolutional Long Short-Term Memory_ In 2015, ConvLSTM [20] was proposed to solve the precipitation nowcasting problem. The network structure of ConvLSTM is able to capture local spatial features, as in classical convolutional neural networks (CNNs) [21], while building a sequential relationship inherited from Long Short-Term Memory (LSTM) blocks. Moreover, the authors conducted experiments showing that ConvLSTM performs better than LSTM in capturing spatial-temporal relationships. 
Apart from weather prediction tasks, ConvLSTM can be applied to various spatial-temporal sequence prediction problems, for example, action recognition [22, 23]. ### _Sea Surface Temperature Prediction_ Lins _et al._[24] investigated SST in the tropical Atlantic using an SVM. Patil _et al._[25] adopted an artificial neural network to predict the sea surface temperature; it performed well only for forecasts with lead times of 1 to 5 days, after which the accuracy declined. Zhang _et al._[26] applied LSTM to predict SST. Yang _et al._[27] predicted SST by building a fully connected LSTM model. From another perspective, Patil _et al._[28] used a wavelet neural network to predict daily SST, while Quala _et al._[29] proposed a patch-level neural network method for SST prediction. However, these methods only rely on data and ignore the physical knowledge behind them. Ham _et al._[15] adopted transfer learning to predict and classify ENSO events. In this work, we conduct comparative experiments, and the results show that our method reduces both the short-term errors and the long-term bias. ### _Data Augmentation_ Shorten _et al._[30] reviewed recent techniques of image data augmentation for deep learning. The purpose of data augmentation is to enhance the representation capability of neural networks and to better learn the distribution of the original data. In recent years, two kinds of data augmentation techniques have been commonly used: data transformation and resampling. Data transformation approaches include geometric transformation [31], color space transformation [32, 33, 34], random erasing [35, 36, 37], adversarial training [38, 39, 40, 41], and style transfer [42, 43, 44, 45]. Resampling techniques lay particular emphasis on composing new instances, such as image mixup [46, 47, 48], feature space enhancement [49, 50], and generative adversarial networks (GANs) [16]. Geometric transformations such as image flipping, cropping, rotation, translation, and noise injection [51] can achieve good performance; the experimental results in [30] showed that the random cropping technique performs particularly well. Color space transformation suffers from large memory consumption and long computing times. Random erasing techniques can improve network robustness in occlusion cases by using masks. Although adversarial training can also improve robustness, the finite number of natural adversarial samples largely limits the network performance in practice. The neural style transfer approach is only effective for specific tasks, so its practical application is limited. Feature space augmentation enables interpolating representations in the feature space. GAN-based augmentation techniques have been applied to achieve state-of-the-art network performance [52]. However, there is no effective data augmentation method that exploits the merits of both numerical models and deep learning. In this paper, we propose a novel data enhancement technique based on physical knowledge. The proposed technique achieves better performance than GAN-based augmentation. ## III Proposed Method Numerical models can predict the spatial distribution of SST together with its global teleconnections, and they perform well at short lead times for SST prediction. Nevertheless, we argue that transferring physical knowledge from the observed data can further improve the performance of numerical models for SST prediction. To this end, we adopt GANs to learn the physical knowledge in the observed data. 
Zhu _et al._[53] proposed a GAN inversion method that not only faithfully reconstructs the input data, but also ensures that the inverted latent code is semantically meaningful. They demonstrated that learning the pixel values of the target image alone is insufficient, and that the learned features are unable to represent the image at the semantic level. Inspired by this work, we design an encoder for the GAN to learn physical knowledge from the observed data; the combined model is referred to as the prior network. This prior network not only learns the pixel values of the target observed data, but also captures the physical information, which effectively improves the SST prediction accuracy. Next, we present the proposed method as follows: 1) Overview of the method, 2) Prior network, 3) SST prediction with enhanced data. Fig. 2: Illustration of the proposed SST prediction method. It consists of two stages: prior network training and SST prediction with enhanced data. In the first stage, a prior network is trained to generate physics-enhanced SST. In the second stage, the physics-enhanced SST data are used for SST prediction via ConvLSTM. ### _Overview of the Method_ In this subsection, we summarize the proposed SST prediction method and describe the input and output of each stage in detail. As illustrated in Fig. 2, the proposed SST prediction method consists of two stages: prior network training and SST prediction with enhanced data. **1) Prior network training.** This stage consists of three steps. In the first step, the observed SST (GHRSST data) is used for GAN model training. In the second step, the pretrained generator and the GHRSST data are used to train the encoder. In the third step, the pretrained generator and encoder are combined into the prior network. The prior network is used to transfer the physical knowledge from the observed data to the numerical model. The numerical model SST (HYCOM data) is then fed into the prior network to enhance its feature representations. **2) SST prediction with enhanced data.** The physics-enhanced data are fed into a ConvLSTM model for SST prediction. The SST of the next day, the next 3 days, and the next 7 days are predicted separately. It should be noted that most existing works [26, 27] only use the observed data for ConvLSTM training. By contrast, our method takes advantage of the physics-enhanced data for ConvLSTM training. Next, we describe prior network training and SST prediction with enhanced data in detail. ### _Stage 1: Prior Network Training_ We construct a prior network to learn the physical knowledge in the observed data and keep its semantic/physical information constant after training. As illustrated in Fig. 2, the prior network training comprises three steps: GAN model training, encoder training, and physics-enhanced data generation. Next, we provide detailed descriptions of each step. **GAN Model Training.** The GAN model is used to learn the data distribution of the observed SST. The objective function is as follows: \[\min_{G}\max_{D}L(D,G)= \operatorname*{\mathbb{E}}_{m\sim p_{data}(m)}[\log D(m)]\] \[+\operatorname*{\mathbb{E}}_{z\sim p_{z}(z)}[\log(1-D(G(z)))], \tag{1}\] where \(z\) represents the random vector fed into the generator \(G\), and \(m\) refers to the observed SST data. The generator \(G\) aims to capture the data distribution of the observed SST data. The discriminator \(D\) aims to estimate the probability that the input sample comes from the real data rather than from the generator \(G\). 
Here \(p_{data}\) denotes the observed data distribution, and \(p_{z}\) denotes the noise distribution. Through adversarial training, the GAN captures the physical information of the observed SST data, and hence the well-trained generator \(G\) can produce high-quality SST data. The training process of the GAN model is summarized in Algorithm 1. We train the model on the observed SST until the generator \(G\) captures the physical features of the observed SST data. **Encoder Training.** The observed SST data are fed into an encoder \(E\) to generate the latent code \(Z^{\text{enc}}\). Through adversarial training, the encoder \(E\) captures the semantic/physical information of the observed SST. It should be noted that the parameters of the generator \(G\) are fixed, and the discriminator \(D\) aims to distinguish the real samples from the generated samples. \(D\) and \(E\) are trained as follows: \[\min_{\Theta_{E}}L_{E}= ||m-G(E(m))||_{2}\] \[-\lambda_{adv}\operatorname*{\mathbb{E}}_{m\sim p_{data}}[D(G(E(m)))]\] \[+\lambda_{vgg}||F(m)-F(G(E(m)))||_{2}, \tag{2}\] \[\min_{\Theta_{D}}L_{D}= \operatorname*{\mathbb{E}}_{m\sim p_{data}}[D(G(E(m)))]\] \[-\operatorname*{\mathbb{E}}_{m\sim p_{data}}[D(m)]\] \[+\frac{\gamma}{2}\operatorname*{\mathbb{E}}_{m\sim p_{data}}[||\nabla_{m}D(m)||_{2}^{2}], \tag{3}\] where \(F(\cdot)\) represents feature extraction via the VGG network. VGG stands for the network proposed by the Visual Geometry Group [54], a classical deep convolutional neural network. The encoder training is described in Algorithm 2. The parameters of the generator \(G\) are fixed, while the parameters of the encoder \(E\) and the discriminator \(D\) are updated based on Eq. 2 and Eq. 3, respectively. **Physics-Enhanced Data Generation.** The well-trained encoder \(E\) and generator \(G\) are combined into the prior network. Numerical model data are fed into the prior network to generate physics-enhanced data, in which the incorrect components are restored. The motivation of Stage 1 is to construct a prior network which can rectify the incorrect components in the numerical model data. To this end, we first devise a GAN model which captures the data distribution of the observed SST and can generate high-quality SST data. Subsequently, the encoder is trained to guarantee that the generated latent codes preserve the semantic/physical information in the observed SST. We argue that, through adversarial learning, the prior network (consisting of the encoder and generator) can rectify the incorrect parts of the input data, since the physical knowledge has been embedded in the prior network. Consequently, in the third step, when the numerical model data are fed into the prior network, the embedded physical knowledge can correct the incorrect components in the numerical model data. ``` 0: Random noise vector \(z\), observed training data \(m\), number of epochs \(n_{1}\) 0: Generator \(G\) and discriminator \(D\) 1:while not converged do 2:for\(t=0,\cdots,n_{1}\)do 3: Sample image pair \(\{z^{i}\}_{i=1}^{N}\) and \(\{m^{i}\}_{i=1}^{N}\); 4: Update \(D\) by gradient descent based on Eq. 1; 5: Update \(G\) by gradient descent based on Eq. 1; 6:endfor 7:endwhile ``` **Algorithm 1** : GAN Model Training
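To make the two adversarial training stages concrete, the following is a minimal PyTorch sketch of one optimization step for each; the module handles (`G`, `D`, `E`, `F_vgg`) and the loss-weight magnitudes are placeholder assumptions, while the loss terms follow Eqs. (1)-(3).

```python
import torch
import torch.nn.functional as F

def gan_step(G, D, opt_G, opt_D, m, z):
    """One step of Eq. (1); D is assumed to output probabilities here."""
    real, fake = D(m), D(G(z).detach())
    loss_D = F.binary_cross_entropy(real, torch.ones_like(real)) \
           + F.binary_cross_entropy(fake, torch.zeros_like(fake))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()
    fake = D(G(z))  # non-saturating generator update
    loss_G = F.binary_cross_entropy(fake, torch.ones_like(fake))
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()

def encoder_step(E, G, D, F_vgg, opt_E, opt_D, m,
                 lam_adv=0.1, lam_vgg=5e-5, gamma=10.0):
    """One step of Eqs. (2)-(3); G stays frozen, D acts as a critic."""
    recon = G(E(m))
    # Eq. (2): pixel loss - adversarial term + VGG perceptual loss.
    loss_E = (m - recon).pow(2).mean() \
           - lam_adv * D(recon).mean() \
           + lam_vgg * (F_vgg(m) - F_vgg(recon)).pow(2).mean()
    opt_E.zero_grad(); loss_E.backward(); opt_E.step()
    # Eq. (3): critic loss with a gradient penalty on real samples.
    m_real = m.detach().requires_grad_(True)
    d_real = D(m_real)
    grad = torch.autograd.grad(d_real.sum(), m_real, create_graph=True)[0]
    loss_D = D(G(E(m)).detach()).mean() - d_real.mean() \
           + 0.5 * gamma * grad.pow(2).flatten(1).sum(1).mean()
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()
```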
### _Stage 2: SST Prediction with Enhanced Data_ ConvLSTM is an effective tool for predicting spatial-temporal data. It is a recurrent neural network that incorporates convolutional blocks in both the input-to-state and state-to-state transitions. Unlike the traditional LSTM layer, ConvLSTM not only preserves the sequential relationship but also extracts spatial features from the data. In this way, we can leverage it to capture robust spatial-temporal features. The ConvLSTM cell is formulated as follows: \[i_{t}= \sigma(W_{xi}*X_{t}+W_{hi}*H_{t-1}+W_{ci}\circ C_{t-1}+b_{i})\] \[f_{t}= \sigma(W_{xf}*X_{t}+W_{hf}*H_{t-1}+W_{cf}\circ C_{t-1}+b_{f})\] \[C_{t}= f_{t}\circ C_{t-1}+i_{t}\circ tanh(W_{xc}*X_{t}+W_{hc}*H_{t-1}+b_{c})\] \[O_{t}= \sigma(W_{xo}*X_{t}+W_{ho}*H_{t-1}+W_{co}\circ C_{t}+b_{o})\] \[H_{t}= O_{t}\circ tanh(C_{t}), \tag{4}\] where \(*\) denotes the convolution operation, \(\circ\) denotes the Hadamard product, \(W\) and \(b\) are the corresponding weights and biases, and \(H_{t-1}\) and \(X_{t}\) are the previous output and the current input, respectively. The input gate \(i_{t}\), forget gate \(f_{t}\), and output gate \(O_{t}\) protect and control the cell state \(C_{t}\). The three-dimensional tensors \(X_{t}\), \(i_{t}\), \(f_{t}\), \(C_{t}\), \(O_{t}\) and the two-dimensional matrix \(H_{t}\) carry the spatial information. The physics-enhanced SST data are fed into the ConvLSTM model for SST prediction as follows: \[L_{CL}=\|\text{ConvLSTM}(Ph_{i-t},...,Ph_{i-1})-Ph_{i}\|_{2}, \tag{5}\] where the subscript \(CL\) refers to the ConvLSTM model and the physics-enhanced data are denoted by \(Ph\). The past \(t\) days of physics-enhanced data are fed into the ConvLSTM, and the output is compared with \(Ph_{i}\). Here \(t\) denotes the number of past days used for prediction; it is a critical parameter that may affect the SST prediction performance, and a comprehensive analysis of \(t\) can be found in Section IV-B. The weights obtained by the generator are reused in Algorithm 2, where only the generator weights are fixed. The introduced encoder and the discriminator go through another training process on the observed SST, and their weights are updated based on Eq. 2 and Eq. 3, respectively. After training, the code generated by the encoder embodies the learned physical knowledge. ``` 0: Physics-enhanced numerical model training data \(Ph\), number of epochs \(n_{3}\) 0: Convolutional LSTM \(ConvLSTM\) 1:while not converged do 2:for\(t=0,\cdots,n_{3}\)do 3: Sample sequence image pair \(\{Ph^{i}\}_{i=1}^{N}\); 4: Update \(ConvLSTM\) by gradient descent based on Eq. 5; 5:endfor 6:endwhile ``` **Algorithm 3** : SST Prediction with Enhanced Data Finally, we acquire the data reinforced by physical knowledge using the above pretrained model. The weights of the generator and the encoder from Algorithm 2 are reused, and the numerical model SST is exploited to produce the physics-reinforced numerical model data. In Algorithm 3, the physical-knowledge-enhanced data are leveraged to train a spatial-temporal ConvLSTM model for SST prediction. In this paper, the SST of the next day, the next 3 days, and the next 7 days are predicted separately. For this part, we conducted an ablation study to make effective use of the reinforced data.
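Before turning to the experiments, Eq. (4) can be transcribed into a single recurrent cell, as sketched below in PyTorch; the stacked-gate convolution and the elementwise (Hadamard) peephole weights follow Eq. (4) directly, while the kernel size, the hidden width, and the 256\(\times\)256 spatial size (matching the heat maps used in Section IV) are illustrative choices.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Direct transcription of Eq. (4): one convolution produces the stacked
    gate pre-activations; W_ci, W_cf, W_co act elementwise on the cell state."""
    def __init__(self, in_ch, hid_ch, k=3, hw=(256, 256)):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)
        self.w_ci = nn.Parameter(torch.zeros(hid_ch, *hw))
        self.w_cf = nn.Parameter(torch.zeros(hid_ch, *hw))
        self.w_co = nn.Parameter(torch.zeros(hid_ch, *hw))

    def forward(self, x, h, c):
        gi, gf, go, gg = self.conv(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        i = torch.sigmoid(gi + self.w_ci * c)          # input gate
        f = torch.sigmoid(gf + self.w_cf * c)          # forget gate
        c_next = f * c + i * torch.tanh(gg)            # cell state update
        o = torch.sigmoid(go + self.w_co * c_next)     # output gate
        h_next = o * torch.tanh(c_next)                # hidden state
        return h_next, c_next
```

Unrolling such a cell over the past \(t\) days of physics-enhanced data yields the predictor minimized in Eq. (5).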
## IV Experimental Results and Analysis ### _Study Area and Experiment Settings_ The South China Sea is located in the Western Pacific Ocean, south of mainland China. Its area is about 3.5 million square kilometers, with an average depth of 1,212 meters. In this paper, the selected study area is (\(3.99^{\circ}\)N\(\sim\)\(24.78^{\circ}\)N, \(98.4^{\circ}\)E\(\sim\)\(124.4^{\circ}\)E). We use the high resolution satellite remote sensing data from GHRSST (Group for High Resolution Sea Surface Temperature) [55] as the observed data. GHRSST provides a variety of sea surface temperature data, including satellite swath coordinates, gridded data, and gap-free gridded products. Herein, we employ the gap-free gridded products, which are generated by combining complementary satellite and in situ observations within an optimal interpolation framework. HYCOM [56] is selected as the numerical model. Their spatial resolutions are 1/20\({}^{\circ}\times\)1/20\({}^{\circ}\) and 1/12\({}^{\circ}\times\)1/12\({}^{\circ}\), respectively, and the temporal resolution is one day. The data from May 2007 to December 2013 are used for training, while the remaining data from January 2014 to December 2014 are used for testing. It should be noted that we use the cloud-free data provided by GHRSST: the data were captured by microwave instruments which can penetrate clouds, so the data fully cover the study area. In addition, the acquisition time of every pixel in the GHRSST SST product is the same. Z-score standardization was utilized for preprocessing: \[z=\frac{x-\mu}{\sigma}, \tag{6}\] where \(x\) denotes the GHRSST and HYCOM model SST, \(z\) denotes the normalized data, and \(\mu\) and \(\sigma\) denote the mean value and standard deviation, respectively. We converted the data into \(256\times 256\) square-shaped heat maps. More specifically, the GHRSST data and a 512-dimensional random vector are utilized in the first step of prior network training. The size of the input GHRSST data is \(N\times H\times W\), where \(N\) represents the batch size, \(H\) indicates the height of the input data, and \(W\) denotes the width of the input data. In the second step of prior network training, we only employ GHRSST data for encoder training. The sizes of the inputs and outputs for both steps are \(N\times H\times W\). Similarly, in the third step of prior network training, the HYCOM SST data are fed into the pretrained model; here, the sizes of both the inputs and the outputs are \(N\times H\times W\). In our implementation, we set \(N\) to 2430, while \(H\) and \(W\) are both set to 256. We conducted extensive experiments on an NVIDIA GeForce 2080Ti system with 8 GPUs. The prior network uses the same network structure and configuration as in [53] to acquire the physical knowledge from the historical observed data. The obtained physical knowledge is then transferred to the numerical model data in order to restore and improve the incorrect components in the numerical model. The configuration of the ConvLSTM model used in this paper is the same as that of the ConvLSTM model in Shi's work [20]. The GHRSST SST dataset is utilized as the benchmark for comparison and assessment in this paper. ### _Influence of the Number of Past Days on SST Prediction_ As mentioned in Section III-C, \(t\) denotes the number of past days used for prediction. It is a critical parameter that may affect the SST prediction performance. In this paper, we attempt to predict the SST of the next day, the next three days, and the next seven days. We conducted extensive experiments to find the proper number of past days for future SST prediction. The Root Mean Square Error (RMSE) and the coefficient of determination (\(R^{2}\)) are applied as the evaluation criteria. Lower RMSE and higher \(R^{2}\) values indicate more accurate results. 
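The preprocessing of Eq. (6) and the two evaluation criteria reduce to a few NumPy helpers, sketched below; in practice, \(\mu\) and \(\sigma\) would be computed on the 2007-2013 training period and reused unchanged for the 2014 test data.

```python
import numpy as np

def zscore(x, mu=None, sigma=None):
    """Eq. (6): standardize an SST field, reusing training-set statistics."""
    mu = x.mean() if mu is None else mu
    sigma = x.std() if sigma is None else sigma
    return (x - mu) / sigma, mu, sigma

def rmse(y_true, y_pred):
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def r_squared(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot
```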
Table I lists the prediction results for the next day using the past one day's, three days', and five days' data, separately. It can be observed that the proposed model performs best when using the past five days' data, where the RMSE and \(R^{2}\) results are 0.3618 and 0.9967, respectively. Compared to the other two schemes, the RMSE improves by 0.0086 and 0.0028, and \(R^{2}\) by 0.001 and 0.0006. Hence, the past five days' data are adopted for the next one-day SST prediction. We analyze the influence of \(t\) on the next three-day SST prediction in Table II. It can be seen that the longer the historical record used, the better the prediction performance achieved. The RMSE using the past seven days' data achieves the best performance, improving by 0.0025 compared to that using the past five days' data. Meanwhile, the \(R^{2}\) using the past seven days' data is the best among the three schemes. Therefore, the past seven days' data were used for the next three-day SST prediction. The experimental results for the next seven-day SST prediction are presented in Table III. As can be seen, the prediction using the past ten days' data achieves the best performance. Therefore, we exploit the past ten days' data for the next seven-day SST prediction. Fig. 3: Illustration of the three models used in the ablation study. (a) Scheme A: The numerical model SST data are first fed into the ConvLSTM, and then the output is fed into the well-trained prior network; the order of the prior network and the ConvLSTM is reversed. (b) Scheme B: The prior network has not been well trained; specifically, the GAN model training in the prior network has been omitted. (c) The proposed method. ### _Ablation Study_ To verify the effectiveness of the prior network and GAN training, we conduct ablation experiments. As illustrated in Fig. 3, two variants are designed for comparison as follows: * _Scheme A_. The order of the prior network and the ConvLSTM is reversed. The numerical model SST data are first fed into the ConvLSTM, and then the output is fed into the well-trained prior network. * _Scheme B_. The prior network has not been well trained. Specifically, the GAN model training (the first step in Fig. 2) in the prior network training has been omitted. The experimental results are shown in Table IV. As can be seen, our method achieves the best RMSE and \(R^{2}\) values. Specifically, the proposed method outperforms Scheme A, which demonstrates that the correct ordering of the prior network and the ConvLSTM boosts the SST prediction performance. It is evident that the prior network effectively restores the incorrect components of the numerical model data, and the restored data perform better in SST prediction. Furthermore, the proposed method has superior performance over Scheme B, which demonstrates that the GAN modeling is an essential step. GAN modeling can learn the data distribution of the observed SST and helps the prior network capture better physical information from the observed SST. To sum up, in the proposed method we use adversarial learning for prior network pretraining, which can effectively transfer physical knowledge from the observed SST data to the prior network. It guides fast training convergence and improves the SST prediction performance. In the scatter plot for the next one-day prediction (Fig. 7), the points are roughly evenly distributed near the red line. Fig. 8 and Fig. 9 are scatter plots of the prediction results for the next three days and the next seven days, respectively. 
The scatter plots demonstrate the effectiveness of the proposed method for SST prediction. To further verify this, we compare the proposed method with seven closely related methods: ConvLSTM [20], Hybrid-NN [14], Hybrid-TL [15], Gen-END [57], VAE-GAN [58], Tra-NM, and Tra-ASL. The study area is (\(3.99^{\circ}\)N\(\sim\)\(24.78^{\circ}\)N, \(98.4^{\circ}\)E\(\sim 124.4^{\circ}\)E) for all of these methods. All of them used the training data from the past 5 days for the next 1-day prediction, the data from the past 7 days for the next 3-day prediction, and the data from the past 10 days for the next 7-day prediction. ConvLSTM, discussed in Section III-C, is an effective spatial-temporal model for SST prediction. Hybrid-NN utilizes the discrepancy between the observed data and the numerical model data to guide the training of deep neural networks. Hybrid-TL combines the advantages of numerical models and neural networks through transfer learning. Gen-END is a generative encoder that can be used for SST prediction. VAE-GAN integrates a variational autoencoder and a GAN, and it can capture high-level semantic features for SST prediction. HYCOM SST data are used to train the ConvLSTM model for the next 1-day, 3-day, and 7-day prediction (termed Tra-NM). Tra-ASL is a traditional assimilation method that exploits the correlations among multiple types of data (observed data and numerical model data). The GHRSST data are first utilized to train a ConvLSTM model, which serves as the baseline; this is a widely used data-driven approach for SST prediction. Hybrid-NN, Hybrid-TL, Gen-END, and VAE-GAN employ the GHRSST and HYCOM data for training. The HYCOM assimilation data [56] are used here, with a spatial resolution of 1/12\({}^{\circ}\times\)1/12\({}^{\circ}\). Fig. 7: Scatter plot comparing the next one-day SST prediction with the corresponding observed data Fig. 8: Scatter plot comparing the next three-day SST prediction with the corresponding observed data Fig. 9: Scatter plot comparing the next seven-day SST prediction with the corresponding observed data Our method improves and rectifies the incorrect components in the numerical model data by introducing physical knowledge from the historical observed data; the corrected numerical model data are referred to as the physics-enhanced data. To compare with the physics-enhanced data, the HYCOM assimilation data (Tra-ASL) and the HYCOM data (Tra-NM) are similarly used to train the ConvLSTM model. The training times of ConvLSTM, Hybrid-NN, Tra-NM, and Tra-ASL for the next 1-day, 3-day, and 7-day predictions are 1.8, 4.4, and 8.2 hours, respectively. The Hybrid-TL method trains the ConvLSTM model twice, and its training durations are 3.6, 8.8, and 16.4 hours for the three tasks, respectively. The VAE-GAN requires 181.6, 184.2, and 188.4 hours for training, while the Gen-END method requires almost the same amount of time, with 196.8, 199.3, and 203.2 hours for the three SST prediction tasks, respectively. The results for the next 1-day, 3-day, and 7-day SST prediction are presented in Table V. It is evident that the Tra-NM method yields unsatisfactory results compared to the other methods. This is likely due to the incorrect components in the HYCOM data, which adversely affect the SST prediction performance. The Hybrid-NN method also performs poorly, with the second highest average RMSE values among the models. 
The Hybrid-TL model performs better than the ConvLSTM for the next 1-day SST prediction, but not for the other two tasks. Our method achieves the best RMSE values and the highest \(R^{2}\) values. Compared to the ConvLSTM model, the average RMSE values of our method are effectively improved. This demonstrates that introducing physical knowledge from the observed data can restore the incorrect components in the numerical model data, thus improving the SST prediction accuracy. Fig. 10: Visualized results for the next one-day SST prediction. Fig. 11: Visualized results for the next three-day SST prediction. The first column shows the predicted SST. The ground truth observed SST data are shown in the second column. Their difference is presented in the third column. Fig. 10 presents the visualized results for the next one-day SST prediction, the observed SST data, and their differences. It can be seen that the predicted results are highly similar to the observed SST data across the entire region of the South China Sea. Fig. 11 displays the visualized results for the next three-day SST prediction. It is observed that there are some significant difference values in the Gulf of Tonkin and in other marginal areas of the South China Sea. Fig. 12 illustrates the visualized results for the next seven-day SST prediction. It is found that the major difference values mainly concentrate on the Gulf of Tonkin for the next seven-day prediction, and they are larger than the results of the two other tasks. ### _Limitation and Discussion_ From Fig. 7 to Fig. 9, it can be observed that there are some inaccuracies in the mid-range SST, which are visualized in Fig. 13. Bright pixels indicate large SST prediction errors, whereas dark pixels denote accurate SST predictions. As can be seen, these points are mainly located in the northwestern part of the Taiwan Strait, where the predicted sea surface temperature is lower than the observed data. The prediction error is mainly caused by the ConvLSTM model and the land mask. In our implementation, the land mask is applied to the study area. The ConvLSTM exploits the spatial and temporal features of the whole study area, and the features of the northwestern part of the Taiwan Strait are affected by the land mask to some extent, which results in prediction errors. If higher resolution training data could be obtained, the accuracy of predictions in this region would be further improved. In Figs. 11 and 12, it can be seen that there is no significant increase in errors with the lead day. This may be due to the fact that our method uses a sufficient amount of training data, and the deep neural networks are able to effectively capture the temporal features. Furthermore, the persistence of SST is also an important factor. ## V Conclusions and Future Work In this paper, we present an SST prediction approach based on physical knowledge correction, which utilizes historical observed data to refine and adjust the physical component of the numerical model data. Specifically, a prior network was employed to extract physical knowledge from the observed data. Subsequently, we generated physics-enhanced SST by applying the pretrained prior network to the numerical model data. Finally, the generated data were used to train the ConvLSTM network for SST prediction. 
Training the ConvLSTM network on the physics-enhanced data further improved the prediction performance, and the proposed method achieved the best performance compared with six state-of-the-art methods. Although the physical part of the numerical model data has been corrected by our proposed method, the prediction performance could be further improved if an interpretable model were employed. In the future, we plan to extract more pertinent knowledge from the deep networks and design interpretable models more suitable for practical applications.
2305.03200
Employing Hybrid Deep Neural Networks on Dari Speech
This paper is an extension of our previous conference paper. In recent years, there has been a growing interest among researchers in developing and improving speech recognition systems to facilitate and enhance human-computer interaction. Today, Automatic Speech Recognition (ASR) systems have become ubiquitous, used in everything from games to translation systems, robots, and more. However, much research is still needed on speech recognition systems for low-resource languages. This article focuses on the recognition of individual words in the Dari language using the Mel-frequency cepstral coefficients (MFCCs) feature extraction method and three different deep neural network models: Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), and Multilayer Perceptron (MLP), as well as two hybrid models combining CNN and RNN. We evaluate these models using an isolated Dari word corpus that we have created, consisting of 1000 utterances for 20 short Dari terms. Our study achieved an impressive average accuracy of 98.365%.
Jawid Ahmad Baktash, Mursal Dawodi
2023-05-04T23:10:53Z
http://arxiv.org/abs/2305.03200v1
# Employing Hybrid Deep Neural Networks on Dari Speech ###### Abstract This paper is an extension of our previous conference paper. In recent years, there has been a growing interest among researchers in developing and improving speech recognition systems to facilitate and enhance human-computer interaction. Today, Automatic Speech Recognition (ASR) systems have become ubiquitous, used in everything from games to translation systems, robots, and more. However, much research is still needed on speech recognition systems for low-resource languages. This article focuses on the recognition of individual words in the Dari language using the Mel-frequency cepstral coefficients (MFCCs) feature extraction method and three different deep neural network models: Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), and Multilayer Perceptron (MLP), as well as two hybrid models combining CNN and RNN. We evaluate these models using an isolated Dari word corpus that we have created, consisting of 1000 utterances for 20 short Dari terms. Our study achieved an impressive average accuracy of 98.365%. Dari, deep neural network, speech recognition, recurrent neural network, multilayer perceptron, convolutional neural network ## 1 Introduction Humans communicate with each other using speech, and Automatic Speech Recognition (ASR) tools have greatly facilitated human-machine interaction with advancements in technology and the widespread use of smartphones. ASR systems have numerous applications, including machine-human dialogue systems, robotics, translators, and more. However, research on ASR for many regional and Asian languages still faces many challenges, such as a lack of resources and a variety of dialects. Dari, one of Afghanistan's official languages, is spoken as a first or second language by about 32 million people worldwide. Although many Dari speakers live outside Afghanistan, they often face communication barriers due to their limited access to modern technology devices and applications. Furthermore, research on ASR for Dari and other regional languages is still in its infancy, with only a handful of studies to date. This article aims to develop a more robust and less error-prone isolated-word ASR system for Dari by employing five deep learning models: Multilayer Perceptron (MLP), Convolutional Neural Network (CNN), Long Short-term Memory (LSTM), a hybrid model of CNN and LSTM, and a hybrid model of CNN and Bidirectional LSTM (BLSTM). We then compare the performance of the different models to evaluate their efficiency. This study is novel in the field of Dari speech recognition as it compares the effectiveness of five state-of-the-art deep learning approaches. To the best of our knowledge, no published article has compared the performance of different models on Dari ASR. The primary challenge in developing an ASR system for Dari is the lack of available datasets and the vast variety of dialects. We address this challenge by developing and using our built-in isolated Dari words corpus that consists of 1000 utterances for 20 short Dari terms. Dawodi et al. [3] established an ASR system for Dari using MFCC for feature extraction and CNN for word classification, achieving an average accuracy of 88.2%. The current study builds upon their work by implementing additional deep learning techniques and achieves more than 10% higher accuracy. 
The next section discusses related work in this field, and Section 3 briefly introduces the corpus structure. Subsequently, Sections 4 to 6 give an overview of Dari speech recognition, including MFCC feature extraction and the deep neural network models. The results of this work are presented and discussed in Section 7. Finally, Section 8 concludes this article and outlines future work. As noted above, this paper is an extension of our previous conference paper. ## 2 Related Works Several studies have used machine learning and NLP techniques for speech recognition tasks, with a recent focus on deep learning techniques for developing ASR systems. However, most studies have focused on non-regional languages such as English. Mitra et al. [8] proposed a hybrid convolutional neural network (HCNN) for modeling acoustic and articulatory spaces. The HCNN model showed better performance than CNN/DNN, with a lower word error rate. Similarly, Grozdic and Jovicic [6] developed a new framework based on a deep denoising autoencoder and Teager-energy-based cepstral coefficients. They showed that Teager-energy-based cepstral features are more powerful than MFCC in describing whispered speech, achieving 31% higher accuracy than an MFCC and GMM-HMM baseline in the whisper scenario and a 92.81% word recognition rate. During the recent decade, some studies have focused on more regional languages. An ASR system for the Punjabi language was developed by Dua et al. [4], using MFCC for feature extraction and the HTK toolkit for recognition. They prepared a dataset containing 115 Punjabi terms uttered by 8 Punjabi native speakers. The overall mean accuracy achieved was between 94% and 96%. Bakht Zada and Rahimullah [17] developed a Pashto isolated-digit ASR to detect the Pashto digits from zero to nine. They used MFCC to extract features and a CNN to classify the digits; the dataset consisted of 50 sounds for each number. Their proposed model contained four convolutional layers, each followed by ReLU and max-pooling layers. They obtained an average accuracy of 84.17%. Dahl et al. [2] developed a novel context-dependent model for speech recognition using deep neural network techniques and proved that their proposed model is superior to previous context-dependent methods. Similarly, Abdel-Hamid [1] used CNNs in the speech recognition context and proved their efficiency in decreasing the error rate and increasing robustness. Graves et al. [5] proposed a hybrid model that combined bidirectional Long Short-term Memory (LSTM) RNNs with weight noise. They evaluated their model on the TIMIT phoneme recognition benchmark, reducing the error rate to 17.7%. A few recently published articles focus on ASR for the Persian language using different models. Sameti et al. [12] implemented a Persian continuous speech recognition system. They used MFCC with some modifications to learn features of the speech signals, and employed model-based techniques and speech enhancement approaches, such as spectral subtraction and Wiener filtering, to gain the desired robustness. Likewise, Hasanabadi et al. [7] used a database containing Persian isolated words and developed a Persian ASR in 2008. They then created a wheeled mobile robot navigated by Persian spoken commands. They investigated a simple Fast Fourier Transform (FFT) to capture attributes and an MLP to classify patterns. S. Malekzadeh et al. used MFCC for extracting features from Persian speech and an MLP for detecting vowel and consonant characters. 
The dataset comprised 20 categories of sounds from the utterances of 10 people. The proposed model demonstrated 61%-87% conversion accuracy. S. Malekzadeh utilized a deep MLP model to recognize Persian sounds to improve voice signal processing. Similarly, [9] established a Persian letter- and word-to-sound system. They utilized a rule-based approach for the first layer and MLPs for the other layers of the network model. Average accuracies of 60.7% and 83% were obtained for letter and word predictions, respectively. Recently, Veisi and Mani [15] used a hybrid model comprising a deep belief network (DBN) for extracting features and a DBLSTM with a Connectionist Temporal Classification (CTC) output layer to create the acoustic model. The study indicates that the DBLSTM provides higher accuracy in Persian phoneme recognition compared to traditional models. This paper presents the design of an ASR system for Dari isolated words. The work presented in this paper is based on five different deep learning models. ## 3 Dari Word Corpus In this research, we utilized a dataset that we had previously created [3]. The dataset consists of 1000 sounds, representing 20 short Dari terms spoken by 20 Dari native speakers of both genders and different dialects. The speakers recorded the utterances in a noise-free environment at their homes and offices using a smartphone audio recorder. The recorded sounds were then transferred to a PC using a USB cable, and we used Adobe Audition software to remove background noise and limit the length of each file to one second. Finally, we saved all audio files in the .wav format, with each word saved in a separate file bearing its name. All utterance files related to a single term are located within the same folder; for instance, all audio records associated with the "Salaam" term from all speakers are stored in the "fold 20" folder. As a result, there are 20 separate folders, each containing the utterances relevant to a single term. We ended up with 1000 usable utterances, as some of the files were corrupted. ## 4 Dari ASR System This study presents a state-of-the-art approach to Dari ASR. The proposed system uses audio signals as inputs, MFCC to extract features from the voice signals, and five separate deep learning models to recognize the utterances. Additionally, we divided the dataset into training and testing sets with different ratios to evaluate the performance of the architectures. Each method was trained and tested, and all of the models were able to accurately detect and output the spoken term. Feature extraction is a crucial step in analyzing data and identifying relationships between various objects, such as audio signals. Models are unable to recognize utterances directly, which is why speech signals need to be transformed into an understandable format. Additionally, features must be extracted from the raw waveform to minimize the variability in speech signals and produce input that is appropriate for modeling. This study utilizes MFCC, the dominant method for extracting key attributes [16]. The MFCC of a signal is a set of features that represents the overall shape of the spectral envelope.
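As a rough illustration of this step, the 40-coefficient MFCC inputs could be computed with Librosa (the library reported in Section 7) along the following lines; the fixed frame count of 174 mirrors the input shape quoted in Section 5, while the padding scheme and the mean pooling used to obtain the MLP's 40-dimensional vector are our assumptions.

```python
import numpy as np
import librosa

def extract_mfcc(path, n_mfcc=40, max_frames=174):
    """Load one utterance and compute a fixed-size MFCC matrix."""
    signal, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    # Pad (or clip) the time axis so every utterance has the same width.
    pad = max(0, max_frames - mfcc.shape[1])
    mfcc = np.pad(mfcc, ((0, 0), (0, pad)))[:, :max_frames]
    # The MLP variant takes the per-coefficient average as a 40-d vector.
    return mfcc, mfcc.mean(axis=1)
```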
## 5 Neural Network Models In our conference paper, we used three deep neural network models. Our deep CNN model consists of four blocks, each containing a 2D convolutional layer followed by a max-pooling layer, except for the last block, which uses global average pooling; a fully connected layer at the end serves as the output layer. The input of the first convolution layer is a \(40\times 174\) 2D tensor (representing the MFCC features), which is convolved with 16 convolutional kernels. The next convolution layer is convolved with 32 filters, and we doubled the number of kernels in each subsequent layer. The output of the preceding layer is the input of the succeeding layer. Every convolutional filter generates a feature map from its input; for example, the first layer generates 16 feature maps while the last layer provides 128 feature maps. Each convolution layer uses a kernel size of 2. We used the Rectified Linear Unit (ReLU), a well-known activation function, to overcome the vanishing gradient problem [8]. In the next step, max pooling is applied to downsample the feature map by dividing it into small non-overlapping rectangular areas, usually called windows (Ide & Kurita, 2017). Studies have shown that max pooling is more effective than other pooling types (Ide & Kurita, 2017). Hence, we used max pooling with a pool size of 2 instead of other pooling layers for the first three blocks. Usually, the performance of a model is affected by noise samples during training; therefore, dropout is applied to address this problem and prevent the neural network from overfitting [13]. We tried various probability values (in the range of 0 to 0.25) for dropout. As a result, the most effective probability value is 0.2, which enhances the accuracy and minimizes the loss given the limited number of samples per term in our dataset. The output size is the same as the total number of classes, which is 20, each related to a certain word. We examined the impact of different numbers of epochs and batch sizes on this model during training. Consequently, the optimal result was obtained with 80 epochs and a batch size of 84. The MLP model consists of two hidden layers, each followed by the ReLU activation function. Every layer, except the output layer, has 300 neurons. Afterwards, a dropout of 0.2 is applied to decrease the likelihood of overfitting on the training data by randomly excluding nodes in each cycle. The input shape for this model is a one-dimensional feature vector that contains 40 features. The output layer contains 20 nodes, each representing a class, and is followed by the SoftMax activation function, as in the CNN model. The reason for selecting SoftMax as the activation function of the last layer is that it normalizes the outputs so that they sum to 1. Finally, the model was trained with a batch size of 32 and has a total of 108,620 parameters. The structure of the implemented RNN model is sequential. It consists of one Long Short-Term Memory (LSTM) block with a size of 64, which is followed by a dropout layer with a rate of 0.2. The LSTM uses the Tanh activation function for the cell state and the sigmoid activation function for the node output. The next layer is a Flatten layer, which collapses the spatial dimensions of the input into the channel dimension. The output of the Flatten layer is then passed to the output layer, which consists of 20 nodes. Finally, the output layer is followed by the SoftMax activation function, similar to the MLP and CNN models. The best performance was obtained using 100 epochs with a batch size of 64. We trained each model with several different numbers of epochs and batch sizes, and the values that resulted in the highest accuracy were selected. 
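A Keras sketch consistent with the CNN description above is given below; the exact placement of dropout and the \(40\times 174\times 1\) input shape are our reading of the text rather than the authors' released code.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(input_shape=(40, 174, 1), n_classes=20):
    model = keras.Sequential([keras.Input(shape=input_shape)])
    for filters in (16, 32, 64):  # first three blocks: conv + max pooling
        model.add(layers.Conv2D(filters, kernel_size=2, activation="relu"))
        model.add(layers.MaxPooling2D(pool_size=2))
        model.add(layers.Dropout(0.2))
    model.add(layers.Conv2D(128, kernel_size=2, activation="relu"))
    model.add(layers.GlobalAveragePooling2D())  # last block: global pooling
    model.add(layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```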
## 6 Hybrid Models We implemented two distinct hybrid models to evaluate the performance of hybrid deep learning techniques in our Dari ASR system. The first model is composed of four 2D convolutional hidden layers followed by a single LSTM hidden layer, with a fully connected layer and an output layer at the end. The second hybrid model is like the first one; however, we substitute the LSTM with a Bidirectional LSTM and flatten the output of the LSTM before feeding it to the output layer. Each hidden layer is followed by the ReLU activation function, and a 2D max-pooling layer follows every 2D convolutional layer, as in the deep CNN structure. The capacity of the LSTM is 64 in both models. We manually fine-tuned the number of layers, the dropout rate, the number of epochs, and the batch size to reach the optimal values. As a result, a 20% dropout is applied after each max-pooling layer. The models were separately trained for 100 epochs using a batch size of 64 to achieve the best results. 
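A corresponding Keras sketch of the second hybrid model (CNN\(+\)BLSTM) is given below; collapsing the spatial axes into a sequence with a Reshape layer before the bidirectional LSTM is our assumption about how the convolutional features are sequenced.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn_blstm(input_shape=(40, 174, 1), n_classes=20):
    model = keras.Sequential([keras.Input(shape=input_shape)])
    for filters in (16, 32, 64, 128):  # four conv blocks, each with pooling
        model.add(layers.Conv2D(filters, kernel_size=2, activation="relu"))
        model.add(layers.MaxPooling2D(pool_size=2))
        model.add(layers.Dropout(0.2))
    # Collapse height/width into a sequence of feature vectors (assumption).
    model.add(layers.Reshape((-1, model.output_shape[-1])))
    model.add(layers.Bidirectional(layers.LSTM(64, return_sequences=True)))
    model.add(layers.Flatten())
    model.add(layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```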
In this study, we aimed to identify the best-performing models by modifying various parameters such as dense layers, pooling layers, dropouts, testing and training iterations, batch size, and kernel size. Based on the results obtained, we selected the optimal options for our research. Among the five models evaluated, the CNN\(+\)BLSTM model achieved the most impressive average accuracy results of 99.98% during training and 98.365% during testing, surpassing even the LSTM model, which showed a higher average accuracy during training (100%) but a lower accuracy during testing, as shown in Table 1. Our research outperformed previous studies on Dari isolated-word speech recognition. For instance, Dawodi et al. [3] employed MFCC and CNN to develop a Dari ASR for isolated words, but our model achieved over 10% higher accuracy on the test data. Moreover, our methods also exhibited lower evaluation loss and higher accuracy compared to other recent ASR studies. Table 7 presents a comparison of our research with some of the latest and innovative ASR systems.

## 8 Conclusions

This paper is an extension of our conference paper and presents a study on the effectiveness of five deep neural network models, namely MLP, CNN, RNN, CNN\(+\)LSTM, and CNN\(+\)BLSTM, for recognizing isolated Dari words through audio signal features extracted using MFCC. The results showed that CNN and hybrid CNN models outperformed the other models in Dari ASR. While this study provides a good starting point for Dari natural language processing, further research is needed to develop more accurate and continuous Dari ASR models using hybrid models and novel techniques. However, this would require a larger and more diverse dataset.

\begin{table} \begin{tabular}{|l|c|c|c|c|} \hline **Model** & **Average Training Accuracy** & **Average Training Loss** & **Average Testing Accuracy** & **Average Testing Loss** \\ \hline **MLP** & 99.73 & 0.028 & 97.02 & 0.129 \\ \hline **CNN** & 99.93 & 0.018 & 98.037 & 0.105 \\ \hline **LSTM** & 100 & 0 & 97.712 & 0.09 \\ \hline **CNN+LSTM** & 99.95 & 0.006 & 98.184 & 0.079 \\ \hline **CNN+BLSTM** & 99.98 & 0.006 & 98.365 & 0.07 \\ \hline \end{tabular} \end{table} Table 1: Training and testing accuracy and loss

Tables 2-6: Sum of the 10 confusion matrices for the MLP, CNN, LSTM, CNN+LSTM, and CNN+BLSTM models.

\begin{table} \begin{tabular}{|l|l|l|l|l|} \hline **Ref.** & **Method** & **Language** & **Dataset** & **Accuracy** \\ \hline – & …Temporal Classification (CTC) \(+\) CNN & – & …set & – \\ \hline [8] & Hybrid convolutional neural network (HCNN) & English & English isolated word speech corpus along with TVs & – \\ \hline [15] & MFCC \(+\) deep belief network (DBN); LSTM, BLSTM, DLSTM, and DBLSTM & Persian & Farsdat & HMM: 75.2\%, LSTM: 77\% \\ \hline This work & MFCC; CNN, LSTM, MLP, CNN+LSTM & Dari & Our built-in Dari speech dataset & Average accuracies: CNN: 98.037\%, LSTM: 97.712\%, MLP: 97.02\% \\ \hline \end{tabular} \end{table} Table 7: Comparison of our research with recent ASR systems
2305.16625
Set-based Neural Network Encoding
We propose an approach to neural network weight encoding for generalization performance prediction that utilizes set-to-set and set-to-vector functions to efficiently encode neural network parameters. Our approach is capable of encoding neural networks in a modelzoo of mixed architecture and different parameter sizes as opposed to previous approaches that require custom encoding models for different architectures. Furthermore, our \textbf{S}et-based \textbf{N}eural network \textbf{E}ncoder (SNE) takes into consideration the hierarchical computational structure of neural networks by utilizing a layer-wise encoding scheme that culminates in encoding all layer-wise encodings to obtain the neural network encoding vector. Additionally, we introduce a \textit{pad-chunk-encode} pipeline to efficiently encode neural network layers that is adjustable to computational and memory constraints. We also introduce two new tasks for neural network generalization performance prediction: cross-dataset and cross-architecture. In cross-dataset performance prediction, we evaluate how well performance predictors generalize across modelzoos trained on different datasets but of the same architecture. In cross-architecture performance prediction, we evaluate how well generalization performance predictors transfer to modelzoos of different architecture. Experimentally, we show that SNE outperforms the relevant baselines on the cross-dataset task and provide the first set of results on the cross-architecture task.
Bruno Andreis, Soro Bedionita, Sung Ju Hwang
2023-05-26T04:34:28Z
http://arxiv.org/abs/2305.16625v1
# Set-based Neural Network Encoding ###### Abstract We propose an approach to neural network weight encoding for generalization performance prediction that utilizes set-to-set and set-to-vector functions to efficiently encode neural network parameters. Our approach is capable of encoding neural networks in a modelzoo of mixed architecture and different parameter sizes as opposed to previous approaches that require custom encoding models for different architectures. Furthermore, our **S**et-based **N**eural network **E**ncoder (SNE) takes into consideration the hierarchical computational structure of neural networks by utilizing a layer-wise encoding scheme that culminates in encoding all layer-wise encodings to obtain the neural network encoding vector. Additionally, we introduce a _pad-chunk-encode_ pipeline to efficiently encode neural network layers that is adjustable to computational and memory constraints. We also introduce two new tasks for neural network generalization performance prediction: cross-dataset and cross-architecture. In cross-dataset performance prediction, we evaluate how well performance predictors generalize across modelzoos trained on different datasets but of the same architecture. In cross-architecture performance prediction, we evaluate how well generalization performance predictors transfer to modelzoos of different architecture. Experimentally, we show that SNE outperforms the relevant baselines on the cross-dataset task and provide the first set of results on the cross-architecture task. ## 1 Introduction Recently, deep learning methods have been applied to a wide range of fields and problems. With this broad range of applications, large amounts of datasets are continually being made available in the public domain together with neural networks trained on these datasets. Given this abundance of trained neural network models, the following curiosity arises: what can we deduce about these networks with access only to the parameter values? More generally, can we predict properties of these networks such as generalization performance on a testset, the dataset on which the model was trained, the choice of optimizer and learning rate, the number of training epochs, choice of model initialization etc. through an analysis of the model parameters? The ability to infer such fundamental properties of trained neural networks using only the parameter values has the potential to open up new application and research paradigms. In this work, we tackle a specific version of this problem, namely, that of predicting the generalization performance on a testset of a neural network given access only to the parameter values at the end of the training process. The first approach to solving this problem, proposed by Unterthiner et al. (2020), involves computing statistics such as the mean, standard deviation and quantiles of each layer in the network, concatenating them to a single vector that represents the neural network encoding, and using this vector to predict the performance of the network. Another approach, also proposed as a baseline in Unterthiner et al. (2020), involves flattening all the parameter values of the network into a single vector which is then fed as input to layers of multilayer perceptrons (MLPs) to predict the network's performance. An immediate consequence of this approach is that it is practical only for moderately sized neural network architectures.
Additionally, this approach ignores the hierarchical computational structure of neural networks through the weight vectorization process. The second, and most recent, approach to this problem, proposed by Zhou et al. (2023), takes a geometric approach by building neural network weight encoding functions, termed neural functionals, that respect symmetric properties of permutation invariance and equivariance of the hidden layers of multilayer perceptrons under the action of an appropriately applied permutation group. While this approach respects these fundamental properties in the parameter space, its application is restricted, strictly, to multilayer perceptrons. Also, even when relaxations are made to extend this method to convolutional networks and combinations of convolutional layers and multilayer perceptrons, these work only under the strict condition that the channel size of the last convolutional layer matches that of the first linear layer. Hence it is clear that while the method proposed by Zhou et al. (2023) enjoys nice theoretical properties, its application is limited to only a small subset of carefully chosen architectures. Moreover, both approaches (Unterthiner et al., 2020; Zhou et al., 2023) have a fundamental limitation: their encoding methods are applicable only to a single fixed, pre-chosen neural network architecture. Once the performance predictor is trained, in the case of Unterthiner et al. (2020), and the neural network encoder of Zhou et al. (2023) is defined, they cannot be used to predict the performance of neural networks of a different architecture. Consequently, evaluating these models on diverse architectures is infeasible without training a new generalization performance predictor for each architecture. To this end, we propose a Set-based Neural Network Encoder (SNE) for predicting the performance of neural networks given only the model parameters that is agnostic to the network architecture. Specifically, we treat the neural network encoding problem from a set encoding perspective by utilizing compositions of _set-to-set_ and _set-to-vector_ functions. However, the parameters of neural networks are ordered. To retain this order information, we utilize positional encoding (Vaswani et al., 2017) at various stages in our model. Also, our model incorporates the hierarchical computational structure of neural networks in the encoder design by encoding each layer independently, culminating in a final encoding stage that compresses all the layer-wise information into a single encoding vector used to predict the network performance. To handle the issue of large and variable parameter sizes efficiently, we incorporate a _pad-chunk-encode_ pipeline that is parallelizable and can be used to iteratively encode layer parameters. In terms of evaluation, we introduce two new tasks: cross-dataset neural network performance prediction and cross-architecture neural network performance prediction. In cross-dataset neural network performance prediction, we fix the neural network architecture used to generate the training data and evaluate how well the performance predictors transfer to the same architecture trained on different datasets. For cross-architecture neural network performance prediction, we fix only the architecture for generating the training data and evaluate the performance of the predictors on architectures unseen during training.
Our contributions are as follows: Figure 1: **Legend** (icons omitted): Padding; Set-to-Set Function; Set-to-Vector Function; Layer-Level Encoder; Layer-Type Encoder. **Concept:** _(left)_ Given the weights of a layer, SNE begins by padding and chunking the weights into _chunksizes_. Each chunk of the layer weight goes through a series of set-to-set and set-to-vector functions to obtain the chunk representation vector. Layer _level_ and layer _type_ positional encodings are used to inject structural information of the network at each stage of the chunk encoding process. All chunk encoding vectors are encoded together to obtain the layer encoding. _(right)_ All layer encodings in the neural network are encoded to obtain the neural network encoding vector, again using a series of set-to-set and set-to-vector functions. This vector is then used to predict the generalization performance of the neural network. * We develop a Set-based Neural Network Encoder (SNE) for predicting the performance of neural networks given access only to parameter values that is capable of encoding neural networks of arbitrary architecture and takes into account the hierarchical computational structure of neural networks. * We introduce the cross-dataset neural network performance prediction task where we evaluate how well neural network performance predictors transfer across neural networks trained on different datasets. * We introduce the cross-architecture neural network performance prediction task where we evaluate how well neural network performance predictors trained on a specific architecture transfer to architectures unseen during training. * We benchmark our method, SNE, against the relevant baselines on the cross-dataset task and show significant improvement over the baselines. * Finally, we provide the first set of results on the cross-architecture task using our set-based neural network encoder, SNE. ## 2 Related Work **Set Functions:** Neural networks that operate on set-structured data have recently been used in many applications ranging from point cloud classification to set generation (Kim et al., 2021). Set functions are required to respect symmetric properties such as permutation invariance and equivariance. In DeepSets (Zaheer et al., 2017), a family of sum-decomposable functions is introduced that are equivariant in set-to-set applications and invariant in set-to-vector applications. In Set Transformers (Lee et al., 2019), a class of attention-based Set-to-Set and Set-to-Vector functions is introduced that is more expressive and capable of modeling pairwise and higher-order interactions between set elements. Recent works such as Bruno et al. (2021) and Willette et al. (2023) deal with the problem of processing sets of large cardinality in the limited memory/computational budget regime. In this work, we utilize the class of set functions developed in Lee et al. (2019) to develop a neural network encoder for performance prediction that is agnostic to specific architectural choices. Our set-based formulation allows us to build such an encoder, capable of handling neural network weights of arbitrary parameter sizes. This is different from recent approaches to neural network encoding for performance prediction that can encode only parameters of a single architecture. **Neural Network Performance Prediction From Weights:** Predicting the performance of neural networks given access only to the trained parameters is a relatively new topic of research introduced by Unterthiner et al. (2020).
In Unterthiner et al. (2020), two methods are proposed for predicting the generalization performance of neural networks: the first involves flattening the weights of the network into a single vector and processing it using multiple layers of MLPs to obtain an encoding vector which is then used to predict the performance. The second involves computing the statistics of each layer in the network, such as mean, variance, quantiles etc., and concatenating them into a single vector that is then used for predicting the performance of the network. The most recent approach that we are aware of, Zhou et al. (2023), proposes a neural network weight encoder that is invariant or equivariant, depending on the application, to an appropriately applied permutation group on the hidden layers of MLPs. Two variants of their model are provided: one which operates only on the hidden layers and conforms strictly to the theory of permuting MLP hidden neurons (Hecht-Nielsen, 1990), and a relaxation that assumes that the neurons of both the input and output layers of MLPs are permutable. Additionally, extensions are provided for convolutional layers. Our approach, SNE, is directly comparable to these methods on the neural network performance prediction task. However, unlike the methods of Unterthiner et al. (2020) and Zhou et al. (2023), which operate only on neural networks of fixed architecture, and consequently a fixed number of parameters, SNE is capable of encoding networks of arbitrary architecture. Moreover, SNE utilizes the hierarchical computational structure of neural networks by encoding, iteratively or in parallel, from the input to the output layers. Furthermore, we go further than the experimental evaluation in Unterthiner et al. (2020) and Zhou et al. (2023) by introducing two new tasks: cross-dataset and cross-architecture neural network performance prediction. Unterthiner et al. (2020) and Zhou et al. (2023) can only be benchmarked on the cross-dataset task where all networks in the modelzoos are of the same architecture. Their restriction to a single fixed architecture makes cross-architecture evaluation impossible. Our method, SNE, on the other hand, can be used for both tasks. ## 3 Set-based Neural Network Encoding ### Preliminaries We have access to a dataset \(D=\{(x_{1},y_{1}),\ldots,(x_{n},y_{n})\}\) where for each \((x_{i},y_{i})\) pair, \(x_{i}\) represents the weights of a neural network architecture \(a\), sampled from a set of architectures \(\mathcal{A}\), and \(y_{i}\) corresponds to some property of \(x_{i}\) after it has been trained on a specific dataset \(d\). \(y_{i}\) can be properties such as generalization gap, training loss, the learning rate used to train \(x_{i}\), or even the number of epochs, choice of weight initialization, and optimizer used to train \(x_{i}\). Henceforth, we refer to \(D\) as a _modelzoo_. For each \(x_{i}\in D\), \(x_{i}=[w_{i}^{0},\ldots,w_{i}^{|x_{i}|}]\) where \(w_{i}^{j}\) represents the weights (parameters) of the \(j\)th layer of the neural network \(x_{i}\), and \(|x_{i}|\) is the total number of layers in \(x_{i}\). Consequently, \(w_{i}^{0}\) and \(w_{i}^{|x_{i}|}\) are the input and output layers of \(x_{i}\) respectively. Additionally, we introduce the \(\texttt{Flatten}:x_{i}\rightarrow\mathbf{R}^{d_{i}}\) operation, which takes as input the weights of a neural network and returns the flattened weights, where \(d_{i}\) is the total number of parameters in \(x_{i}\).
The neural network encoding problem is defined such that we seek to compress \(x_{i}\in\mathbf{R}^{d_{i}}\) to a compact representation \(z_{x_{i}}\in\mathbf{R}^{h}\) such that \(z_{x_{i}}\) can be used to predict the properties \(y_{i}\) of \(x_{i}\) with \(h\ll d_{i}\). In what follows, we present the details of our Set-based Neural Network Encoding (SNE) method, capable of encoding the weights of neural networks of arbitrary architecture, that takes into account the hierarchical computational structure of the given architecture and comes with efficient methods for processing weights of high dimension. ### Handling High Dimensional Layer Weights via Chunking For a given layer \(w_{i}^{j}\in x_{i}\), the dimension of \(w_{i}^{j}\), \(|w_{i}^{j}|\), can be very large. For instance, when considering linear layers, flattening the weights can result in a tensor that requires large compute memory to be processable by another neural network. To resolve this issue, we resort to _chunking_. Specifically, for all layers \(w_{i}^{j}\in x_{i}\), we perform the following operations: \[\hat{w}_{i}^{j}=\texttt{Chunk}(\texttt{Pad}(\texttt{Flatten}(w_{i}^{j}),c),c) =\{w_{i}^{j_{0}},\ldots,w_{i}^{j_{q}}\}, \tag{1}\] where for any \(w_{i}^{j_{t}}\in\hat{w}_{i}^{j}\), \(w_{i}^{j_{t}}\in\mathbf{R}^{c}\). Here, \(c\) is the _chunksize_, fixed for all layer types in the neural network, and \(t\in[0,\ldots,q]\). The padding operation \(\texttt{Pad}(w_{i}^{j},c)\) appends zeros, if required, to extend \(w_{i}^{j}\) and make its dimension a multiple of the chunksize \(c\). To distinguish padded values from actual weight values, each element of \(\hat{w}_{i}^{j}\) has a corresponding set of masks \(\hat{m}_{i}^{j}=\{m_{i}^{j_{0}},\ldots,m_{i}^{j_{q}}\}\). Note that with this padding and subsequent chunking operation, each element in \(\hat{w}_{i}^{j}\) is now small enough, for an appropriately chosen chunksize \(c\), to be processed. Moreover, all the elements in \(\hat{w}_{i}^{j}\) can be processed in parallel. The modelzoos we consider in the experimental section are populated by neural networks with stacks of convolutional and linear layers. For each such layer, we apply the padding and chunking operation differently. For a linear layer \(w_{i}^{j}\in\mathbf{R}^{\texttt{out}\times\texttt{in}}\), where out and in are the output and input dimensions respectively, we apply the flattening operation on both dimensions followed by padding and chunking. However, for a convolutional layer \(w_{i}^{j}\in\mathbf{R}^{\texttt{out}\times\texttt{in}\times\texttt{k}}\), we apply the flattening, padding, and chunking operations only to the kernel dimensions k. Finally, we note that for layers with bias values, we apply the procedure detailed above independently to both the weights and biases. ### Independent Chunk Encoding The next stage in our Set-based Neural Network Encoding pipeline involves encoding, independently, each chunk of weights in \(\hat{w}_{i}^{j}=\{w_{i}^{j_{0}},\ldots,w_{i}^{j_{q}}\}\). For each \(w_{i}^{j_{t}}\in\hat{w}_{i}^{j}\), we treat the \(c\) elements as members of a set. However, it is clear that \(w_{i}^{j_{t}}\) has an order in its sequence, _i.e._, it is an ordered set. We remedy this by providing this order information via positional encoding.
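As a concrete illustration, the following is a minimal PyTorch sketch of the pad-and-chunk operation in Equation 1; the zero-padding detail, the boolean mask convention, and the function name are assumptions for illustration only.

```python
import torch

def pad_and_chunk(layer_weight: torch.Tensor, chunksize: int):
    """Flatten, zero-pad to a multiple of `chunksize`, and chunk (Eq. 1)."""
    flat = layer_weight.flatten()              # Flatten(w_i^j)
    pad_len = (-flat.numel()) % chunksize      # zeros needed to reach a multiple of c
    mask = torch.ones(flat.numel() + pad_len, dtype=torch.bool)
    if pad_len > 0:
        mask[-pad_len:] = False                # mark padded positions
        flat = torch.cat([flat, flat.new_zeros(pad_len)])
    # each row is one chunk w_i^{j_t} in R^c; rows can be encoded in parallel
    return flat.view(-1, chunksize), mask.view(-1, chunksize)

# e.g., a linear layer with out=128, in=300 and chunksize c=256
chunks, masks = pad_and_chunk(torch.randn(128, 300), 256)  # chunks: (150, 256)
```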
Concretely, for a given \(w_{i}^{j_{t}}\in\mathbf{R}^{c\times 1}\), we first model the pairwise relations between all \(c\) elements using a _set-to-set_ function \(\Phi_{\theta_{1}}\) to obtain: \[\hat{w}_{i}^{j_{t}}=\Phi_{\theta_{1}}(w_{i}^{j_{t}})\in\mathbf{R}^{c\times h}. \tag{2}\] That is, \(\Phi_{\theta_{1}}\) captures pairwise correlations in \(w_{i}^{j_{t}}\) and projects all elements (weight values) to a new dimension \(h\). Given \(\hat{w}_{i}^{j_{t}}\in\mathbf{R}^{c\times h}\), we inject two kinds of positionally encoded information. The first encodes the _layer type_ in a list of layers, _i.e._, linear or convolutional for the modelzoos we experiment with, to obtain: \[\hat{w}_{i}^{j_{t}}=\text{PosEnc}_{Layer}^{Type}(\hat{w}_{i}^{j_{t}})\in\mathbf{R}^{c\times h}. \tag{3}\] Here we abuse notation and assign the output of \(\text{PosEnc}(\cdot)\) to \(\hat{w}_{i}^{j_{t}}\) to convey the fact that the \(\hat{w}_{i}^{j_{t}}\)'s are modified in place and to simplify the notation. Also, all \(\text{PosEnc}(\cdot)\)s are variants of the positional encoding method introduced in Vaswani et al. (2017). Next we inject the layer level information. Since neural networks are computationally hierarchical, starting from the input to the output layer, we include this information to distinguish chunks \(w_{i}^{j_{t}}\) from different layers. Specifically, we compute: \[\hat{w}_{i}^{j_{t}}=\text{PosEnc}_{Layer}^{Level}(\hat{w}_{i}^{j_{t}})\in\mathbf{R}^{c\times h}, \tag{4}\] where the input to \(\text{PosEnc}_{Layer}^{Level}(\cdot)\) is the output of Equation 3. We note that this approach is different from previous neural network encoding methods (Unterthiner et al., 2020) that lose the layer/type information by directly encoding the entire flattened weights, hence disregarding the hierarchical computational structure of neural networks. Experimentally, we find that injecting such positionally encoded information improves the model's performance. We further model pairwise correlations in \(\hat{w}_{i}^{j_{t}}\), now infused with layer/type information, using another set-to-set function \(\Phi_{\theta_{2}}\): \[\hat{w}_{i}^{j_{t}}=\Phi_{\theta_{2}}(\hat{w}_{i}^{j_{t}})\in\mathbf{R}^{c\times h}. \tag{5}\] The final step in the chunk encoding pipeline involves compressing all \(c\) elements in \(\hat{w}_{i}^{j_{t}}\) to a compact representation. For this, we use a _set-to-vector_ function \(\Gamma_{\theta_{\alpha}}:\mathbf{R}^{c\times h}\rightarrow\mathbf{R}^{h}\). In summary, the chunk encoding layer computes the following function: \[\hat{w}_{i}^{j_{t}}=\Gamma_{\theta_{\alpha}}[\Phi_{\theta_{2}}(\text{PosEnc}_{Layer}^{Level}(\text{PosEnc}_{Layer}^{Type}(\Phi_{\theta_{1}}(w_{i}^{j_{t}}))))]\in\mathbf{R}^{1\times h}. \tag{6}\] Note now that for each chunked layer \(\hat{w}_{i}^{j}=\{w_{i}^{j_{0}},\dots,w_{i}^{j_{q}}\}\), the chunk encoder, Equation 6, produces a new set \(\tilde{w}_{i}^{j}=\texttt{Concatenate}[\{\hat{w}_{i}^{j_{0}},\dots,\hat{w}_{i}^{j_{q}}\}]\in\mathbf{R}^{q\times h}\), which represents the encodings of all chunks in a layer. **Remark** Our usage of set functions \(\Phi_{\theta_{1}},\Phi_{\theta_{2}}\) and \(\Gamma_{\theta_{\alpha}}\) allows us to process layers of arbitrary sizes. This in turn allows us to process neural networks of arbitrary architecture using a single model, a property lacking in previous approaches to neural network encoding for generalization performance prediction (Zhou et al., 2023; Unterthiner et al., 2020).
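A minimal sketch of the additive layer type/level injection in Equations 3 and 4, assuming sinusoidal encodings in the style of Vaswani et al. (2017) indexed by a layer-type id and the layer's depth; the even dimension \(h\), the additive combination, and all names are assumptions.

```python
import torch

def sinusoidal_encoding(index: int, h: int) -> torch.Tensor:
    """Vaswani-style encoding of a single integer position (h assumed even)."""
    i = torch.arange(0, h, 2, dtype=torch.float)
    enc = torch.zeros(h)
    enc[0::2] = torch.sin(index / (10000 ** (i / h)))
    enc[1::2] = torch.cos(index / (10000 ** (i / h)))
    return enc

def inject_structure(chunk_repr: torch.Tensor, layer_type: int, layer_level: int):
    """Add PosEnc^Type and PosEnc^Level to a (c, h) chunk representation."""
    h = chunk_repr.shape[-1]
    chunk_repr = chunk_repr + sinusoidal_encoding(layer_type, h)   # Eq. 3
    chunk_repr = chunk_repr + sinusoidal_encoding(layer_level, h)  # Eq. 4
    return chunk_repr
```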
### Layer Encoding At this point, we have encoded all the chunked parameters of a given layer to obtain \(\tilde{w}_{i}^{j}\). Encoding a layer, \(w_{i}^{j}\), then involves defining a function \(\Gamma_{\theta_{\beta}}:\mathbf{R}^{q\times h}\rightarrow\mathbf{R}^{1\times h}\) for arbitrary \(q\). In practice, this is done by computing: \[\mathbf{w}_{i}^{j}=\Gamma_{\theta_{\beta}}[\text{PosEnc}_{Layer}^{Level}(\Phi_{\theta_{3}}(\tilde{w}_{i}^{j}))]\in\mathbf{R}^{1\times h}. \tag{7}\] Again we have injected the layer level information, via positional encoding, into the encoding processed by the set-to-set function \(\Phi_{\theta_{3}}\). We then collect all the layer-level encodings of the neural network \(x_{i}\): \[\tilde{w}_{i}=\texttt{Concatenate}[\mathbf{w}_{i}^{0},\dots,\mathbf{w}_{i}^{|x_{i}|}]\in\mathbf{R}^{|x_{i}|\times h}. \tag{8}\] ### Neural Network Encoding With all layers in \(x_{i}\) encoded, we compute the neural network encoding vector \(z_{x_{i}}\) as follows: \[z_{x_{i}}=\Gamma_{\theta_{\gamma}}[\Phi_{\theta_{4}}(\text{PosEnc}_{Layer}^{Level}(\tilde{w}_{i}))]\in\mathbf{R}^{h}. \tag{9}\] \(z_{x_{i}}\) compresses all the layer-wise information into a compact representation for the downstream task. Since \(\Gamma_{\theta_{\gamma}}\) is agnostic to the number of layers \(|x_{i}|\) of network \(x_{i}\), the encoding mechanism can handle networks with an arbitrary number of layers and, by extension, arbitrary architecture. Similar to the layer encoding pipeline, we again re-inject the layer-level information through positional encoding before compressing with \(\Gamma_{\theta_{\gamma}}\). Henceforth, we refer to the entire neural network encoding pipeline detailed so far as \(\text{SNE}_{\Theta}(x_{i})\) for a network \(x_{i}\), where \(\Theta\) encapsulates all the model parameters \(\Phi_{\theta_{1-4}},\Gamma_{\theta_{\alpha}},\Gamma_{\theta_{\beta}}\) and \(\Gamma_{\theta_{\gamma}}\). ### Choice of Set-to-Set and Set-to-Vector Functions Now, we specify the choice of Set-to-Set and Set-to-Vector functions encapsulated by \(\Phi_{\theta_{1-4}},\Gamma_{\theta_{\alpha}},\Gamma_{\theta_{\beta}}\) and \(\Gamma_{\theta_{\gamma}}\) that are used to implement SNE. Let \(X\in\mathbf{R}^{n_{X}\times d}\) and \(Y\in\mathbf{R}^{n_{Y}\times d}\) be arbitrary sets where \(n_{X}=|X|\), \(n_{Y}=|Y|\) and \(d\) (note the abuse of notation from Section 3.1 where \(d\) is a dataset) is the dimension of an element in both \(X\) and \(Y\). The MultiHead Attention Block (MAB) with parameter \(\omega\) is given by: \[\text{MAB}(X,Y;\omega)=\text{LayerNorm}(H+\text{rFF}(H)),\quad\text{where} \tag{10}\] \[H=\text{LayerNorm}(X+\text{MultiHead}(X,Y,Y;\omega)). \tag{11}\] Here, LayerNorm and rFF are Layer Normalization (Ba et al., 2016) and row-wise feedforward layers respectively. \(\text{MultiHead}(X,Y,Y;\omega)\) is the multihead attention layer of Vaswani et al. (2017). The Set Attention Block (Lee et al., 2019), SAB, is given by: \[\text{SAB}(X):=\text{MAB}(X,X). \tag{12}\] That is, SAB computes attention between set elements and models pairwise interactions, and hence is a Set-to-Set function. Finally, the Pooling MultiHead Attention Block (Lee et al., 2019), PMA\({}_{k}\), is given by: \[\text{PMA}_{k}(X)=\text{MAB}(S,\text{rFF}(X)),\quad\text{where} \tag{13}\] \(S\in\mathbf{R}^{k\times d}\) and \(X\in\mathbf{R}^{n_{X}\times d}\). The \(k\) elements of \(S\) are termed _seed vectors_ and when \(k=1\), as in all our experiments, PMA\({}_{k}\) pools a set of size \(n_{X}\) to a single vector, making it a Set-to-Vector function.
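The following is a minimal PyTorch sketch of the MAB, SAB, and PMA blocks in Equations 10-13, built on `torch.nn.MultiheadAttention`; the head count and hidden sizes are illustrative assumptions rather than the paper's exact settings.

```python
import torch
import torch.nn as nn

class MAB(nn.Module):
    """Multihead Attention Block (Eqs. 10-11)."""
    def __init__(self, h: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(h, num_heads, batch_first=True)
        self.rff = nn.Sequential(nn.Linear(h, h), nn.ReLU(), nn.Linear(h, h))
        self.ln1, self.ln2 = nn.LayerNorm(h), nn.LayerNorm(h)

    def forward(self, X, Y):
        H = self.ln1(X + self.attn(X, Y, Y)[0])   # Eq. 11
        return self.ln2(H + self.rff(H))          # Eq. 10

class SAB(nn.Module):
    """Set Attention Block: SAB(X) = MAB(X, X) (Eq. 12), a set-to-set function."""
    def __init__(self, h: int):
        super().__init__()
        self.mab = MAB(h)

    def forward(self, X):
        return self.mab(X, X)

class PMA(nn.Module):
    """Pooling by Multihead Attention with k seed vectors (Eq. 13)."""
    def __init__(self, h: int, k: int = 1):
        super().__init__()
        self.S = nn.Parameter(torch.randn(1, k, h))
        self.mab = MAB(h)
        self.rff = nn.Linear(h, h)

    def forward(self, X):
        # with k = 1 this pools a set of size n_X down to a single vector
        return self.mab(self.S.expand(X.size(0), -1, -1), self.rff(X))

# shapes: a batch of 8 sets, each with 256 elements of dimension 128
x = torch.randn(8, 256, 128)
pooled = PMA(128)(SAB(128)(x))   # (8, 1, 128)
```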
Each of \(\Phi_{\theta_{1-4}}\) is implemented as a stack of two SAB modules: \(\text{SAB}(\text{SAB}(X))\). Stacking SAB modules enables us to model not only pairwise interactions but also higher-order interactions between set elements. Finally, all of \(\Gamma_{\theta_{\alpha}},\Gamma_{\theta_{\beta}}\) and \(\Gamma_{\theta_{\gamma}}\) are implemented as a single PMA module with \(k=1\). ### Downstream Task Given \((z_{x_{i}},y_{i})\), we train a predictor \(f_{\theta}(z_{x_{i}})\) to estimate properties of the network \(x_{i}\). In this work, we focus solely on the task of predicting the generalization performance of \(x_{i}\), where \(y_{i}\) is the performance on the test set of the dataset used to train \(x_{i}\). The parameters of the predictor \(f_{\theta}\) and all the parameters in the neural network encoding pipeline, \(\Theta\), are jointly optimized. In particular, we minimize the error between \(f_{\theta}(z_{x_{i}})\) and \(y_{i}\). For the entire modelzoo, the objective is given as: \[\operatorname*{minimize}_{\Theta,\theta}\sum_{i=1}^{n}\ell[f_{\theta}(\text{SNE}_{\Theta}(x_{i})),y_{i}], \tag{14}\] for an appropriately chosen loss function \(\ell(\cdot)\). In our experiments, \(\ell(\cdot)\) is the binary cross-entropy loss. The entire SNE pipeline is shown in Figure 1. ## 4 Experiments We now present experimental results on the cross-dataset and cross-architecture neural network performance prediction tasks. Details of experimental settings, hyperparameters, model specification etc. can be found in the Appendix. ### Cross-Dataset Neural Network Performance Prediction For this task, we train neural network performance predictors on 4 homogeneous modelzoos, each of the same architecture, with each modelzoo specialized to a single dataset. **Datasets and Neural Network Architecture:** Each modelzoo is trained on one of the following datasets: MNIST (Deng, 2012), FashionMNIST (Xiao et al., 2017), CIFAR10 (Krizhevsky, 2009) and SVHN (Netzer et al., 2018). We use the modelzoos provided by Unterthiner et al. (2020). To create each modelzoo, 30K different hyperparameter configurations were sampled. The hyperparameters include the learning rate, regularization coefficient, dropout rate, the variance and choice of initialization, activation functions etc. A thorough description of the modelzoo generation process can be found in Appendix A.2 of Unterthiner et al. (2020). The single architecture used to generate the modelzoos consists of 3 convolutional layers, each with 16 filters, a global average pooling layer and a linear classification layer. Each modelzoo is split into training, testing and validation splits. **Task:** In this task, we consider cross-dataset neural network performance prediction, where we evaluate the prediction performance on the testset of the modelzoo on which the predictors were trained. Additionally, we evaluate how well each predictor transfers to the other modelzoos.
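Before turning to the results, here is a minimal sketch of the joint optimization in Equation 14. The names `sne`, `predictor`, and `modelzoo_loader` are hypothetical stand-ins for the encoder \(\text{SNE}_{\Theta}\), the predictor \(f_{\theta}\), and an iterator over \((x_{i},y_{i})\) pairs, and the encoding dimension and learning rate are assumptions.

```python
import torch
import torch.nn as nn

h = 512  # encoding dimension (assumption)
predictor = nn.Sequential(nn.Linear(h, h), nn.ReLU(), nn.Linear(h, 1))
# `sne` and `modelzoo_loader` are assumed to exist (hypothetical stand-ins)
opt = torch.optim.Adam(list(sne.parameters()) + list(predictor.parameters()),
                       lr=1e-4)
bce = nn.BCEWithLogitsLoss()  # binary cross-entropy on accuracies in [0, 1]

for weights, test_acc in modelzoo_loader:   # (x_i, y_i) pairs
    z = sne(weights)                         # z_{x_i} = SNE_Theta(x_i)
    loss = bce(predictor(z).squeeze(-1), test_acc)
    opt.zero_grad()
    loss.backward()
    opt.step()
```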
To the best of our knowledge, this is the first such empirical analysis of how well neural network performance \begin{table} \begin{tabular}{l c c c c c} \hline \hline & MLP & STATNN & NFN\({}_{\text{NP}}\) & NFN\({}_{\text{HNP}}\) & SNE (ours) \\ \hline MNIST\(\rightarrow\) MNIST & 0.878\(\pm 0.001\) & 0.926\(\pm 0.000\) & 0.937\(\pm 0.000\) & **0.942\(\pm 0.001\)** & 0.941\(\pm 0.000\) \\ MNIST \(\rightarrow\) FashionMNIST & 0.486\(\pm 0.019\) & 0.756\(\pm 0.006\) & 0.726\(\pm 0.005\) & 0.690\(\pm 0.008\) & **0.773\(\pm 0.009\)** \\ MNIST\(\rightarrow\) CIFAR10 & 0.562\(\pm 0.024\) & 0.773\(\pm 0.005\) & 0.756\(\pm 0.010\) & 0.758\(\pm 0.000\) & **0.792\(\pm 0.008\)** \\ MNIST\(\rightarrow\) SVHN & 0.544\(\pm 0.005\) & 0.698\(\pm 0.005\) & 0.702\(\pm 0.005\) & 0.710\(\pm 0.010\) & **0.721\(\pm 0.001\)** \\ \hline FashionMNIST\(\rightarrow\) FashionMNIST & 0.874\(\pm 0.001\) & 0.915\(\pm 0.000\) & 0.922\(\pm 0.001\) & **0.935\(\pm 0.000\)** & 0.928\(\pm 0.001\) \\ FashionMNIST\(\rightarrow\) MNIST & 0.507\(\pm 0.007\) & 0.667\(\pm 0.010\) & **0.755\(\pm 0.018\)** & 0.617\(\pm 0.012\) & 0.722\(\pm 0.005\) \\ FashionMNIST\(\rightarrow\) CIFAR10 & 0.515\(\pm 0.007\) & 0.698\(\pm 0.029\) & 0.733\(\pm 0.007\) & 0.695\(\pm 0.032\) & **0.745\(\pm 0.008\)** \\ FashionMNIST\(\rightarrow\) SVHN & 0.554\(\pm 0.006\) & 0.502\(\pm 0.043\) & 0.663\(\pm 0.014\) & 0.662\(\pm 0.003\) & **0.664\(\pm 0.003\)** \\ \hline CIFAR10\(\rightarrow\) CIFAR10 & 0.880\(\pm 0.000\) & 0.912\(\pm 0.001\) & 0.924\(\pm 0.002\) & **0.931\(\pm 0.000\)** & 0.927\(\pm 0.000\) \\ CIFAR10\(\rightarrow\) MNIST & 0.552\(\pm 0.003\) & 0.656\(\pm 0.005\) & **0.674\(\pm 0.018\)** & 0.600\(\pm 0.025\) & 0.648\(\pm 0.006\) \\ CIFAR10\(\rightarrow\) FashionMNIST & 0.514\(\pm 0.005\) & **0.677\(\pm 0.004\)** & 0.629\(\pm 0.031\) & 0.526\(\pm 0.038\) & 0.643\(\pm 0.006\) \\ CIFAR10\(\rightarrow\) SVHN & 0.578\(\pm 0.005\) & 0.728\(\pm 0.004\) & 0.697\(\pm 0.006\) & 0.662\(\pm 0.004\) & **0.753\(\pm 0.007\)** \\ \hline SVHN\(\rightarrow\) SVHN & 0.809\(\pm 0.003\) & 0.844\(\pm 0.000\) & 0.855\(\pm 0.001\) & **0.862\(\pm 0.002\)** & 0.858\(\pm 0.003\) \\ SVHN\(\rightarrow\) MNIST & 0.545\(\pm 0.025\) & 0.630\(\pm 0.009\) & **0.674\(\pm 0.008\)** & 0.647\(\pm 0.016\) & 0.647\(\pm 0.001\) \\ SVHN\(\rightarrow\) FashionMNIST & 0.523\(\pm 0.026\) & 0.616\(\pm 0.007\) & 0.567\(\pm 0.014\) & 0.494\(\pm 0.023\) & **0.655\(\pm 0.003\)** \\ SVHN\(\rightarrow\) CIFAR10 & 0.540\(\pm 0.027\) & 0.746\(\pm 0.002\) & 0.725\(\pm 0.007\) & 0.547\(\pm 0.039\) & **0.760\(\pm 0.006\)** \\ \hline Average & 0.616\(\pm 0.143\) & 0.734\(\pm 0.115\) & 0.746\(\pm 0.106\) & 0.705\(\pm 0.140\) & **0.761\(\pm 0.101\)** \\ \hline \hline \end{tabular} \end{table} Table 1: Cross-Dataset Neural Network Performance Prediction. We benchmark how well each method transfers across multiple datasets. In the first column, \(A\to B\) implies that a model trained on a homogeneous modelzoo of dataset \(A\) is evaluated on a homogeneous modelzoo of dataset \(B\). In the last row, we report the averaged performance of all methods across the cross-dataset task. For each row, the best model is shown in red and the second best in blue. Models are evaluated in terms of _Kendall's_ \(\tau\) coefficient. Figure 2: TSNE Visualization of Neural Network Encoding. We train neural network performance prediction methods on a combination of the MNIST, FashionMNIST, CIFAR10 and SVHN modelzoos of Unterthiner et al. (2020).
We present 3 views of the resulting 3-D plots showing how neural networks from each modelzoo are embedded/encoded by the corresponding models. Zoom in for better viewing. predictors transfer to different datasets. We evaluate all models using _Kendall's_ \(\tau\) [13], a rank correlation measure. **Baselines:** We compare SNE with the following baselines: * **MLP:** This model flattens all the weights and biases of each neural network into a single vector, which is then fed to a stack of MLPs with ReLU activations and finally a Sigmoid activation function for predicting the performance of the neural network. * **STATNN** [14]**:** Given neural network weights, this model computes statistics of each layer, including their means, variances and quantiles, and concatenates them into a single vector, which is then used as input to a stack of MLPs with ReLU activations and a final layer with Sigmoid activation that outputs the performance of the neural network. * **NFN\({}_{\text{HNP}}\) and NFN\({}_{\text{NP}}\)** [22]**:** These two models, termed Neural Functionals (NF), developed mainly for MLPs, are designed to be permutation invariant or equivariant to an appropriately applied permutation group on the hidden layers of the neural networks. HNP, hidden neuron permutation, is applied only to the hidden layers of each network since the output and input layers of MLPs are not invariant/equivariant to the action of a permutation group on the neurons. NP, neuron permutation, makes the strong assumption that both the input and output layers are also invariant/equivariant under the action of a permutation group. **Results:** We present the results of the cross-dataset neural network performance prediction task in Table 1. For each row in Table 1, the first column shows the cross-dataset evaluation direction. For instance, MNIST\(\rightarrow\)CIFAR10 implies that a model trained on a modelzoo of neural networks trained on MNIST is cross-evaluated on a modelzoo populated by neural networks trained on CIFAR10. We note that the A\(\rightarrow\)A setting, _e.g._ MNIST\(\rightarrow\)MNIST, corresponds to the evaluation settings of Unterthiner et al. (2020) and Zhou et al. (2023). Also, in Table 1 we show the best model in red and the second-best model in blue. As shown in Table 1, SNE is always either the best or the second-best model in the cross-dataset task. In particular, SNE performs well in the A\(\rightarrow\)B performance prediction task compared to the next competitive baselines, NFN\({}_{\text{NP}}\) and NFN\({}_{\text{HNP}}\). The MLP baseline, as expected, performs the worst, since concatenating all weight values in a single vector loses information such as the network structure. STATNN [14] performs relatively better than the MLP baseline, suggesting that the statistics of each layer indeed capture enough information to do moderately well on the neural network performance prediction task. NFN\({}_{\text{NP}}\) and NFN\({}_{\text{HNP}}\) perform much better than STATNN and MLP, and NFN\({}_{\text{HNP}}\) in particular shows good results in the A\(\rightarrow\)A setting. Interestingly, NFN\({}_{\text{NP}}\) is a much better cross-dataset performance prediction model than NFN\({}_{\text{HNP}}\). However, across the entire cross-dataset neural network performance prediction task, SNE significantly outperforms all the baselines, as shown in the last row of Table 1. Also, as stated in Section 3.3, the positional encoding of layer type and level information provides a performance boost.
For instance, removing all such encoding from SNE reduces the performance on CIFAR10\(\rightarrow\)CIFAR10 from \(0.928\) to \(0.918\). **Qualitative Analysis:** To further understand how SNE transfers well across modelzoos, we generate TSNE [23] plots of the neural network encodings of all the compared models for all four homogeneous modelzoos in Figure 2. We provide 3 different views of each model's embeddings to better illustrate the encoding pattern. In Figures 2(c) and 2(d), we observe that NFN\({}_{\text{NP}}\) and NFN\({}_{\text{HNP}}\) have very clear separation boundaries between the networks from each modelzoo. This may explain NFN\({}_{\text{HNP}}\)'s good performance in the A\(\rightarrow\)A cross-dataset neural network encoding task in Table 1. In Figures 2(a) and 2(b), MLP and STATNN, respectively, show similar patterns with small continuous strings of modelzoo-specific groupings. However, these separations are not as defined as those of NFN\({}_{\text{NP}}\) and NFN\({}_{\text{HNP}}\). The embedding pattern of SNE, on the other hand, is completely different. In Figure 2(e), all networks from all the modelzoos are embedded almost uniformly close to each other. This may suggest why SNE performs much better on the cross-dataset performance prediction task, since it is much easier to interpolate between the neural network encodings generated by SNE across modelzoos. ### Cross-Architecture Neural Network Performance Prediction For this task, we train SNE on 3 homogeneous modelzoos of the same architecture and test it on 3 other homogeneous modelzoos of a different architecture than the modelzoos used for training. The cross-architecture task demonstrates SNE's agnosticism to particular architectural choices, since training and testing are done on modelzoos of different architectures. **Datasets and Neural Network Architectures:** For cross-architecture evaluation, we utilize 3 datasets: MNIST, CIFAR10 and SVHN. Since the modelzoos of Unterthiner et al. (2020) consist of a single architecture, we refer to them as \(\text{Arch}_{1}\). We use all modelzoos of \(\text{Arch}_{1}\) only for training the neural network performance predictors. We create another set of MNIST, CIFAR10 and SVHN modelzoos using a different architecture \(\text{Arch}_{2}\). \(\text{Arch}_{2}\) consists of 3 convolutional layers followed by two linear layers. Exact architectural specifications are detailed in the Appendix. We generate the modelzoos of \(\text{Arch}_{2}\) following the routine described in Appendix A.2 of Unterthiner et al. (2020). All modelzoos of \(\text{Arch}_{2}\) are used only for testing the cross-architecture neural network performance prediction of predictors trained on modelzoos of \(\text{Arch}_{1}\) and are _not_ used during training. **Task:** For this task, we seek to explore the following question: do neural network performance predictors trained on modelzoos of \(\text{Arch}_{1}\) transfer or generalize to modelzoos of \(\text{Arch}_{2}\)? Also, we perform cross-dataset evaluation on this task. However, this is different from the cross-dataset evaluation in Section 4.1, since the cross evaluation is now with respect to modelzoos of \(\text{Arch}_{2}\). **Baselines:** As we already remarked, none of the baselines used in the cross-dataset task of Section 4.1 are applicable to this task. The MLP, STATNN, \(\text{NFN}_{\text{NP}}\) and \(\text{NFN}_{\text{HNP}}\) baselines all depend on the architecture of the modelzoo used for training and require modelzoos to be architecturally homogeneous.
SNE, on the other hand, is agnostic to architectural choices and hence can be evaluated cross-architecture. Consequently, we provide the first set of results, to the best of our knowledge, on the cross-architecture neural network performance prediction task. **Results:** We report the quantitative evaluation on the cross-architecture task in Table 2. The first column, \(\text{Arch}_{1}\rightarrow\text{Arch}_{2}\), shows the direction of transfer, where we train using modelzoos of \(\text{Arch}_{1}\) and test on modelzoos of \(\text{Arch}_{2}\). Additionally, A\(\rightarrow\)B, _e.g._ MNIST\(\rightarrow\)CIFAR10, shows the cross-dataset transfer as in Section 4.1. However, the transfer is now across architectures. We evaluate SNE in terms of Kendall's \(\tau\). From Table 2, it can be seen that SNE transfers well across architectures. Interestingly, the SVHN modelzoo, the most challenging modelzoo in the cross-dataset task, shows very good transfer across architectures. Alluding to the qualitative analysis in Section 4.1, we infer that SNE transfers well across architectures due to its spread-out neural network encoding pattern, which allows for much easier interpolation even across unseen architectures, as shown in Table 2. ## 5 Conclusion In this work, we tackled the problem of predicting neural network generalization performance given access only to the trained parameter values. We presented a Set-based Neural Network Encoder (SNE) that reformulates the neural network encoding problem as a set encoding problem. Using a sequence of set-to-set and set-to-vector functions, SNE utilizes a pad-chunk-encode pipeline to encode each network layer independently; a sequence of operations that is parallelizable across chunked layer parameter values. SNE also utilizes the computational structure of neural networks by injecting positionally encoded layer type/level information into the encoding pipeline. As a result, SNE is capable of encoding neural networks of different architectures, as opposed to previous methods that only work on a fixed architecture. Experimentally, we introduced the cross-dataset and cross-architecture neural network generalization performance prediction tasks. We demonstrated SNE's ability to transfer well across modelzoos of the same architecture but with networks trained on different datasets on the cross-dataset task. On the cross-architecture task, we demonstrated SNE's agnosticism to architectural choices and provided the first set of experimental results for this task. \begin{table} \begin{tabular}{l c} \hline \hline \(\text{Arch}_{1}\rightarrow\text{Arch}_{2}\) & SNE \\ \hline MNIST\(\rightarrow\) MNIST & 0.452\(\pm 0.021\) \\ MNIST\(\rightarrow\) CIFAR10 & 0.478\(\pm 0.010\) \\ MNIST\(\rightarrow\) SVHN & 0.582\(\pm 0.016\) \\ \hline CIFAR10\(\rightarrow\) CIFAR10 & 0.511\(\pm 0.020\) \\ CIFAR10\(\rightarrow\) MNIST & 0.467\(\pm 0.020\) \\ CIFAR10\(\rightarrow\) SVHN & 0.594\(\pm 0.029\) \\ \hline SVHN\(\rightarrow\) SVHN & 0.621\(\pm 0.013\) \\ SVHN\(\rightarrow\) MNIST & 0.418\(\pm 0.096\) \\ SVHN\(\rightarrow\) CIFAR10 & 0.481\(\pm 0.055\) \\ \hline \hline \end{tabular} \end{table} Table 2: Cross-Architecture Neural Network Performance Prediction. We compare how SNE transfers across architectures.
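For reference, the rank correlations reported in Tables 1 and 2 can be computed with, e.g., SciPy; the values below are illustrative placeholders, not data from the paper.

```python
from scipy.stats import kendalltau

# illustrative placeholder values; in practice these are the predicted and
# measured test accuracies for the networks in a modelzoo's test split
predicted = [0.91, 0.75, 0.88, 0.62]
actual = [0.90, 0.70, 0.89, 0.65]
tau, p_value = kendalltau(predicted, actual)
print(f"Kendall's tau: {tau:.3f} (p = {p_value:.3g})")
```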
2302.07957
ColibriES: A Milliwatts RISC-V Based Embedded System Leveraging Neuromorphic and Neural Networks Hardware Accelerators for Low-Latency Closed-loop Control Applications
End-to-end event-based computation has the potential to push the envelope in latency and energy efficiency for edge AI applications. Unfortunately, event-based sensors (e.g., DVS cameras) and neuromorphic spike-based processors (e.g., Loihi) have been designed in a decoupled fashion, thereby missing major streamlining opportunities. This paper presents ColibriES, the first-ever neuromorphic hardware embedded system platform with dedicated event-sensor interfaces and full processing pipelines. ColibriES includes event and frame interfaces and data processing, aiming at efficient and long-life embedded systems in edge scenarios. ColibriES is based on the Kraken system-on-chip and contains a heterogeneous parallel ultra-low power (PULP) processor, frame-based and event-based camera interfaces, and two hardware accelerators for the computation of both event-based spiking neural networks and frame-based ternary convolutional neural networks. This paper explores and accurately evaluates the performance of event data processing on the example of gesture recognition on ColibriES, as the first step of full-system evaluation. In our experiments, we demonstrate a chip energy consumption of 7.7 mJ and latency of 164.5 ms of each inference with the DVS Gesture event data set as an example for closed-loop data processing, showcasing the potential of ColibriES for battery-powered applications such as wearable devices and UAVs that require low-latency closed-loop control.
Georg Rutishauser, Robin Hunziker, Alfio Di Mauro, Sizhen Bian, Luca Benini, Michele Magno
2023-02-15T21:41:39Z
http://arxiv.org/abs/2302.07957v1
ColibriES: A Milliwatts RISC-V Based Embedded System Leveraging Neuromorphic and Neural Networks Hardware Accelerators for Low-Latency Closed-loop Control Applications ###### Abstract End-to-end event-based computation has the potential to push the envelope in latency and energy efficiency for edge AI applications. Unfortunately, event-based sensors (e.g., DVS cameras) and neuromorphic spike-based processors (e.g., Loihi) have been designed in a decoupled fashion, thereby missing major streamlining opportunities. This paper presents ColibriES, the first-ever neuromorphic hardware embedded system platform with dedicated event-sensor interfaces and full processing pipelines. ColibriES includes event and frame interfaces and data processing, aiming at efficient and long-life embedded systems in edge scenarios. ColibriES is based on the Kraken system-on-chip and contains a heterogeneous parallel ultra-low power (PULP) processor, frame-based and event-based camera interfaces, and two hardware accelerators for the computation of both event-based spiking neural networks and frame-based ternary convolutional neural networks. This paper explores and accurately evaluates the performance of event data processing on the example of gesture recognition on ColibriES, as the first step of full-system evaluation. In our experiments, we demonstrate a chip energy consumption of 7.7 \(\mathrm{mJ}\) and latency of 164.5 \(\mathrm{ms}\) of each inference with the DVS Gesture event data set as an example for closed-loop data processing, showcasing the potential of ColibriES for battery-powered applications such as wearable devices and UAVs that require low-latency closed-loop control. ## I Introduction Neuromorphic computing is a promising paradigm for low latency, energy-efficient artificial intelligence [1] for a wide range of embedded applications [2]. Complementing this novel processing paradigm, event-based sensors have the potential to unlock the full potential of neuromorphic hardware for applications such as intelligent sensing and ultra-fast robotics [3]. On the other hand, frame-based processing using deep neural networks (DNNs) is currently offering impressive performance and is being heavily tuned for low-power and energy-efficient embedded platforms [4]. Combining the advantages of event-based image sensors, neuromorphic processing, traditional image sensors, and deep neural networks yields hybrid hardware and software systems that can outperform the current state of the art in low-latency, energy-efficient systems like unmanned vehicles. For instance, optical flow estimation is one of the most used methods for autonomous navigation; improving its speed and accuracy will benefit the entire field of autonomous vehicles [5]. Our work aims at building the first hybrid system combining frame information with DNN-based processing and event information with SNN-based processing for autonomous vehicles and robots [6]. ### _Motivation_ Most current systems using visual sensing to perform intelligent tasks rely on frame-based cameras [12]. Conventional video sensors record the entire image at a fixed rate and resolution. These vision systems use sequences of RGB frames, which are heavily redundant and inefficient to process. Event-based cameras (silicon retinas) have introduced a new vision paradigm by mimicking the biological retina. Instead of measuring the intensity of every pixel at fixed time intervals, they report events of significant pixel intensity changes.
Each such event is represented by its position, the sign of change, and a timestamp with microsecond resolution. Because of their asynchronous operation principle, they are a natural match for spiking neural networks (SNNs) [13] rather than DNNs. State-of-the-art DNN approaches in machine learning provide excellent results for vision tasks with standard cameras. In contrast, asynchronous event sequences require special handling, and SNNs can take advantage of this asynchronicity, offering very low detection latency in the order of microseconds [14]. However, SNN-based algorithms are still evolving, and current SNNs are still not comparable in terms of accuracy to DNN-based approaches. Another aspect limiting the employment of energy-proportional event-based cameras in real application scenarios requiring ultra-low-power devices is the high power consumption incurred at such sensors' interfaces. Event cameras often expose non-standard communication protocols, and data acquisition is commonly carried out with a high-power field-programmable gate array (FPGA) and a USB interface [15]. Neuromorphic processors execute SNNs on large-scale integrated circuits [16] by mimicking the natural biological structures of the nervous system, implementing neurons and synapses directly in silicon. A typical neuromorphic processor can simulate hundreds of thousands or even millions of spiking neurons in real time. However, to achieve truly power-efficient real-time operation, such platforms should also be equipped with microcontroller subsystems capable of performing actuation and closed-loop control. Even advanced prototypes, e.g., the Loihi Kapoho Bay board, do not present such features as they are generally conceived as research and evaluation testbeds. Kraken, a novel RISC-V-based system-on-chip (SoC) addressing this challenge, has been recently proposed [17]. Kraken includes a heterogeneous parallel ultra-low power (PULP) processor [18] and two hardware accelerators for both spiking neural networks and frame-based convolutional neural networks. A key feature of the Kraken SoC is that both RGB camera and event-based camera interfaces are present, as well as general-purpose input/output pins useful for direct control of motors and other devices. Kraken can deliver up to 2 TOP/s/W with 2-bit SIMD on the RISC-V cluster and 1.1 TSynOP/s/W on the neuromorphic accelerator, which makes it suitable for embedded battery-operated devices requiring low-latency closed-loop control. Table I compares our proposed ColibriES system with previous works implementing advanced tasks on neuromorphic processors. The only published works implementing true end-to-end neuromorphic applications are based on IBM's TrueNorth platform [19], a purely neuromorphic processor that lacks the advanced compute and control capabilities enabled by the Kraken SoC, relying solely on SNNs for all processing tasks, which in turn restricts the range of tasks that can be solved on the platform. ### _Contribution_ This paper presents ColibriES, the first-ever truly embedded neuromorphic evaluation platform hosting the Kraken SoC. The ColibriES board was designed to accurately measure the complete processing pipeline's power consumption, energy, and latency. ColibriES can connect to both event- and frame-based cameras that directly interface with the Kraken SoC.
The platform is designed for battery-powered applications requiring low-latency closed-loop control feedback, such as wearable electronics, unmanned aerial vehicles (UAVs), and other applications that impose stringent requirements on the computational and energy resources available to perform the processing. Figure 1 shows the high-level architecture of the proposed platform and provides an overview of how the system could be used as a high-speed closed-loop controller. Evaluations of power consumption, functionality, and energy efficiency are presented with experimental measurements, demonstrating the versatility and low power of ColibriES. The rest of this paper is organized as follows: Section II describes the proposed ColibriES platform with a particular focus on Kraken's architectural features. Section III shows the experimental results, and Section IV concludes the paper. ## II System Architecture In this section, we present the heart of ColibriES, the heterogeneous Kraken SoC, and the evaluation board used in our experiments. ### _Kraken SoC_ The Kraken SoC has a heterogeneous architecture composed of three main subsystems. Fig. 2 shows a block diagram representation of the Kraken chip. The first subsystem, the fabric controller (FC), is built around a 32-bit RISC-V core Fig. 1: Architecture of the embedded platform developed as an evaluation board hosting the novel Kraken SoC, which includes PWM pins for high-speed control. \begin{table} \begin{tabular}{l|c|c|c|c|c|c} & [7] & [8] & [9] & [10] & [11] & **ColibriES (ours)** \\ \hline Platform & Loihi & TrueNorth & Loihi & Loihi & TrueNorth & Kraken (SNE) \\ Application \({}^{\text{a}}\) & KWS & KWS & GR & GR & GR & GR \\ Input Data & MFCC & Audio & DVS + sEMG Evts. & DVS Evts. & DVS Evts. & DVS Evts. \\ Dataset & Custom & TIDIGITS & Custom & DVS128 & DVS128 & DVS128 \\ \# of Classes & 2 & 11 & 5 & 11 & 11 & 11 \\ Network \({}^{\text{b}}\) & 2L SMLP & 7L SCNN & 6L SCNN + 4L SMLP & 5L SCNN & 16L SCNN & 7L SCNN \\ Accuracy & \(95.9\%\) & \(90.8\%\)-\(95.1\%\)\({}^{\text{e}}\) & \(96\%\) & \(90.5\%\) & \(86.5\%\)-\(94.6\%\)\({}^{\text{e}}\) & \(83\%\) \\ End-to-End? & ✗ & ✓ & ✗ & ✓ & ✓ & ✓ \\ \(P_{proc,inf}\) (mW) \({}^{\text{c}}\) & 110 & 14.4 (85)-38.6 (91) \({}^{\text{e}}\) & 141.9 & N/A & 88.5 - 178.8 & 35.6 \\ \(P_{proc,idle}\) (mW) \({}^{\text{c}}\) & 29.2 & 12.1 (82.7) - 30.3 (82.7) \({}^{\text{e}}\) & 29.2 [7] & 29.2 [7] & 68.8 - 134.4 \({}^{\text{f}}\) & 17.7 \\ \(E_{proc,inf}\) (mJ) \({}^{\text{d}}\) & 0.371 & 15.9 (93.6) - 42.5 (100.1) \({}^{\text{g}}\) & 5.9 & N/A & 29.8 & **7.7** \\ \end{tabular} * \({}^{\text{a}}\) **KWS:** Keyword Spotting, **GR:** Gesture Recognition * \({}^{\text{b}}\) \(N\)**L SCNN:** \(N\)-layer spiking convolutional neural network, \(N\)**L SMLP:** \(N\)-layer spiking multi-layer perceptron * \({}^{\text{c}}\) \(P_{proc,inf}\)/\(P_{proc,idle}\): Processor chip/full-system power during inference/while idle * \({}^{\text{d}}\) \(E_{proc,inf}\): Processor chip/full-system energy per inference. Energy normalized to 6 inf/s for GR. * \({}^{\text{e,f}}\) Lowest-power and highest-accuracy implementations. Numbers in parentheses are energy/power without scaling for resource usage. * \({}^{\text{g}}\) We assume the preprocessing and classification pipeline needs to run for the entire length of a sample (\(1104\,\text{ms}\)).
The FC hosts the main interconnection busses to the main L2 memory and the advanced peripheral bus (APB), which controls all the SoC peripherals. The FC domain also contains a full set of on-chip peripherals such as a RISC-V compliant debug unit, four programmable timer-counters and a power management unit.

The second subsystem is a RISC-V-based general-purpose accelerator called the _cluster_. The cluster domain hosts eight RISC-V cores enhanced with specialized instruction set architecture (ISA) extensions like hardware loops, multiply-accumulate, and vectorial instructions for low-precision machine learning (ML) workloads. Each RISC-V core of the cluster accelerator is equipped with a private floating point unit (FPU) and implements the full 32-bit RISC-V floating-point ISA extension. The cluster also features a \(128\,\mathrm{kB}\) L1 tightly coupled data memory (TCDM). The L1 memory can serve memory requests from all cluster cores in parallel with single-cycle access latency and a low average contention rate (\(<\)10% even on the most data-intensive kernels). Fast event management, parallel thread dispatching, and synchronization are supported by a dedicated hardware block (HW Sync), enabling very fine-grained parallelism and high energy efficiency in parallel workloads. The cluster can be clock-gated at single-core granularity, reducing the dynamic power consumption while waiting for core synchronization.

The third domain, the external hardware processing engine (EHWPE), hosts two accelerators. One is the sparse neural engine (SNE) neuromorphic accelerator presented in [17], and the other is CUTIE, a ternary weight neural network accelerator presented in [20]. The EHWPE domain operates on two independent clock domains. One clock is generated by the FC frequency locked loop (FLL), which is used to clock the interface logic at the boundary between the accelerators and the main system interconnect. The other clock is generated by a dedicated EHWPE FLL, which is used to clock the accelerators' computing engines. Like many peripherals of the SoC, the accelerators are programmed via memory-mapped register interfaces.

The FC domain hosts a vast set of IO peripherals commonly adopted on microcontroller-like SoCs. The chip features 76 digital pads, 44 of which can be used to arbitrarily map any pin of the peripherals mentioned above to a chip pad in an all-to-all crossbar configuration, providing high IO mapping flexibility. The pin mapping is configurable via a memory-mapped register interface connected to the APB. Each peripheral can generate interrupts depending on data transmission events; such events can then be routed to the interrupt controller of the RISC-V core. An autonomous IO subsystem, the _\(\mu\)DMA_ [21], hosts all the IO peripherals. This system can be programmed to autonomously orchestrate data transfers from the peripherals to the L2 memory and vice versa, freeing the RISC-V core from micro-managing such data transfers. Moreover, Kraken's \(\mu\)DMA is equipped with a dedicated hardware interface for the ultra-low power dynamic visual sensor (DVS) presented in [22]. Such an interface was presented in [23] and allows streaming events directly from the event camera to Kraken.
To carefully control the chip's overall power consumption and enable a versatile application-specific power management scheme, Kraken is subdivided into three independent power domains. Such power domain partitioning allows switching off unused parts of the system that the running application might not require. The power-up/down of the controllable domains can be performed at run time. Figure 2 also shows the chip power domain partitioning into SoC, Cluster, and EHWPE, respectively. The FC domain is always on because it hosts essential components needed for SoC management tasks such as the boot procedure, clock generator initialization, or accelerator reset and programming.

### _Kraken evaluation board_

The Kraken evaluation board integrates the Kraken chip with the external components needed to implement complete applications. A USB-C connector is used for JTAG and UART data transfer and supplies a \(5\,\mathrm{V}\) rail from which all other power supplies are derived. Kraken's three power domains are supplied by individually runtime-configurable buck converters, allowing for application-controlled dynamic voltage and frequency scaling (DVFS). Connectivity to the DVS camera is provided through an 80-pin low-profile board-to-board connector, which also supplies the camera board with the required \(1.2\,\mathrm{V}\) and \(5\,\mathrm{V}\) rails. Off-chip memory is present in the form of a combined HyperFlash/HyperRAM chip and a quad-SPI flash memory chip, which can also be used to store application code for standalone booting. Arduino headers and a camera parallel interface (CPI) ribbon connector provide additional connectivity, with level shifters between every off-chip connector and Kraken's I/O pins. An annotated photograph of the board is shown in Fig. 3.

Fig. 2: Kraken SoC block diagram

## III Evaluation

In this section, we evaluate the end-to-end performance of ColibriES when executing SNN inference as the first step of the full-system evaluation. We used the IBM DVS Gesture event data set [24], which contains 11 hand gestures from 29 subjects under three illumination conditions from a DVS128 event camera. Six subjects' data are used for testing and the remaining for training. The SNN used for our experiments is reported in Table II.

\begin{table} \begin{tabular}{c|c|c|c|c|c} & Type & Size & Feature Size & Features & Stride \\ \hline 0 & Input & 128x128x2 & - & - & - \\ 1 & Pool & 32x32x2 & 4x4x1 & 2 & 4 \\ 2 & Conv & 32x32x16 & 3x3x32 & 16 & 1 \\ 3 & Pool & 16x16x16 & 2x2x1 & 16 & 2 \\ 4 & Conv & 16x16x32 & 3x3x1 & 32 & 1 \\ 5 & Pool & 8x8x32 & 2x2x1 & 32 & 2 \\ 6 & Full & 512 & 2048 & 512 & - \\ 7 & Full & 11 & 512 & 11 & - \\ \end{tabular} \end{table}
TABLE II: DVS Gesture Network Parameters

The network is composed of two convolutional layers and two fully-connected layers. To train the network, we used a supervised learning approach based on back-propagation, derived from the one presented in [25], which has been implemented in PyTorch. In this framework, the neuron dynamics of the leaky integrate-and-fire (LIF) neurons implemented in SNE are accurately modeled to closely reflect the hardware implementation and minimize the mismatch between the accuracy achieved during training and the one achieved on the real hardware platform.
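For illustration, the following is a minimal sketch of the discrete-time LIF dynamics such a framework models. The leak factor, threshold, and hard reset are our assumed values rather than the exact SNE configuration, and the surrogate gradient needed to back-propagate through the spike threshold is omitted for brevity.

```python
import torch

class LIF(torch.nn.Module):
    """Sketch of a discrete-time leaky integrate-and-fire neuron layer.

    beta (leak), v_th (threshold) and the hard reset are assumed values,
    not the exact SNE parameters; training additionally needs a surrogate
    gradient for the non-differentiable threshold, omitted here.
    """
    def __init__(self, n_neurons, beta=0.9, v_th=1.0):
        super().__init__()
        self.beta, self.v_th = beta, v_th
        self.register_buffer("v", torch.zeros(n_neurons))  # membrane potential

    def forward(self, input_current):
        self.v = self.beta * self.v + input_current   # leak and integrate
        spikes = (self.v >= self.v_th).float()        # fire
        self.v = self.v * (1.0 - spikes)              # hard reset after a spike
        return spikes
```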
We break down the closed loop into three distinct parts: data acquisition on the FC through the dedicated DVS interface, data processing on the engines, which includes a spike preprocessing step in the cluster and a spike train inference step in the SNE, and actuator control using PWM signals. For advanced processing tasks, neural networks that exceed SNE's output neuron capacity are executed on the accelerator in a tiled way, and the SNE is used in a time-domain-multiplexing fashion. The preprocessing step performed on the cluster is necessary to assemble a single input event stream from multiple output tiles and create the tiled input streams for the tiles of the successive layer. Note that the main system clock of Kraken can vary from a few \(\mathrm{kHz}\) to \(200\,\mathrm{MHz}\), and the typical operating frequency is \(50\,\mathrm{MHz}\). In typical operating conditions, the latency of changing an output actuation signal like a PWM signal is a few system clock cycles, i.e., below 1 \(\upmu\)s. Therefore, we considered this latency negligible compared to data acquisition and processing.

Table III reports the energy and latency metrics of the ColibriES system for the gesture recognition task execution. Using a 300 ms input window, Kraken requires 164.5 \(\mathrm{ms}\) and 7.7 \(\mathrm{mJ}\) to perform a DVS-to-label prediction, meeting real-time processing constraints. The inference execution on SNE takes only 32 \(\mathrm{ms}\) and 1.4 \(\mathrm{mJ}\).

## IV Conclusion

This paper presented ColibriES, an energy-efficient heterogeneous platform including an event-based camera interface and an RGB interface, together with the control features of a standard microcontroller. The platform core is the Kraken system-on-chip, a PULP processor with eight RISC-V cores and two dedicated hardware accelerators for machine learning: a neuromorphic processor and a frame-based machine-learning accelerator. As the first step of system evaluation on ColibriES, this paper reports the evaluation of end-to-end gesture recognition from event camera data with the dedicated DVS interface and the neuromorphic core. Thanks to the direct access to the stream of events from the event camera, ColibriES can achieve real-time event processing by executing SNN inference on sensor data. Experimental results show that our platform can perform low-latency tasks under stringent power constraints, and it is suitable for battery-powered operation and low-latency applications. On the IBM DVS Gesture event data set, ColibriES achieves real-time 164.5 ms processing on 300 ms time-window samples, consuming only 7.7 mJ of total energy and meeting the severe latency and energy constraints of emerging high-speed closed-loop control applications, such as autonomous drones and wearable control embedded systems, while keeping the power in the mW range.

## Acknowledgment

The authors would like to thank _armasuisse Science & Technology_ for funding this research.

Fig. 3: Kraken evaluation board
\begin{table}
\begin{tabular}{l r r r r}
\hline \hline
\multirow{2}{*}{Proc. Stage} & \multirow{2}{*}{Time (ms)} & \multicolumn{2}{c}{Power (\(\mathrm{mW}\))} & \multirow{2}{*}{Energy (\(\mathrm{mJ}\))} \\
\cline{3-4}
 & & Idle \({}^{\text{a}}\) & Active & \\
\hline
Data Acquisition (FC) & 1.5 & 3.5 & 3.8 & 0.006 \\
Preprocessing (Cluster) & 131 & 6.5 & 34 & 4.6 \\
SNN Inference (SNE) & 32 & 7.7 & 44 & 1.4 \\
\hline
**Total** \({}^{\text{b}}\) & 164.5 & & & **7.7** \\
\hline \hline
\end{tabular}
All power numbers measured at \(V_{DD}=0.65\,\mathrm{V}\).
* \({}^{\text{a}}\) Idle power measured with the respective components clock-gated.
* \({}^{\text{b}}\) Total energy is computed as the sum of active energy contributions and idle energy of inactive components; footnote \({}^{\text{c}}\) of the original layout referred to the average total power consumption during inference.
\end{table}
TABLE III: Performance metrics of ColibriES on the DVS Gesture data set, with an accuracy of 83%.
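As a cross-check of footnote b, the total in Table III can be reproduced from the per-stage entries. The assumption that the three stages run strictly sequentially, with the inactive engines contributing their idle power, is ours.

```python
# Energy accounting implied by footnote b of Table III: while one stage is
# active, the other two engines contribute their idle power. Strictly
# sequential stages are our assumption.
stages = [  # (name, duration_ms, active_mW, idle_mW_of_inactive_components)
    ("acquisition (FC)",       1.5,  3.8, 6.5 + 7.7),  # cluster + SNE idle
    ("preprocessing (cluster)", 131.0, 34.0, 3.5 + 7.7),  # FC + SNE idle
    ("inference (SNE)",        32.0, 44.0, 3.5 + 6.5),  # FC + cluster idle
]
total_mj = sum(t * (p_act + p_idle) for _, t, p_act, p_idle in stages) / 1e3
print(f"total energy per inference: {total_mj:.2f} mJ")  # ~7.7 mJ, as in Table III
```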
2306.02300
How neural networks learn to classify chaotic time series
Neural networks are increasingly employed to model, analyze and control non-linear dynamical systems ranging from physics to biology. Owing to their universal approximation capabilities, they regularly outperform state-of-the-art model-driven methods in terms of accuracy, computational speed, and/or control capabilities. On the other hand, neural networks are very often taken as black boxes whose explainability is challenged, among others, by huge amounts of trainable parameters. In this paper, we tackle the outstanding issue of analyzing the inner workings of neural networks trained to classify regular-versus-chaotic time series. This setting, well-studied in dynamical systems, enables thorough formal analyses. We focus specifically on a family of networks dubbed Large Kernel Convolutional Neural Networks (LKCNN), recently introduced by Boull\'{e} et al. (2021). These non-recursive networks have been shown to outperform other established architectures (e.g. residual networks, shallow neural networks and fully convolutional networks) at this classification task. Furthermore, they outperform ``manual'' classification approaches based on direct reconstruction of the Lyapunov exponent. We find that LKCNNs use qualitative properties of the input sequence. In particular, we show that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models. Low-performing models show, in fact, analogous periodic activations to random untrained models. This could give very general criteria for identifying, a priori, trained models that have poor accuracy.
Alessandro Corbetta, Thomas Geert de Jong
2023-06-04T08:53:27Z
http://arxiv.org/abs/2306.02300v1
# How neural networks learn to classify chaotic time series ###### Abstract Neural networks are increasingly employed to model, analyze and control non-linear dynamical systems ranging from physics to biology. Owing to their universal approximation capabilities, they regularly outperform state-of-the-art model-driven methods in terms of accuracy, computational speed, and/or control capabilities. On the other hand, neural networks are very often taken as black boxes whose explainability is challenged, among others, by huge amounts of trainable parameters. In this paper, we tackle the outstanding issue of analyzing the inner workings of neural networks trained to classify regular-versus-chaotic time series. This setting, well-studied in dynamical systems, enables thorough formal analyses. We focus specifically on a family of networks dubbed Large Kernel Convolutional Neural Networks (LKCNN), recently introduced by Boulle et al. (2021). These non-recursive networks have been shown to outperform other established architectures (e.g. residual networks, shallow neural networks and fully convolutional networks) at this classification task. Furthermore, they outperform "manual" classification approaches based on direct reconstruction of the Lyapunov exponent. We find that LKCNNs use qualitative properties of the input sequence. In particular, we show that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models. Low-performing models show, in fact, analogous periodic activations to random untrained models. This could give very general criteria for identifying, a priori, trained models that have poor accuracy. **Keywords:** Dynamical systems, chaos, deep learning, convolutional networks, time series, classification, Savitsky-Golay.

## 1 Introduction

During the last decade there has been a strong acceleration in the adoption of machine learning, typically through artificial neural networks, to model, analyze, and control a broad spectrum of dynamical systems [1]. Neural networks are nonlinear parametric models satisfying universal approximation principles [13]. Typically, they are trained in data-driven contexts to fit pre-annotated data. Besides boosting well-known super-human performance in contexts such as automated vision [21], natural-language processing [14], and long-term strategy games [20], they are gaining substantial momentum in fundamental research in connection with non-linear/chaotic dynamics. In these contexts they are regularly surpassing "traditional" state-of-the-art model-driven approaches [15]. Astounding are the achievements connected to three-body problem long-term forecasts [16], weather modeling [17], fluid turbulence closures [18, 19], measurements [20], and control [13]. Machine learning methodologies allowed progress in scientific context-agnostic issues, e.g., model-free chaotic dynamics predictions [22], as well as the problem dealt with in this paper: the classification of time series between chaotic and regular. Dynamics that can bifurcate from regular to chaotic and vice versa are present in every scientific discipline: astronomy, biology, meteorology, physics etc. [15, 16, 17, 18, 19, 20]. While the issue of their classification is thus longstanding (e.g. [14, 15, 18, 20, 21, 22]), recent works by Boulle et al. [1] showed that convolutional neural networks, specifically with large kernels, can be strikingly successful at the task.
In general, convolutional networks [19] have proven suitable for time series classification [21] and inference [20]. Building on this approach, Boulle et al. proposed to apply Large Kernel Convolutional Neural Networks, LKCNN. Large kernel means that the kernel size of the convolutional layers is large in relation to the input sequence length. Boulle et al. show that LKCNNs boast the highest performance in comparison with other established machine learning approaches such as residual, fully convolutional, multilayer perceptron and shallow networks [23, 24]. Moreover, they generalize well to data outside the training set. This creates the case for the two-fold analysis of this paper, only possible because of two factors: the relative technical simplicity of LKCNNs and the vast understanding of chaotic maps. Thus, this paper contributes thorough comparisons in terms of accuracy with traditional approaches, currently missing, and a formal analysis of the inner workings of LKCNNs that allow such performance. In general, neural networks allow no straightforward insight into their internal mechanics, especially when it comes to large and complex networks [1]. Thus, while networks are ubiquitously employed as (effective) data-driven black boxes, understanding the inner workings, e.g. identifying the features that they leverage, could become a crucial component towards new discoveries.

In this paper, we analyze how neural networks successfully manage the long-standing challenge of classifying discrete-time series produced by dynamical systems, identifying regular vs. chaotic motions. Here, we will take the approach of identifying chaos through sensitive dependence on initial conditions by computing the Lyapunov exponent, which measures the average rate of exponential departure of small perturbations [1, 2]. Formally, we consider the following classification problem: Let \(f:\mathbb{R}\to\mathbb{R}\) be a smooth map recursively generating the bounded time series \(\{x_{1},x_{2},\ldots\}\) as \[x_{n+1}=f(x_{n}), \tag{1}\] for a given initial condition \(x_{0}\). We aim at classifying finite-length time sequences as _chaotic_ or _regular_ without knowledge of the analytical expression of \(f\). In other terms, given finite sequences \[x^{\ell}=(x_{1},x_{2},\ldots,x_{N})\in\mathbb{R}^{N},\] we target accurate classifiers \(G\), such that \[G:\mathbb{R}^{N} \to L:=\{\text{chaotic},\ \text{regular}\}, \tag{2}\] \[x^{\ell} \mapsto\ell\in L.\] Analogously to [1], we shall consider \(N=500\). Note that if the function \(f\) and its derivative are known analytically, the classification labels in \(L\) can be immediately determined by estimating the Lyapunov exponents of \(f\). More specifically, the Lyapunov exponent, \(\lambda\), corresponding to \(\{x_{n}\}\) generated by \(f\) is defined as \[\lambda=\lim_{n\to\infty}\frac{1}{n}\sum_{i=0}^{n-1}\log(|f^{\prime}(x_{i})|), \tag{3}\] if the limit exists. Bounded sequences are chaotic when \(\lambda>0\), as attractors with \(\lambda>0\) can only be bounded if a type of folding process merges widely separated trajectories (e.g. [12, 2]). Conversely, when only data in the form of sequences \((x^{\ell})\) is available, direct approaches which reconstruct the Lyapunov exponent by estimating \(f^{\prime}\) could be used. These approaches, however, require a sufficiently large \(N\) (e.g. [12, 1]). Our experiments show that our choice \(N=500\) is large enough to capture the dynamics and short enough that direct approaches are error prone.
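When \(f\) is known, (3) can be evaluated by truncating the limit. A minimal sketch for the logistic map used below follows; the initial condition, transient length, and orbit length are our choices.

```python
import numpy as np

def lyapunov_logistic(mu, x0=0.1, n=100_000, transient=1_000):
    """Truncated estimate of (3) for f(x) = mu*x*(1-x), f'(x) = mu*(1-2x)."""
    x = x0
    for _ in range(transient):          # discard the transient
        x = mu * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        acc += np.log(abs(mu * (1.0 - 2.0 * x)))
        x = mu * x * (1.0 - x)
    return acc / n

print(lyapunov_logistic(3.9))   # > 0: chaotic
print(lyapunov_logistic(3.2))   # < 0: regular (period-2 attractor)
```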
Here, we analyze the inner workings of LKCNNs when classifying time series from well-known 1D maps: the logistic map and the sine-circle map. LKCNNs are conceptually simple: this enables us to perform in-depth experimental and formal analyses. Additionally, we compare LKCNNs' performance with an estimate of \(\lambda\) via a local polynomial reconstruction of \(f^{\prime}\) using Savitsky-Golay filtering [12]. We restrict to a data set made of non-periodic trajectories with Lyapunov exponents close to zero, as these are the most challenging to classify: small perturbations can in fact lead to misclassification. For these, the derivative along the orbits typically oscillates around \(1\). Overall, we show that:

* LKCNNs use qualitative properties of the input sequence, as they outperform direct reconstruction of the Lyapunov exponent from the time series. Our experiment is performed on a restricted data set with only non-periodic sequences. These are hard to classify, as the average rate of separation between consecutive time-steps is almost constant.
* LKCNNs map periodic inputs to periodic activation, with the exception of the dense layers. We can capture this mapping in a two-dimensional matrix which we refer to as the _period matrix_. The period matrix is model-independent over generic untrained LKCNNs and corresponds to a property of the architecture. We prove this property in a limiting case.
* Grouping trained LKCNNs by period matrices, we observe a single period matrix which correlates with low-performance models. This period matrix is equal to the period matrix of generic untrained models. This last insight might also be useful for addressing more general models and settings.

The remainder of this paper is organized as follows. In Section 2, we present the dynamical systems for the classification problem, data sets and performance metrics. In Section 3, we present a method to directly reconstruct the Lyapunov exponent from the time series with Savitsky-Golay polynomial filtering. In Section 4, we review LKCNN, present the architecture, and explain the choice of hyperparameters and data sets. In Section 5, we present the main results related to Lyapunov reconstruction with Savitsky-Golay and periodic activation. A final discussion closes the paper.

## 2 Time series, data sets, performance metrics

### 2.1 Time series: the logistic and sine-circle map

As in [1], we analyze LKCNN considering data sets built using two well-known maps: the logistic map and the sine-circle map. We briefly review some crucial features of the time series that these maps generate. The logistic map [2] is given by \[f(x)=\mu x(1-x),\;\mu\in[0,4],\;x_{0}=0.5, \tag{4}\] where \(\mu\) is the bifurcation parameter. Observe that \(x_{n}\in[0,1]\) for all \(\mu\). The bifurcation diagram corresponding to long-term evolution of the orbits is given in Figure 1(a). The bifurcation diagram exhibits orbits doubling in period with a geometric rate. After the period-doubling accumulation point, \(\mu\approx 3.56995\), chaos sets in and the map exhibits alternation of regular and chaotic behavior [13]. The second map considered is the sine-circle map [H\({}^{+}\)00, Boy86, Her79]: \[f(\theta)=\theta+\Omega-\frac{\beta}{2\pi}\sin 2\pi\theta\mod 1,\;\;\theta_{0}=0.5\;. \tag{5}\] The sine-circle map also exhibits a transition from regular to chaotic dynamics as the bifurcation parameter \(\beta\) changes. For \(\beta<1\) only periodic and quasi-periodic trajectories occur, since the sine-circle map is invertible.
This implies that the folding dynamics which characterizes chaotic dynamics cannot occur [H\({}^{+}\)00]. As in [1] we consider \(\Omega=0.606661\), \(\beta\in[0,5]\). For this particular \(\Omega\) value, chaotic dynamics occur right after \(\beta=1\) [H\({}^{+}\)00]. The bifurcation diagram corresponding to long-term evolution of the orbits is given in Figure 1(b).

### 2.2 Data sets

Given a discrete parameter set we consider trajectories corresponding to long-term evolution of the time series generated by (4) and (5). We use log-spaced data for the logistic data set, as chaotic and regular trajectories are better balanced as a consequence of the geometric doubling over the parameter space. In [1], the linear-spaced data set is considered for the logistic map. This data set is biased towards regular trajectories, see Table 1. We observe that periods of periodic orbits are much more uniformly distributed in the log-spaced data set than in the linear-spaced data set, Figure 2. Moreover, periodic orbits with low orbit periods are over-represented in the linear data set. For the sine-circle map we consider a linear-spaced parameter set. The regular trajectories can exhibit periodic and non-periodic behavior. The chaotic trajectories are all non-periodic. We note that the sign of the short-time Lyapunov exponent is a good predictor of the sign of the converged Lyapunov exponent if the absolute Lyapunov exponent is sufficiently large. The sine-circle data set is well balanced between chaotic and regular trajectories, see Table 1. We can heuristically explain why regular orbits make up at least \(\approx 60\%\): for \(\beta\in(0,1)\) the motion is periodic or quasi-periodic [H\({}^{+}\)00], and from Figure 1(b) we observe that period 1, period 2 and period 4 cover a parameter range of length close to 2.

Figure 1: Bifurcation diagrams: In (a) the parameter corresponding to the onset of chaos is indicated by a red line. In (b) the parameter corresponding to the transition from invertible to non-invertible is indicated by the red line. Invertible maps do not exhibit chaotic dynamics.

### 2.3 Performance metrics

The performance is assessed using the classification accuracy, which is defined as \[\text{Accuracy}:=\frac{T_{\text{regular}}+T_{\text{chaotic}}}{T_{\text{regular}}+T_{\text{chaotic}}+F_{\text{regular}}+F_{\text{chaotic}}},\] where \(T_{\text{regular}}\) = true regular predictions, \(T_{\text{chaotic}}\) = true chaotic predictions, \(F_{\text{regular}}\) = false regular predictions and \(F_{\text{chaotic}}\) = false chaotic predictions. Our test sets are well balanced, hence we do not need to consider alternatives [1, 2]. We will also consider a precision measure for chaotic and regular orbits: \[P_{\text{chaotic}}:=\frac{T_{\text{chaotic}}}{T_{\text{chaotic}}+F_{\text{chaotic}}},\qquad P_{\text{regular}}:=\frac{T_{\text{regular}}}{T_{\text{regular}}+F_{\text{regular}}}. \tag{6}\]

\begin{table}
\begin{tabular}{l r r r}
\hline \hline
Data sets & Chaotic & \multicolumn{2}{c}{Regular} \\
\cline{3-4}
 & & Periodic & Non-periodic \\
\hline
Logistic, linear-spaced & 27\% & 72\% & 1\% \\
Logistic, log-spaced & 46\% & 45\% & 9\% \\
Sine-circle, linear-spaced & 41\% & 49\% & 10\% \\
\hline \hline
\end{tabular}
\end{table}
Table 1: Subdivision of the data sets: The logistic log-spaced data set is more balanced than the logistic linear-spaced data set. For the log-spaced data the parameters are logarithmically spaced at the onset of chaos.
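A sketch of how such parameter grids and input windows could be produced is given below; the grid sizes, transient length, and the exact form of the log-spacing around the accumulation point are our assumptions.

```python
import numpy as np

def orbit(f, x0, n, transient=1_000):
    """Return n samples of x_{i+1} = f(x_i) after discarding a transient."""
    x = x0
    for _ in range(transient):
        x = f(x)
    out = np.empty(n)
    for i in range(n):
        out[i] = x
        x = f(x)
    return out

N = 500                          # input window length used throughout
Omega = 0.606661

# Logistic parameters, log-spaced so that they accumulate at the onset of
# chaos (grid size and offsets are our choices).
mu_acc = 3.56995
mus = mu_acc + np.logspace(-4, np.log10(4.0 - mu_acc), 1_000, endpoint=False)
logistic = [orbit(lambda x, m=m: m * x * (1.0 - x), 0.5, N) for m in mus]

# Sine-circle parameters, linear-spaced on [0, 5] as in the paper.
betas = np.linspace(0.0, 5.0, 1_000)
sine_circle = [
    orbit(lambda t, b=b: (t + Omega - b / (2 * np.pi) * np.sin(2 * np.pi * t)) % 1.0,
          0.5, N)
    for b in betas
]
```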
We note that regular non-periodic orbits for the logistic map typically correspond to orbits which accumulate around orbits of low period, whereas regular non-periodic orbits for the sine-circle map can exhibit quasi-periodic dynamics which cover a large domain of the phase space.

Figure 2: Logistic map period distributions for linear- and log-spaced data: We consider periodic orbits before and after the onset of chaos. The period distribution for the log-spaced data set is more uniform than for the linear-spaced data set.

## 3 Direct reconstruction of Lyapunov exponents via Savitsky-Golay polynomial filtering

We can reconstruct the Lyapunov exponent directly from the time series by approximating derivatives. Here, we consider Savitsky-Golay filtering, as the fluctuations occurring for Lyapunov exponents close to zero are smoothed out, which makes it outperform the classical method [22, 1]. Savitsky-Golay is used to obtain the graph of \(f\) (1). The coefficients of the Savitsky-Golay approximation can be used to compute the derivative of \(f\). Standard Savitsky-Golay works on an equidistant grid [15], but the extension to a non-equidistant grid is straightforward. We will briefly outline the method. The approximation of the graph of \(f\) is described by the points \((x_{i},x_{i+1})\). We sort these points with respect to \(x_{i}\). Denote the reordered points by \((\overline{x}_{i},y_{i})\). Consider \(k\) consecutive points \((\overline{x}_{i},y_{i})\) through which we interpolate an \(m\)-degree polynomial with coefficients \(\mathbf{c}\in\mathbb{R}^{m+1}\). For convenience we will consider \(i=1,\cdots,k\). Take \(\kappa=(k-1)/2\). Define the non-equidistant design matrix by \(A_{ij}=(\overline{x}_{i}-\overline{x}_{\kappa})^{j}\). Define \(\mathbf{y}=(y_{1},\cdots,y_{k})^{T}\). The set of equations that needs to be solved is \[A\mathbf{c}=\mathbf{y}.\] From \(\mathbf{c}\) we can obtain the derivatives at \(x_{i}\) by straightforward calculus. Finally, we log-average over these derivatives to obtain an estimate for the Lyapunov exponent. The graph of the attractor will generally consist of components which are disconnected from each other, so we perform Savitsky-Golay polynomial filtering on each component separately. Finally, we note that this procedure does not directly extend to higher dimensions.

## 4 Large Kernel Convolutional Neural Networks (LKCNN)

We review here the core features of Large Kernel Convolutional Neural Networks (LKCNN) as introduced in [1]. A Large Kernel Convolutional Neural Network has convolutional layers whose kernel size is large in relation to the number of feature channels. Conversely, standard convolutional networks usually consider a large number of feature channels in relation to the kernel size [12, 21]. In this paper, as in [1], we use LKCNN for binary classification of sequences.

### 4.1 Architecture

We report in Figure 3 the LKCNN architecture we employ to tackle (2). Specifically, we stack the following layers in a feed-forward fashion:

* two convolutional layers (kernel size: 100, large in relation to the signal length; stride 2; ReLU activation);
* maxpooling layer (pool size 2);
* dropout regularization layer (rate 0.5);
* dense layer (sigmoid activation);
* final dense layer with softmax activation over the two classes in \(L\).

In other words, the network outputs a probability vector \((p_{\text{regular}},p_{\text{chaotic}})=(p_{\text{regular}},1-p_{\text{regular}})\). We consider an input sequence as chaotic if \(p_{\text{chaotic}}>0.5\) and regular otherwise.
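A minimal PyTorch sketch of this stack follows; the width of the first dense layer is a placeholder of ours (the paper tunes it in Appendix B), and the final softmax is folded into the cross-entropy loss, as is idiomatic in PyTorch. The implementation used in [1] may differ in detail.

```python
import torch.nn as nn

class LKCNN(nn.Module):
    """Sketch of the LKCNN stack described above (dense width is our guess)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 5, kernel_size=100, stride=2), nn.ReLU(),  # -> 5 x 201
            nn.Conv1d(5, 5, kernel_size=100, stride=2), nn.ReLU(),  # -> 5 x 51
            nn.MaxPool1d(2),                                        # -> 5 x 25
            nn.Dropout(0.5),
            nn.Flatten(),                                           # -> 125
        )
        self.classifier = nn.Sequential(
            nn.Linear(125, hidden), nn.Sigmoid(),
            nn.Linear(hidden, 2),   # logits; softmax is applied inside the loss
        )

    def forward(self, x):           # x: (batch, 1, 500)
        return self.classifier(self.features(x))
```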
We employ cross-entropy as training loss. The overall architecture is the same as in [1] but with different hyperparameters. In fact, we use exclusively the Lyapunov exponent (3) to assign the classification labels on the training set. In particular, we estimate the Lyapunov exponent employing orbits much longer than \(N=500\) steps. Our data sets are then built by restricting these long orbits to chunks \(N\) steps long. In contrast, [1] considers a ground-truth labeling criterion based on a combination of the Lyapunov exponent and the Shannon entropy. In Appendix B, we provide an exhaustive analysis of the network performance considering various strides, activation functions and sizes of the first dense layer. As in [1], we observe that increasing the number of convolutional layers does not improve the performance. This suggests that sequence information is maximally condensed by 2 convolutional layers.

Figure 3: LKCNN architecture: The input time series \(x\) is classified as chaotic or regular. LKCNNs have a typical CNN architecture, but the convolutional layers have a large kernel size. The two Conv1D layers each have 5 filters with kernel size 100. The loss function used is cross-entropy. The internal analysis of the network will focus on the effect of the input on the activation at the flatten layer, highlighted in orange. Specifically, we will see how the activation of the layers in the blue dashed rectangle is mapped to an activation at the flatten layer.

## 5 Uncovering the hidden workings of LKCNNs

### 5.1 LKCNNs can classify non-periodic orbits and outperform Lyapunov reconstruction

We subdivide the input sequences corresponding to regular orbits into periodic and non-periodic. As in [1] we train the LKCNN on the logistic log-spaced data set and test it on the sine-circle map, see Table 2. The training accuracy was set to 0.975. We observe low performance on the regular non-periodic subset. This is to be expected, since the non-periodic subsets of the logistic and sine-circle maps are qualitatively different. If we train the network on the sine-circle data set, we observe that the accuracy on the training set will not exceed 0.8. If we remove the periodic trajectories from the data set, we observe accuracy exceeding 0.8, see Table 2. Hence, this is a labeling problem where the regular trajectories need to be subdivided into periodic and non-periodic trajectories to distinguish them from chaotic trajectories.

\begin{table}
\begin{tabular}{l l l l l}
\hline \hline
 & & \multicolumn{3}{c}{Accuracy (\(\mu\pm\sigma\))} \\
\cline{3-5}
Training set & Test set & Chaotic & \multicolumn{2}{c}{Regular} \\
\cline{4-5}
 & & & Periodic & Non-periodic \\
\hline
Logistic log-spaced & Logistic log-spaced & 0.98 \(\pm\) 0.0075 & 0.98 \(\pm\) 0.0042 & 0.94 \(\pm\) 0.014 \\
Logistic log-spaced & Sine-circle linear-spaced & 0.88 \(\pm\) 0.090 & 0.93 \(\pm\) 0.052 & 0.045 \(\pm\) 0.01 \\
\hline \hline
\end{tabular}
\end{table}
Table 2: Performance of the LKCNN on our data subsets. We observe that training on the logistic set generalizes well to the chaotic and periodic data subsets of the sine-circle set, but performs poorly on the regular non-periodic set of the sine-circle map. The latter is to be expected since the non-periodic trajectories are qualitatively different.
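The labeling procedure described above can be sketched as follows; the checkpoint lengths and the convergence test are our guesses at the protocol, which the paper specifies only as using orbits much longer than \(N=500\) and \(k\)-decimal convergence of the estimate.

```python
import numpy as np

def label_by_lyapunov(f, df, x0, k=4, checkpoints=(10**4, 10**5, 10**6)):
    """Assign 'chaotic'/'regular' from the sign of (3), accepting the
    estimate only once its first k decimals agree across checkpoints.
    Checkpoint lengths and the exact test are our assumptions."""
    x, acc, i, rounded = x0, 0.0, 0, []
    for n in checkpoints:
        while i < n:
            acc += np.log(abs(df(x)))
            x = f(x)
            i += 1
        rounded.append(round(acc / i, k))
    if len(set(rounded)) != 1:
        return None                      # not converged to k decimals: discard
    return "chaotic" if rounded[0] > 0 else "regular"

mu = 3.9
print(label_by_lyapunov(lambda x: mu * x * (1 - x),
                        lambda x: mu * (1 - 2 * x), x0=0.1))
# 'chaotic', or None if the estimate has not yet stabilized to k decimals
```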
\begin{table}
\begin{tabular}{l l l}
\hline \hline
Model & \(P_{\text{chaotic}}\) & \(P_{\text{regular}}\) (non-periodic) \\
\hline
LKCNN & 1.0 & 1.0 \\
Short-time Lyapunov exponent (input sequence) & 1.0 & 0.99 \\
Direct reconstruction of the Lyapunov exponent with Savitsky-Golay & 1.0 & 0.93 \\
\hline \hline
\end{tabular}
\end{table}
Table 3: Precision of models trained on non-periodic sine-circle orbits (cf. Equation (6)): We observe that the short-time Lyapunov exponent, which is the Lyapunov exponent computed over the input sequence, very accurately predicts the long-time Lyapunov exponent. Hence, we can compare the LKCNN prediction to the prediction based on direct reconstruction of Lyapunov exponents via Savitsky-Golay polynomial filtering. The LKCNN outperforms the reconstruction.

In Table 3, we investigate the performance on non-periodic data for the LKCNN, the average Lyapunov exponent over the input sequence (which we refer to as the short-time Lyapunov exponent), and the direct reconstruction of the Lyapunov exponent with Savitsky-Golay polynomial filtering, Section 3. The short-time Lyapunov exponent is obtained with knowledge of \(f\), and the direct reconstruction with Savitsky-Golay is obtained without knowledge of \(f\). As the short-time Lyapunov exponent is nearly perfect, it implies that if the direct reconstruction could perfectly determine the derivatives then it could nearly perfectly classify the sequence. Hence, we shall compare the performance of the direct reconstruction to the LKCNN. We observe that the LKCNN performs significantly better. To add more rigor, the data set contains sequences where the Lyapunov exponent has converged to \(k\)-decimal precision, which are then rounded to \(k\) decimals to determine the label. This methodology applies to the short-time Lyapunov exponent and the direct reconstruction of the Lyapunov exponent in Table 3. The results in Table 3 are for \(k=4\), but these results persist, see Appendix D.

### 5.2 Mapping periodic input to periodic activation is model-independent over generic untrained LKCNNs

We consider the LKCNN trained on the log-spaced logistic data set, Section 2.2. The activations from the first convolutional layer to the flatten layer preserve non-periodic and periodic structures. For a chaotic sequence the activations are non-periodic, and for a periodic sequence the activations are periodic, see Figure 4. We first present a rigorous result on the periodicity of a convolutional layer with stride \(s\) for periodic inputs and then extend this result to LKCNN.

#### 5.2.1 Periodic activations of convolutional layers

Borrowing from equivariance theory for convolutional networks [14], we show that in an idealized setting where the network's input sequence has infinite length, a convolutional filter applied to a \(k\)-periodic sequence yields an activation which is \(k\)-periodic. Furthermore, for stride \(s\) the period becomes \(k/s\) if \(k\) is divisible by \(s\). Denote by \(F:\mathbb{Z}\to\mathbb{R}\) the sequence feature map, which is the map associated to a sequence \(\{x_{i}\}\) such that \(F(i)=x_{i}\). Denote the convolutional filters by \(\phi:\mathbb{Z}\to\mathbb{R}\); the convolutional filters also take the sequence indexes as input. A map \(F\) is \(k\)-periodic if \(F(z+k)=F(z)\) for all \(z\in\mathbb{Z}\). Convolution of a sequence feature map \(F\) by the filter \(\phi\) is defined by \[[F*\phi](z):=\sum_{y\in\mathbb{Z}}F(y)\phi(z-y).\] Define \(S:\mathbb{Z}\to\mathbb{Z}\) by \(S(z)=sz\) with \(s\in\mathbb{N}\).
Then a convolution with stride \(s\) is given by \([F*\phi]\circ S\). Denote \(k\in\mathbb{N}\) divisible by \(s\) as \(s|k\).

**Lemma 7**.: _Let \(F\) be \(k\)-periodic. If \(s|k\) then the period of \([F*\phi]\circ S\) is \(k/s\). If \(s\nmid k\) then the period of \([F*\phi]\circ S\) is \(k\)._

Observe that if \(F\) is \(k\)-periodic and \(s=1\) then the period of \([F*\phi]\circ S\) is \(k\). Note that \([F*\phi]\) can have period less than \(k\) for suitably chosen \(\phi\). For example, if \(\phi\) is the zero function then \([F*\phi]\) has period \(1\).

Proof.: Take \(\hat{k}\in\mathbb{N}\). We define \(\hat{y}=y-s\hat{k}\), so that \(y=\hat{y}+s\hat{k}\). We can write \[F(y)\phi(s(z+\hat{k})-y)=F(\hat{y}+s\hat{k})\phi(sz-\hat{y}).\] Using the above we can write \[[F*\phi]S(z+\hat{k})=\sum_{\hat{y}\in\mathbb{Z}}F(\hat{y}+s\hat{k})\phi(sz-\hat{y}).\] Since \(F(\hat{y}+s\hat{k})=F(\hat{y})\) whenever \(k\,|\,s\hat{k}\), if \(s|k\) we obtain that \([F*\phi]S(z+\hat{k})=[F*\phi]S(z)\) for \(\hat{k}=k/s\), and if \(s\nmid k\) we obtain that \([F*\phi]S(z+\hat{k})=[F*\phi]S(z)\) for \(\hat{k}=k\).

Figure 4: Activations of the LKCNN: We visualize the activation for a correctly classified chaotic and periodic trajectory. From the first convolutional layer to the flatten layer the non-periodic, (a), or periodic structure, (b), is preserved. The last two layers are dense and do not preserve any periodic or non-periodic structure.

#### 5.2.2 Periodic activations of LKCNN

The result from Section 5.2.1 assumes an infinitely large network. Practically, the periodicities that can be captured by the network depend on the size of the network. Observe that the largest activation period that the first convolutional layer can capture is \(100\), since it has size \(201\times 5\), see Figure 4. Hence, since we have stride \(2\), the largest period the network can capture in terms of the input \(x\) is \(200\). Denote by \(k\) the period of the activation for maxpooling and by \(p\) the pool size. Then if \(p|k\) the output period is \(k/p\), and if \(p\nmid k\) then the output period is \(k\). Here, we have \(p=2\). Again, in a practical setting the size of the layer restricts the periods that can be captured. Here, the output activation period can be at most \(12\). The flatten layer will have periodic activation if the dropout layer also has periodic activation. Furthermore, if the activation matrix at the dropout layer is non-constant then the periodicity is increased by a multiplicative factor of \(5\), since the number of columns of the activation matrix is \(5\), which is prime. Our experiments indicate that the activation at the flatten layer can vary as we vary the period of \(x\), Figure 5 and Appendix A. The activation periodicity is lost in the last two dense layers. Hence, to study network periodicity we will focus on the periodicity at the flatten layer in relation to the periodicity of the sequence. Combining the results from this section with Lemma 7 we obtain the following heuristic statement: _for an untrained LKCNN we expect that for \(k\leq 96\) the period at the flatten layer is \(5k/2^{i}\) where \(i\) is the largest \(i\leq 3\) for which \(2^{i}|k\)._

### 5.3 Activation periodicity influences performance

Generally, a periodic input implies a periodic activation at the flatten layer. We refer to the latter as the network period. For example, the period-2 orbit in Figure 5 has network period 5. Orbits with the same period map into a single network period if the orbit period is sufficiently small; the sketch below illustrates how this network period can be measured.
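The following is a minimal sketch of such a measurement, assuming the convolutional part of the model (up to the flatten layer) is exposed as a `features` module; the module name and tolerance are ours, and the measured multiplicative factor also depends on how the flatten operation orders channels and positions.

```python
import numpy as np
import torch

def smallest_period(v, tol=1e-6):
    """Smallest shift p under which the finite vector v is invariant."""
    v = np.asarray(v, dtype=float)
    for p in range(1, len(v)):
        if np.max(np.abs(v[p:] - v[:-p])) < tol:
            return p
    return len(v)

def network_period(features, one_period, n_input=500):
    """Tile a single orbit period to the input length, push it through the
    layers up to the flatten layer, and return the activation period."""
    x = np.resize(np.asarray(one_period, dtype=np.float32), n_input)
    with torch.no_grad():
        a = features(torch.from_numpy(x).view(1, 1, -1))
    return smallest_period(a.numpy().ravel())
```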
Consequently, we represent this mapping by a binary matrix which identifies orbit periods with network periods, see Figure 6. We note that we could formulate these results in a non-binary setting where the orbit period is not uniquely mapped to a single network period; however, similar results still hold, see Appendix C. The binary matrix in Figure 6 is called the period matrix. Period matrices can differ per trained model. We consider 250 models trained over 1000 epochs with patience 50. We then classify the trained models using the period matrices. The largest class consists of 32% of trained models and has the period matrix depicted in Figure 7(a), which will be referred to as period matrix \(A\). Single classes in the complement of \(A\) make up at most 4% of all models. We consider these classes to be non-significant. We observe that 56% of class \(A\) models converge to a local minimum which has accuracy between 55% and 60%, Figure 7(b). For the models in the complement of class \(A\) we have that Q1 and Q3 are at 0.94 and 0.98 accuracy, respectively. Hence, models of class \(A\) have a negative effect on the overall performance.

Figure 5: Activation at the flatten layer: We consider the first 15 nodes at the flatten layer for period-2 trajectories given by \(x_{1},x_{2}\). We observe that the periodicity is invariant for all three cases. Reversing the order alters the activation (a)(b). The position of the maximum activation, yellow, and minimal activation, purple, can vary (a)(b)(c).

The period matrix \(A\) exactly corresponds to the theoretically determined periods for untrained networks, Section 5.2.2. For other period matrices the weights of the corresponding models reduce the network period. This appears to generally have a positive effect on the performance. Heuristically, we can argue that there is no benefit to preserving a property of the untrained network which is imposed on the network independently of data properties. Therefore, models with period matrix \(A\) underperform in relation to their complement.

## 6 Discussion

In this work we consider the issue of classifying time sequences, discriminating regular from chaotic dynamics. To this purpose we compare and analyze a state-of-the-art approach based on Large Kernel Convolutional Neural Networks (LKCNNs) and a traditional Lyapunov reconstruction method. LKCNNs have a simple structure: few convolutional layers with peculiarly large filters connected to a dense layer. This structure simplifies numerical experiments and allowed us to identify relevant network features key for performance. Specifically, we have shown that to classify signals with high accuracy, LKCNNs use qualitative properties of the input sequence. This enables them to outperform direct Lyapunov exponent reconstruction methods.

Figure 6: Period matrix: On the left a period-2 orbit with flatten-layer period 5 is depicted. For each periodic orbit we identify the orbit's period with a period at the flatten layer. The period at the flatten layer is referred to as the network period. Orbits with the same period do not uniquely map to a network period. However, in the period range 2 to 48 the majority of periodic orbits map to a single network period. We associate a binary matrix to this identification. This period matrix can differ per trained model. However, we can identify classes of models which have the same period matrix.
Here, we consider a reconstruction approach based on Savitsky-Golay polynomial filtering. LKCNNs' higher classification accuracy strongly emerges as we consider sequences whose Lyapunov exponent is close to zero: the hardest to evaluate, as the average rate of separation between consecutive time-steps is almost constant. We investigated the emerging connection between input periodicity and periodicity in the activation of the non-dense layers. We have shown this aspect to be paramount for performance. For this analysis, we have introduced the notion of a period matrix, which is a two-dimensional binary matrix representing the mapping between periodic inputs and the network's activation excluding the dense layers. We considered generic untrained networks first. For these, we have determined theoretically the connection between the input period and the periodicity of the activation of the convolutional layers. Effectively, this yields a period mapping-type relation. This mapping is represented by the period matrix. We showed that for generic networks this period matrix is unique. The weights minimizing the loss function are meager within the weight space. Consequently, this period mapping need not be preserved by training. Indeed, we observed that trained models can have a variety of period mappings. Nevertheless, a significant percentage of trained models have a period matrix equal to that of generic untrained models. These models underperform. In other terms, we numerically verified that the period matrix is a feature correlating with performance. Heuristically, there is no benefit to preserving a property of the untrained network if it has no relation to the data. If a property of the data is reflected on the network, we would expect an increase in performance.

Figure 7: Models classified by period matrix: We investigated the period matrices associated with 250 randomly initialized trained models. In (a) we observe that the most common period matrix is period matrix \(A\), to which 32% of models converge. The second largest period matrix class is 4%. We compare the performance between models with period matrix \(A\) and models in their complement. In (b) we plot the density functions for the accuracy of both classes. We observe that models with period matrix \(A\) have on average lower performance.

Models with high performance have period matrices featuring network periods which are lower compared to the case of a generic untrained network. Additionally, high performance is not a property of a singular period matrix. Two aspects remain outstanding: how the network training yields period reduction, and whether properties of the periodicity matrix correlating with high classification performance can be identified independently of the periodicity matrix of the untrained network. Possibly, the absence of clear connections indicates that restrictions on the period matrix adversely affect performance. Finally, here we focused on discrete dynamical systems. It would be relevant to formulate suitable experiments in a continuous setting. We note that our approach leverages the periodicity of trajectories. Hence, it could be applied to other discrete dynamical systems or settings where it is possible to construct Poincaré maps. This paper illustrates how dynamical system problems can be solved using neural networks and how neural network problems can be understood using dynamical systems.
Thanks to a neural network approach we could achieve higher performance than traditional methods at classifying chaotic from regular time series. On the other hand, an approach hinged on dynamical systems analysis has been key to understanding the inner workings of the network.

**Acknowledgments:** During this research Thomas de Jong was also affiliated with the University of Groningen and Xiamen University. Many thanks to Alef Sterk for his helpful comments and literature recommendations. Also, many thanks to Klaas Huizenga for providing hardware during Thomas de Jong's stay in Groningen. This research was partially supported by JST CREST grant number JPMJCR2014.

## Appendix A Generalizations of 2-periods

Using the method from Section 5.2 we can identify the activation at the flatten layer for a single orbit. We can consider all possible orbits and investigate to which activation they are mapped. For visualization purposes we will only consider period-2 orbits. Recall from Figure 2 that the log-spaced data set does not contain fixed points. We consider two trained models. For both models the period at the flatten layer is either 1 or 5. This results in the bifurcation diagrams of Figure 8. In Figure 8 the first and second iterate of the period-2 orbit are on the \(x\)-axis and \(y\)-axis, respectively. We have identified period-2 orbits by the position of their zero activations at the flatten layer, e.g. an activation of \([0,0,0,1,0]\) has the same color as an activation of \([0,0,0,0.1,0]\), as their zeros have the same position. We observe that for all diagrams in Figure 8 the domains are bounded by linear functions of the form \(x_{2}=ax_{1}+b\) with \(a>0\). Specifically, if we consider the activations which are close to a period-2 orbit in the 2-norm we also obtain these linear functions, see Figure 9. In the bifurcation diagrams in Figure 8(a) and Figure 8(c) the activation classification is not preserved under changing the order of \(x_{1},x_{2}\). This order preservation is also not satisfied for the prediction, see Figure 8(b) and Figure 8(d). As expected, the prediction is incorrect if the prediction is applied to period-2 orbits far away from the trained data. In Figure 8(c) there are 9 domains (excluding the yellow data domain). This means that there are different permutations of zero activation with the same number of zeros, since the period at the flatten layer is at most 5. From Figure 8(a) we observe that this activation classification is not locally preserved in a neighborhood of the data, since the yellow line borders the purple and red domains.

Figure 8: Period-2 orbit bifurcation diagrams with predictions. The yellow line corresponds to period-2 orbits in the data. The other colors in (a)(b) correspond to period-2 orbits with zero-activation at the same position.

Figure 9: Sub-domain of Figure 8(a): The purple domain corresponds to a sub-domain of the purple domain in Figure 8(a). This domain has activation close to the yellow data point in the 2-norm.

## Appendix B Architecture optimization

Our experiments show that the performance can be improved by adjusting the following two hyperparameters:

* stride in the two convolutional layers,
* number of nodes in the fully connected dense layer after the flatten layer.

All hyperparameter variations exhibited instability when trained using the log-spaced data set. We resolved this by setting the learning rate to 0.000388. The experiments in Figure 10 concern the accuracy over the validation set of 50 models for 200 epochs per architecture.
Model 0 is the LKCNN architecture used in [1]. Since most of the experiments concern models trained with high accuracy, we are interested in selecting the hyperparameters which can consistently train models with high accuracy. We selected model 1, since the accuracy range over Q2 to Q3 is the highest.

Figure 10: Validation accuracy boxplots for different hyperparameters.

## Appendix C Period matrices

### Period matrix classes

We recall that period matrix class \(A\), Figure 7(a), makes up 32% of the 250 trained models. In Figure 11 we visualize the 2nd, 3rd and 4th largest classes.

Figure 11: Period matrices: The percentage indicates the size of the class with respect to all the models.

### Non-binary period matrices

Recall that for low orbit periods we generally obtain a unique period of the network. Most high orbit periods map onto more than one network period. We normalize the network-period counts over each orbit period by dividing by the total number of orbits of that period, and then put the values in a matrix as before, see Figure 12. In Figure 12 we visualize the 4 most common classes. Figure 12(a) is the major class, making up period matrix \(A\) of Figure 7(a). We also note that the maximum over each column of Figure 12(c) gives Figure 12(b). If we take the maximum over each column of Figure 12(b) or Figure 12(d) we do not get Figure 12(a) or Figure 12(c).

Figure 12: Non-binary period matrices: The percentage indicates the size of the class with respect to all the models.

## Appendix D Performance models non-periodic sine-circle data

In Table 3 of Section 5.1 we presented the performance of the LKCNN, the Savitsky-Golay reconstruction of the Lyapunov exponent, and the short-time Lyapunov exponent at classifying chaos for non-periodic trajectories of the sine-circle map. For non-periodic trajectories the performance depends strongly on the decimal precision of the Lyapunov exponent. Here we consider sequences which have a Lyapunov exponent that converges in \(k\) decimals and classify the sequence as chaotic using the first \(k\) decimals. In Figure 13 we consider the performance as a function of the convergence of the first \(k\) decimals of the Lyapunov exponent. The three models have near-perfect performance when we only consider 2-decimal convergence or restrict to the chaotic subset. We note that the qualitative observations of Table 3 also apply if we consider convergence in 3 decimals. Evaluating the Lyapunov exponent over regular non-periodic sequences we typically observe fluctuations around zero, which makes the classification task more difficult; this is clearly reflected when we compare the performance for decimal precision greater than 2. Observe that the performance of the short-time \(\lambda\) is non-monotone over the regular non-periodic subset. The sequences corresponding to \(k\)-decimal convergence are a subset of the sequences corresponding to \((k+1)\)-decimal convergence. This property is insufficient to conclude anything about monotonicity in the performance results. These subsets do decrease in size with increasing \(k\). For \(k=5\) the resulting subset is so small that the results are not reliable from a generalization perspective.

## Appendix E Regular non-periodic logistic map trajectories

If we train the LKCNN on the sine-circle data set from Section 2.2 we obtain poor performance on the regular non-periodic set, see Table 2. This begs the question why we obtain such good performance on the logistic set for the regular non-periodic subset. It turns out that the majority of this subset is in a sense close to periodic trajectories of low period.
Figure 13: Performance of the models on the non-periodic sine-circle data set as a function of the decimal convergence of the Lyapunov exponent: We consider the performance of the LKCNN, the Savitsky-Golay reconstruction of the Lyapunov exponent (SG) and the short-time \(\lambda\) (averaged Lyapunov exponent over the input sequence). The \(x\)-axis corresponds to a subset where the first \(k\) decimals of the Lyapunov exponent converge. Hence, the classification will also be determined by the first \(k\) decimals. All models have near-perfect performance when we only consider 2-decimal convergence or restrict to the chaotic subset.

We divide \(x\) into consecutive chunks of length \(k\) by defining \[y_{k}^{m}(x):=(x_{1+k(m-1)},\ldots,x_{1+km}).\] We consider the error given by minimizing over \(k\) the average difference between consecutive \(y_{k}^{m}(x)\)-chunks: \[\text{period error}(x):=\min_{k\leq\frac{n}{2}}\frac{1}{\lfloor n/k\rfloor-1}\sum_{m=2}^{\lfloor n/k\rfloor}\big{|}y_{k}^{m-1}(x)-y_{k}^{m}(x)\big{|},\] \[K(x):=\min\underset{k\leq\frac{n}{2}}{\text{argmin}}\frac{1}{\lfloor n/k\rfloor-1}\sum_{m=2}^{\lfloor n/k\rfloor}\big{|}y_{k}^{m-1}(x)-y_{k}^{m}(x)\big{|}.\] The period error measures how close \(x\in\mathbb{R}^{n}\) is to a periodic orbit. More specifically, we have that if \(\{\tilde{x}_{i}\}\) is a \(k\)-periodic sequence and \(x=(\tilde{x}_{1},\tilde{x}_{2},\cdots,\tilde{x}_{n})\) with \(n\geq 2k\), then period \(\text{error}(x)=0\) and \(K(x)=k\). Note that it would be computationally infeasible to compare all chunks. However, for the computation we consider the minimum of the period error for the original and reversed sequence. For period \(\text{error}(x)<10^{-5}\) we consider the accuracy of non-periodic orbits which are labeled as regular. If \(K(x)=2^{k}\) for \(k<8\) the network scores high accuracy, since the majority of periodic orbits in the data have period \(2^{k}\), see Figure 15. In Figure 14 we exclude \(K(x)=2^{k}\). We observe that only for non-periodic trajectories with small \(K(x)\) do we obtain high accuracy.

Figure 14: Accuracy on non-periodic trajectories, logistic data: As the approximate period \(K(x)\) is increased, the accuracy decreases. This gives evidence that the network can only classify these trajectories correctly if they are close to trajectories of sufficiently small period. An exception are periods which are a power of 2. These periods have been excluded in the graph, since the network has accuracy close to 1 on these subsets. This is to be expected since the majority of periodic trajectories have period \(2^{k}\)
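For completeness, a sketch of the period-error computation defined above follows; the chunk-boundary handling and the averaging over the \(m-1\) consecutive differences reflect our reading of the index convention.

```python
import numpy as np

def period_error(x):
    """Return (error, K): the period error and the smallest minimizer K(x).

    Chunks are non-overlapping blocks of length k; our own convention for
    the boundary indices. k <= n//2 guarantees at least two chunks.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    best_err, best_k = np.inf, 1
    for k in range(1, n // 2 + 1):
        m = n // k
        chunks = x[: m * k].reshape(m, k)
        err = np.mean(np.abs(np.diff(chunks, axis=0)))  # consecutive chunks
        if err < best_err:                              # strict '<' keeps the
            best_err, best_k = err, k                   # smallest minimizer
    return best_err, best_k

# As in the text, take the minimum over the original and reversed sequence:
# err = min(period_error(x)[0], period_error(x[::-1])[0])
```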
2305.06344
Orthogonal Transforms in Neural Networks Amount to Effective Regularization
We consider applications of neural networks in nonlinear system identification and formulate the hypothesis that adjusting a general network structure by incorporating frequency information, or another known orthogonal transform, should result in an efficient neural network that retains its universal properties. We show that such a structure is a universal approximator and that using any orthogonal transform in the proposed way implies regularization during training by adjusting the learning rate of each parameter individually. We show empirically, in particular, that such a structure, using the Fourier transform, outperforms equivalent models without orthogonality support.
Krzysztof Zając, Wojciech Sopot, Paweł Wachel
2023-05-10T17:52:33Z
http://arxiv.org/abs/2305.06344v2
# Frequency-Supported Neural Networks for Nonlinear Dynamical System Identification

###### Abstract

Neural networks are a very general type of model capable of learning various relationships between multiple variables. One example of such relationships, particularly interesting in practice, is the input-output relation of nonlinear systems, which has a multitude of applications. Studying models capable of estimating such a relation is a broad discipline with numerous theoretical and practical results. Neural networks are very general, but multiple special cases exist, including convolutional neural networks and recurrent neural networks, which are adjusted for specific applications, namely image and sequence processing, respectively. We formulate the hypothesis that adjusting a general network structure by incorporating frequency information should result in a network specifically well suited to nonlinear system identification. Moreover, we show that it is possible to add this frequency information without a loss of generality from a theoretical perspective. We call this new structure _Frequency-Supported Neural Network_ (FSNN) and empirically investigate its properties.

Keywords: Neural Networks, Nonlinear Dynamics, Fourier Transform, System Identification

## 1 Introduction

Among the different tasks of modern system modelling and identification, one can consider the problem of estimating the system's output, conditioned on its past values and excitation. In this context, one can further focus on two classes of problems: _simulation modelling_ and _predictive modelling_ Schoukens and Ljung (2019). _Simulation modelling_ is a task in which outputs are predicted based only on the input signal, while _prediction modelling_ requires measurements of the system trajectory to predict future states (hybrid approaches are also possible). This paper considers simulation modelling, which can be framed as an optimization problem of \(n\)-step-ahead prediction based on \(m\) samples of the input sequence. The training dataset contains measurements of input and output signals from the system, which are grouped into windows of fixed length. The number of methods developed for the identification of nonlinear systems is very large. Some of them include domain knowledge Hjalmarsson and Schoukens (2004), Schoukens et al. (2014), Ljung et al. (2004), while others are very general and require only very mild assumptions about the nature of the modelled systems Sliwinski et al. (2017), Tanaka et al. (2019), Geneva and Zabaras (2020). One of the most successful classes of models applied to the identification of nonlinear dynamics are neural networks Ribeiro et al. (2020), Andersson et al. (2019), Geneva and Zabaras (2022). Those models are very general structures, capable of modelling many types of input-output relationships without major changes to the architecture. Nevertheless, even given the generality of the network structure, many specialized networks have been developed specifically for nonlinear dynamics and usually achieve better results than the less specialized structures Forgione and Piga (2021), Beintema and Toth (2021). Often those specialized networks are built using general _a priori_ knowledge about the nature of the system; sometimes, they have knowledge about physical equations baked into the architecture Karniadakis et al. (2021).
One of the causes of the good performance of specialized models in specific cases is that those networks have useful inductive _a priori_ knowledge about the task added during the architecture development process. Those biases can be very general, such as translation invariance for convolutional networks or the causality and memory introduced in recurrent neural networks, particularly long short-term memory networks Hochreiter and Schmidhuber (1997) and gated recurrent units Chung et al. (2014). Dynamical system identification, as a sequence modelling task, has a number of similarities to natural language processing. However, the inductive biases are different for both problems. In language, positional information is much more relevant than in dynamical systems modelling. In turn, for dynamical systems, it is possible to derive useful biases from physical insight. Our hypothesis is that adding frequency information to the network's structure will be useful inductive knowledge for dynamical system identification in a simulation setting. A similar approach was used to derive Fourier neural operators, which achieve good performance in fluid dynamics using predictive modelling Li et al. (2021), while still being very general and applicable to various kinds of physical or engineering systems. This hypothesis leads to the formulation of _frequency-supported neural networks_ (FSNN), which are neural structures designed for the simulation modelling task1. They process the system input signal in parallel in the time and frequency domains. Both of those branches are shown to be universal approximators, so such a structure also retains this property originally proved for feedforward networks. This type of network is capable of successful identification, which is verified on two widely used benchmarks and a toy problem. Moreover, experiments show that adding this frequency structure gives the model an edge over a similar model without this additional information. FSNNs are particularly useful for input signals which can be decomposed into a finite number of frequencies. Intuitively, this can be attributed to the fact that such a model requires only a single parameter to represent a pure-tone wave.

Footnote 1: Implementation is available at [https://github.com/kzajac97/frequency-supported-neural-networks](https://github.com/kzajac97/frequency-supported-neural-networks)

## 2 Frequency-Supported Neural Network

The novel structure of frequency-supported neural networks introduced in the paper is an extension of the multilayer perceptron, designed to incorporate frequency information about the input signals using the Fourier transform. The network is a layered structure consisting of blocks, which can be arranged in any way, as long as the correct lengths of the inputs and outputs are preserved. The FSNN block is the building block of such a network and can be arranged into a deep network or combined with other blocks. The input of the FSNN block is a sequence, which can be a series of measurements of a dynamical system or some other time-dependent variable. Its output is also a sequence, though its length does not need to be preserved. In general, the input can be multi-dimensional (as well as the output) and transformations between dimensions are possible using this type of network.

### FSNN Block

The proposed FSNN block consists of two parallel branches. One operates in the time domain and is simply a linear block processing the input sequence, whereas the other one is designed to focus on frequency space.
Outputs of both branches are added together and passed through a nonlinear activation function \(\sigma\). Any of the commonly used functions could be used, including _Sigmoid_, _ReLU_ or _Hyperbolic Tangent_ Dubey et al. (2022). Each FSNN block has four hyperparameters: the length of the _input_ sequence, \(T_{i}\), the length of the _output_ sequence it produces, \(T_{o}\), and the input and output dimensionalities, \(D_{i}\) and \(D_{o}\), respectively. This allows adjusting the model to different types of dynamical systems, even those with a high number of state dimensions. The output of the time branch can be expressed by equation (1), \[\hat{h}_{l}=xW_{l}^{\intercal}+b_{l}, \tag{1}\] where \(x\in\mathbb{R}^{T_{i}}\) is a row vector with length \(T_{i}\), \(W_{l}\) is a matrix of learnable parameters with shape \([T_{o}\times T_{i}]\), and \(b_{l}\) is a bias vector with length \(T_{o}\). The output \(\hat{h}_{l}\in\mathbb{R}^{T_{o}}\) is a row vector with length \(T_{o}\). Elements of the time branch are denoted with subscript \(l\) to distinguish them from the elements of the frequency branch, denoted with subscript \(f\). The frequency block uses the Fourier transform to convert the signal to frequency space, applies a learned linear transformation in this space and then converts back to the original domain. The real-valued Fourier transform is used Sorensen et al. (1987), so only the positive side of the spectrum is processed, due to the assumption that such a model will process only real-valued signals. Due to the FFT algorithm used in the implementation, it is most efficient on sequence lengths being a power of 2, _cf._ Brigham and Morrow (1967). Clearly, the learned parameters are complex-valued in this branch. The equations defining this branch are given by eq. (2), \[\hat{h}_{f}=\mathcal{F}^{-1}\left(\mathcal{F}(x)W_{f}^{\intercal}+b_{f}\right), \tag{2}\] where \(\mathcal{F}\) denotes the Fourier transform applied to the signal, and \(\mathcal{F}^{-1}\) is its inverse. The matrix of parameters \(W_{f}\) is complex-valued with shape \([1/2\,T_{o}\times 1/2\,T_{i}]\). It is smaller than in the time branch since the length of the \(x\) vector after applying the real-valued Fourier transform is half of its original length. Similarly, \(b_{f}\) is a complex-valued vector of parameters with length \(1/2\,T_{o}\). The output of each FSNN block is computed as the sum of the representations produced by both branches with a nonlinear activation function \(\sigma\), which is expressed by eq. (3), \[\hat{y}=\sigma(\hat{h}_{f}+\hat{h}_{l}). \tag{3}\] The FSNN structure can be extended to multiple-input multiple-output (MIMO) systems by extending the parameter matrices and using a reshape operation. The input in the MIMO case is a matrix with shape \([T_{i}\times D_{i}]\), which is converted into a column vector of length \(T_{i}D_{i}\), and the produced output is another matrix with shape \([T_{o}\times D_{o}]\). The matrices \(W_{l}\) and \(W_{f}\) are extended to shapes \([T_{o}D_{o}\times T_{i}D_{i}]\) and \([1/2\,T_{o}D_{o}\times 1/2\,T_{i}D_{i}]\), respectively. Bias vectors are also extended to contain the required parameters; \(T_{o}D_{o}\) for the time branch and \(1/2\,T_{o}D_{o}\) for the frequency branch. The output of the FSNN block in such a case is a vector with length \(T_{o}D_{o}\), which can be rearranged into a matrix with the desired shape. The rearranging can be done before or after the aggregation and activation function because they are both element-wise operations.
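As an illustration, a minimal single-channel PyTorch sketch of one FSNN block following eqs. (1)-(3) might look as follows. This is not taken from the official implementation linked above; the initialization scale, the GeLU activation, and the exact `rfft` length (\(\lfloor T_i/2\rfloor+1\) bins rather than exactly \(1/2\,T_i\)) are our assumptions.

```python
import torch
import torch.nn as nn

class FSNNBlock(nn.Module):
    """Minimal sketch of a single-channel FSNN block (D_i = D_o = 1)."""
    def __init__(self, t_in: int, t_out: int):
        super().__init__()
        # Time branch: plain affine map on the raw sequence, eq. (1).
        self.time = nn.Linear(t_in, t_out)
        # Frequency branch: complex-valued affine map on the rfft bins.
        f_in, f_out = t_in // 2 + 1, t_out // 2 + 1
        self.W_f = nn.Parameter(0.02 * torch.randn(f_out, f_in, dtype=torch.cfloat))
        self.b_f = nn.Parameter(torch.zeros(f_out, dtype=torch.cfloat))
        self.t_out = t_out

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, t_in)
        h_l = self.time(x)                                # eq. (1)
        xf = torch.fft.rfft(x, dim=-1)                    # to frequency space
        hf = xf @ self.W_f.T + self.b_f                   # learned map there
        h_f = torch.fft.irfft(hf, n=self.t_out, dim=-1)   # eq. (2), back to time
        return nn.functional.gelu(h_l + h_f)              # eq. (3)
```

A deep FSNN is then a stack of such blocks, with each block's \(T_o\) matching the following block's \(T_i\).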
### \(N\)-Step Ahead Prediction

The requirement for using the FSNN is the availability of a training dataset, which needs to be composed of input and output measurements of the system of interest, aligned in time with constant sampling between measurements. Moreover, the model structure requires the input and output to be of known and constant lengths, which can be different. To simulate a dynamical system using this approach, the input signal needs to be split into windows, for which outputs will be predicted, where each window of inputs corresponds to a single window of outputs. In practice, short output windows and long input windows are most efficient, which is intuitively clear since a longer input gives the model more information, and a shorter output makes error accumulation smaller. It is also possible to have different lengths in a sequential FSNN model, where the only requirement is that the output length of a given layer must match the input of the following layer. In the experiments, time windows were created using two parameters, \(m\) for the input length and \(n\) for the output length, and a time index \(t\), which was moved along the sequence of sampling times \(T\). During training, overlap is possible and sometimes useful, so that the same measurement can appear in different parts of the input or target sequence. \[\{(U_{(t-m):t},Y_{(t-n):t})\quad|\quad t\in T\}. \tag{4}\] Creating a training dataset as described in (4) allows formulating the training procedure as a regression task, computing the mean squared error between model predictions and measured targets and optimizing the parameters using stochastic gradient descent or one of its variants Ruder (2017).

Figure 1: Schematic representation of a single FSNN block structure.

\(U\) and \(Y\) in eq. (4) denote measurements of the input signal and the system output, respectively, which are indexed with the measurement time \(t\), with the measurements of the input signal \(u(t)\) serving as the input to the first layer. For multi-dimensional systems, multiple vectors for excitation or output measurements can be included in the training dataset, all aligned using the same time index.

## 3 Theoretical Properties

Feedforward neural networks are known to have the universal approximation property, which was originally proven in Cybenko (1989) and extended in particular in Hornik et al. (1989, 1990); Stinchcombe and White (1989). Informally, the universal approximation property guarantees that for any \(n\)-dimensional function \(f\) from a given space, there exists a feedforward neural network, \(G(x)\), of the form given in (5), such that \(|G(x)-f(x)|<\epsilon\) for arbitrarily small \(\epsilon>0\). In the case of simulation modelling of dynamical systems, the input to the network, \(x\in\mathbb{R}^{N}\), is an input signal with a finite number of time steps. To guarantee this property, the network is required to have an arbitrarily large number of neurons (also called units) in the hidden layer. This representation allows showing that both the time and frequency branches of the FSNN block have universal approximation properties. For the time branch (_i.e._ FSNN with \(\hat{h}_{f}\equiv 0\)), this is straightforward, as the general form given in equation (5) expresses the same computation as the time branch: \[G(x)=\sigma(xW^{\mathsf{T}}+b)W_{s}^{\mathsf{T}}+b_{s}. \tag{5}\] The learned parameters are in the hidden layer of the network, while \(W_{s}\) and \(b_{s}\) are additional readout parameters, which are also present in the original formulation Cybenko (1989).
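Returning to the dataset construction in eq. (4) above, a minimal sketch of the windowing is given below; the function name, the stride, and the starting index are our own illustrative choices, not details specified by the paper.

```python
import numpy as np

def make_windows(u: np.ndarray, y: np.ndarray, m: int, n: int, stride: int = 1):
    """Sketch of the windowed dataset of eq. (4): for each time index t,
    pair the last m input samples with the last n output samples."""
    return [(u[t - m:t], y[t - n:t])
            for t in range(max(m, n), len(u) + 1, stride)]

# Toy usage: 100 aligned input/output measurements, m = 16, n = 1.
u, y = np.random.randn(100), np.random.randn(100)
windows = make_windows(u, y, m=16, n=1)
print(len(windows), windows[0][0].shape, windows[0][1].shape)  # 85 (16,) (1,)
```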
### DFT Matrix

To show that not only the time branch of the FSNN is a universal approximator but also the frequency branch (_i.e._ FSNN with \(\hat{h}_{l}\equiv 0\)), the discrete Fourier transform in matrix representation needs to be used Winograd (1978); Serbes and Durak-Ata (2011). In the derivation of the FSNN block, the continuous-time Fourier transform was used, but in a practical implementation the number of time steps is always finite. This allows using the Fourier transform written in matrix notation, which is given in (6), where \(\omega=e^{-2\pi i/N}\): \[F=\frac{1}{\sqrt{N}}\begin{bmatrix}1&1&1&\cdots&1\\ 1&\omega&\omega^{2}&\cdots&\omega^{N-1}\\ 1&\omega^{2}&\omega^{4}&\cdots&\omega^{2(N-1)}\\ \vdots&\vdots&\vdots&\ddots&\vdots\\ 1&\omega^{N-1}&\omega^{2(N-1)}&\cdots&\omega^{(N-1)(N-1)}\end{bmatrix}. \tag{6}\] Multiplying a vector of length \(N\) by this matrix is equivalent to computing the \(N\)-point discrete Fourier transform. The inverse of \(F\) corresponds to the inverse discrete Fourier transform, and its matrix form is guaranteed to exist since \(F\) is unitary Rowland and Weisstein.

### Frequency Branch as Universal Approximator

The frequency branch (_i.e._ FSNN with \(\hat{h}_{l}\equiv 0\)) is a universal approximator, which can be shown based on Cybenko's proof, since it is possible to write it in a way equivalent to the \(G(x)\) network structure, utilizing the properties of the DFT matrix given above. The general form of the frequency branch in matrix notation is given by equation (7). Given that the parameters can take any value, it is possible to find a matrix \(W_{f}\) and a complementary bias vector \(b_{f}\) such that the general form (7) is equivalent to the universal feedforward network (5): \[G_{F}(x)=\sigma\left(F^{-1}(FxW^{\mathsf{T}}+b)\right)W_{s}^{\mathsf{T}}+b_{s}. \tag{7}\] Those values can be computed analytically and they are presented in equations (8)-(9). This property only holds for square weight matrices, since \(F\) is always square by definition. After plugging in those values, the frequency branch has all the properties of a feedforward network and, from this point of view, using the Fourier transform and its inverse is effectively a form of initialization. \[W_{f}=FW^{\mathsf{T}}F^{-1} \tag{8}\] \[b_{f}=bF^{-1} \tag{9}\]

## 4 Numerical Experiments

Three numerical experiments were run to verify the hypothesis behind the FSNN model. One consists of a toy problem with a static system, while the other two are benchmarks selected from the system identification literature. Three core models were developed, for which a large grid search over the parameters was conducted. Those models were: FSNN, which is the model described in the previous section, consisting of a number of FSNN blocks stacked together; FMLP, which stands for a feedforward network consisting of frequency blocks, a subset of the FSNN architecture where \(\hat{h}_{l}\equiv 0\); and a regular feedforward network (MLP) processing the signal using delayed input measurements, which is also a subset of the FSNN with \(\hat{h}_{f}\equiv 0\). Additionally, selected state-of-the-art models were re-implemented and run on the same benchmark problems, or, when available, the results were transferred from the original papers.

### Hyperparameter Search

For all benchmarks and the three core architectures (FSNN, FMLP and MLP) a random search over a defined set of hyperparameters was run, followed by a full grid search over the subset of hyperparameters that proved important for the model.
The searched parameters were the following: the number of input samples, the number of predicted samples, the number of hidden layers, and the number of units in all layers. Some hyperparameters were frozen and used for all models, such as the optimization algorithm _Adam_, Kingma and Ba (2017), and the GeLU activation function, _cf._ Hendrycks and Gimpel (2020). The most important parameter is the number of input samples, especially for the models utilizing frequency information, since for shorter windows the amount of information about signal frequencies is lower, which makes it less useful. A smaller number of output samples effectively means the model needs to predict fewer time steps. One-step-ahead prediction is usually more accurate, so the optimal value for all benchmarks was equal to one. However, longer output windows can also be used effectively, especially when fast predictions are required by the application. For the DynoNet model Forgione and Piga (2021), the reported values are a reproduction of the model using the original code on the different benchmarks, with the exception of the Wiener-Hammerstein system, where the value from the original paper is reported. For the State-Space Encoder, the reported results are also the values from the original work Beintema and Toth (2021), since reproducing the model was not possible due to very long training runtimes.

### Evaluation

All the above algorithms were evaluated using the root mean squared error (RMSE), which is a standard method of evaluation in regression problems. Physical units are added where possible. The RMSE was computed as \[RMSE(y,\hat{y})=\sqrt{MSE(y,\hat{y})}=\sqrt{\frac{1}{N}\sum_{i=1}^{N}(y_{i}-\hat{y}_{i})^{2}}. \tag{10}\] Additionally, a normalized root mean squared error was also evaluated, _i.e._, the ratio of the RMSE and the standard deviation of the predicted value (reported as a percentage): \[NRMSE(y,\hat{y})=\frac{RMSE(y,\hat{y})}{\sigma_{y}}. \tag{11}\]

### Static System with Frequency Input

A simple static affine system with an input signal consisting of pure-tone sine waves was created to test the FSNN structure under conditions most suitable for it. The input signal was generated as a sum of sines with frequencies drawn from the uniform distribution \(\alpha\sim\mathcal{U}(-5,5)\), as given by eq. (12): \[u(t)=\sum_{i=1}^{5}\sin(\alpha_{i}t). \tag{12}\] The output of the system was generated using two additional parameters, randomly drawn from the uniform distribution \(\mathcal{U}(-5,5)\), similar to the excitation frequencies. Those are denoted as \(\beta_{1},\beta_{2}\) in \[f(u)=\beta_{1}u(t)+\pi\beta_{2}. \tag{13}\] Results for this benchmark show the advantage of a two-branch structure over the single-branch MLP and FNN (i.e., FMLP) models, where the FSNN is able to achieve much better results, which are summarized in Table 1. During the experiments, the best-performing models were those with a low number of parameters, but larger models were also capable of achieving satisfactory results. The DynoNet model achieving the best results on this benchmark had only one static layer, without the learnable dynamical operator, which effectively made it a feedforward network as well. However, the architecture is different from the MLP, and the model could be easily obtained by performing a hyperparameter search on the DynoNet model. Moreover, the DynoNet model is constructed in a way that allows one to easily infer the architecture given knowledge about the system, since it can be decomposed into linear dynamical blocks and static nonlinearities Forgione and Piga (2021).
This allows for selecting a good candidate architecture; however, such knowledge is not guaranteed in real-world situations.

### Wiener-Hammerstein Benchmark

The Wiener-Hammerstein benchmark is a well-known problem for the identification of nonlinear dynamics. It consists of two linear blocks and a static nonlinearity, which were implemented using an electronic RLC circuit with a diode, _cf._ Schoukens et al. (2009). The measurements of this system are used to create training and test datasets for the model. The results achieved by the FSNN are not state-of-the-art; however, the structure performs significantly better than a plain MLP network on real-world data. For the FSNN, performance improved with longer input sequences, which allowed the model to access more frequency information. Evaluation results are reported in Table 2 and, for a selected model, in Figure 3. For the MLP model, good results can be achieved with a wide range of hyperparameters. The two reported results are at the two extremes of model size, with parameter counts differing by three orders of magnitude, yet they have very similar performance. Both the FSNN and MLP models are capable of achieving a lower simulation error than the DynoNet model while making substantially fewer assumptions about the nature of the modelled data. Additionally, it is worth noting that all listed models have a normalized simulation error lower than \(1\%\), which would be sufficient for most practical situations.

\begin{table} \begin{tabular}{c c c c} \hline \hline Model & \(\#P\) & RMSE & NRMSE \\ \hline FNN & 1610 & 10.3 \(\cdot 10^{-3}\) & 0.30\% \\ MLP & 2157 & 5.4 \(\cdot 10^{-3}\) & 0.16\% \\ FSNN & 887 & 1.4 \(\cdot 10^{-3}\) & 0.04\% \\ DynoNet & 49 & 0.3 \(\cdot 10^{-3}\) & 0.02\% \\ \hline \hline \end{tabular} \(\#P\): number of parameters \end{table}

Table 1: Evaluation results for selected models on the test dataset for the static affine system with frequency input.

### Silverbox Benchmark

The Silverbox benchmark is an electronic implementation of the Duffing oscillator, which can be modelled using a second-order LTI system with a polynomial nonlinearity in the feedback Wigren and Schoukens (2013). This type of system is challenging due to its nonlinearity. Experiments performed using this benchmark were conducted in the same way as all the others. The structure of the models applied to this benchmark cannot reflect the polynomial nonlinearity, which causes larger simulation errors when compared to the Wiener-Hammerstein benchmark. Moreover, models performing well on this benchmark tend to be larger than in the previous cases, which could also be attributed to this nonlinearity. Results are reported in Table 3.

## 5 Discussion

In conclusion, the presented architecture is capable of successfully modelling static, linear and nonlinear dynamics with almost no assumptions about the nature of the data it is trained on. From the theoretical point of view, it can also be interpreted as an initialization scheme for feedforward neural networks. The following conclusions can be drawn from our work:

* Frequency information is a useful feature for feedforward neural network modelling of nonlinear dynamical systems. It is, however, most successful when used together with a plain feedforward network with a delay line in a branched structure.
* There is no easily discernible dependency between the number of trained parameters in feedforward models and their performance on benchmarks consisting of engineering systems. In our experiments, smaller models tend to perform marginally better, which could be attributed to the greater number of updates they can make to their parameters in a limited training time.
* The FSNN structure is capable of achieving good results on a number of benchmarks, and it shows relatively low sensitivity to changes in hyperparameters, which makes it potentially a good candidate for practical applications in which running large hyperparameter searches for the model is often impossible.
* Adding orthogonal transforms to neural networks is a potentially interesting area of research, where those transforms could be used as initialization or regularization methods, using _a priori_ knowledge about the functional basis particularly useful for a given problem. For example, in applications involving audio processing, the frequency domain is potentially useful for trainable models.

\begin{table} \begin{tabular}{l c c c} \hline \hline Model & \(\#P\) & RMSE & NRMSE \\ \hline FNN & 7856 & 1.9 mV & 0.78\% \\ DynoNet & 63 & 1.2 mV & 0.50\% \\ MLP & 1193 & 1.1 mV & 0.46\% \\ Large MLP & 1379841 & 0.9 mV & 0.38\% \\ FSNN & 1591 & 0.5 mV & 0.22\% \\ State-Space Encoder & 21410 & 0.2 mV & 0.10\% \\ \hline \hline \end{tabular} \(\#P\): number of parameters \end{table}

Table 2: Evaluation results for selected models on the test dataset for the Wiener-Hammerstein benchmark, compared to selected results reported in the literature.

Figure 2: Simulation error computed for the best FNN model on the test dataset for the static affine system with frequency input.
2302.05832
Sparse Mutation Decompositions: Fine Tuning Deep Neural Networks with Subspace Evolution
Neuroevolution is a promising area of research that combines evolutionary algorithms with neural networks. A popular subclass of neuroevolutionary methods, called evolution strategies, relies on dense noise perturbations to mutate networks, which can be sample inefficient and challenging for large models with millions of parameters. We introduce an approach to alleviating this problem by decomposing dense mutations into low-dimensional subspaces. Restricting mutations in this way can significantly reduce variance as networks can handle stronger perturbations while maintaining performance, which enables a more controlled and targeted evolution of deep networks. This approach is uniquely effective for the task of fine tuning pre-trained models, which is an increasingly valuable area of research as networks continue to scale in size and open source models become more widely available. Furthermore, we show how this work naturally connects to ensemble learning where sparse mutations encourage diversity among children such that their combined predictions can reliably improve performance. We conduct the first large scale exploration of neuroevolutionary fine tuning and ensembling on the notoriously difficult ImageNet dataset, where we see small generalization improvements with only a single evolutionary generation using nearly a dozen different deep neural network architectures.
Tim Whitaker, Darrell Whitley
2023-02-12T01:27:26Z
http://arxiv.org/abs/2302.05832v1
# Sparse Mutation Decompositions: Fine Tuning Deep Neural Networks with Subspace Evolution

###### Abstract

Neuroevolution is a promising area of research that combines evolutionary algorithms with neural networks. A popular subclass of neuroevolutionary methods, called evolution strategies, relies on dense noise perturbations to mutate networks, which can be sample inefficient and challenging for large models with millions of parameters. We introduce an approach to alleviating this problem by decomposing dense mutations into low-dimensional subspaces. Restricting mutations in this way can significantly reduce variance as networks can handle stronger perturbations while maintaining performance, which enables a more controlled and targeted evolution of deep networks. This approach is uniquely effective for the task of fine tuning pre-trained models, which is an increasingly valuable area of research as networks continue to scale in size and open source models become more widely available. Furthermore, we show how this work naturally connects to ensemble learning where sparse mutations encourage diversity among children such that their combined predictions can reliably improve performance. We conduct the first large scale exploration of neuroevolutionary fine tuning and ensembling on the notoriously difficult ImageNet dataset, where we see small generalization improvements with only a single evolutionary generation using nearly a dozen different deep neural network architectures.

## I Introduction

Neuroevolutionary methods evolve populations of models using biologically inspired concepts like natural selection and mutation. A core component of many of these methods is the generation of new networks through random noise perturbations. However, as neural networks continue to grow in size, the effect of noise perturbations on network behavior becomes increasingly pronounced. There is often a critical region of mutation where too little of a change does not provide any meaningful exploration, and too significant of a change leads to complete performance collapse. The optimal mutation window can be vanishingly small with modern deep neural networks that contain millions or billions of parameters, and different network architectures, layer configurations, and training optimizers can lead to significant differences in parameter sensitivity. Investigating techniques to make mutations more effective and tractable for deep neural networks is important for opening new avenues of neuroevolutionary research. Due to the difficulty in mutating these large models, neuroevolutionary methods have primarily seen success with tasks that can be solved by relatively small networks [13]. This is compounded by the fact that these methods tend to be sample inefficient on supervised learning problems with well-defined gradients. Hybrid gradient/evolutionary methods have thus grown in popularity as gradient optimization can be used to rapidly train the model while evolutionary processes can be implemented to aid in exploration and fine-grained convergence [8]. This idea naturally extends to the task of fine tuning and optimizing large models that have been pre-trained with gradient descent. This is especially important as networks and labeled datasets continue to scale in size, which can make training from scratch prohibitively expensive. Pre-trained and open source models are becoming more widely available, which makes methods for improving them increasingly valuable.
In order to alleviate the challenges of mutating deep networks, we introduce Sparse Mutation Decompositions as a method for breaking up dense mutations into low-dimensional subspaces. While sparse mutations have long been used in other areas of evolutionary and genetic programming, they have rarely been explored with the popular evolution-strategy-based approaches [1, 14, 16, 31]. This is likely due to the perceived sample inefficiency associated with only updating small numbers of parameters at a time; however, we find that this can actually be desirable for the task of mutating pre-trained models. Reducing the dimensionality of noise perturbations widens the critical mutation window, which significantly reduces variance as networks can handle stronger perturbations before performance collapse. We also explore the notion of static and dynamic subspace evolution in which mutations are restricted to the same or different subspaces for each child. Our work naturally connects to ensemble learning where we explore how stronger but more sparse mutations can encourage diversity among children such that the combined predictions of a mutated population can reliably improve generalization performance. Along with several ablation studies and visualizations designed to gain insight into the interplay between mutation strength and sparsity, we introduce the first large scale exploration of neuroevolutionary fine tuning with sparse mutations on the notoriously difficult ImageNet dataset with nearly a dozen deep convolutional network architectures. Using ensembles of mutated populations results in monotonic generalization improvements of up to 0.5% using only a single evolutionary generation and with no additional training.

## II Background

### _Neuroevolution_

Neuroevolution has long been a promising area of research in the machine learning community as evolutionary algorithms offer an elegant approach to optimizing neural networks by utilizing natural and biological metaphors like natural selection, mutation, genetics and reproduction [33]. These methods typically employ generational loops where populations of members are spawned from some parent(s) using evolutionary operators and evaluated for their fitness on a validation set. The best models are then selected and used as parents for the next generation. Evolution Strategies (ES) are the most popular subclass of neuroevolutionary methods that use only selection and noise mutations to optimize weights [31]. These methods track the mean and standard deviation of parameters modeled by a Gaussian distribution, and offspring are generated by sampling from this distribution. The subset with the highest fitness scores is selected and the distribution's mean and standard deviation are updated according to the parameters of the elite subset. Recent work from OpenAI has shown that a wide scaling of simple evolution strategies can be incredibly powerful on difficult reinforcement learning tasks [28]. Covariance Matrix Adaptation Evolution Strategies (CMA-ES) improves on ES by tracking dependencies between member parameters in a population with a pairwise covariance matrix. The standard deviation can then control for the overall scale of the distribution, greatly improving exploration [14]. However, despite the exploratory power of this approach, constructing the pairwise covariance matrix is computationally expensive and impractical for large networks.
These approaches to optimization have several advantages over the gradient-based methods that are typically used in deep learning. They are inherently scalable as population members can be evaluated independently, and they excel at exploring landscapes where gradient information is noisy, flat, or unavailable [28]. This is common in reinforcement learning environments where reward information can be sparse or dynamic. Unfortunately, evolutionary methods tend to be very sample inefficient on supervised learning problems. Hybrid gradient/evolutionary methods are able to combine the efficiency of gradient methods with the exploratory power of evolutionary methods [7, 16]. Sparse Evolutionary Training is one related approach that uses alternating phases where evolutionary algorithms are used to determine which subnetwork to train during a given phase with gradient descent [26]. Neuroevolutionary methods are also known for exploring the topological space of networks [29]. In these methods, the network architecture itself is evolved as well as the weights. This can be an effective way to encourage diversity among population members as different network structures force unique representations through their structure. Sparse mutations hold an interesting connection to this line of work as there is growing interest in the nature of subnetwork behavior in trained models [7, 12]. Several researchers have noted the natural connection between evolutionary populations and ensemble learning [2, 24, 37]. Ensembles of diverse and accurate models consistently improve upon generalization performance as using multiple predictions can help to reduce the bias or variance associated with making predictions using a single model [6, 9].

### _Model Tuning_

Much of deep learning optimization research is focused on methods for training neural networks well from scratch. However, as neural networks continue to scale in size, the cost for training these large networks from scratch becomes expensive. As open source and pre-trained models are becoming more widely available, investigating how we can take trained models and improve them further becomes a valuable area of research. Transfer learning is one line of work that has popularized the importance of utilizing pre-trained models. This generalizes the notion of using a network trained on one task or dataset (usually a much larger and more general set like ImageNet) and then tuning it on another [4, 32]. Since the original model is fully trained, the model tends to converge much quicker on the new task than it would have if trained from scratch.

Fig. 1: Neuroevolution excels at fine tuning fully-trained networks that get stuck in flat loss basins. The left graph displays a typical SGD training trajectory where models tend to converge to the edges of flat optima [15]. The middle graph displays white dots as child networks that are generated from sparse mutations. The right graph shows the final ensemble consisting of top candidates selected from evaluation on a separate validation set. A key insight to the success of gradient-free methods for fine tuning is that the loss landscapes of test distributions rarely match the training distribution exactly. Ensembling over wide areas of good validation performance is key to improving generalization.
Several researchers have also explored freezing early weights in the network during the tuning phase, as early layers in deep networks tend to learn general patterns and features that don't necessarily need to be retrained [39]. This is naturally connected to our approach where large numbers of parameters are kept frozen in order to maintain behavior while sparse subnetworks are mutated. We generalize the term fine tuning to refer to the continued training of any optimized model. It is most popularly connected to the network pruning literature, where it refers to the final process of training a sparse network after parameters are removed from a dense network. Pruning and fine tuning is a well-known technique for compressing model sizes where networks can be made significantly smaller with little to no loss in generalization performance [2, 21, 23]. These small subnetworks hold a significant amount of classification power, which suggests that limiting mutations to sparse subnetworks can have a meaningful impact on network behavior. Recent optimization research has investigated the convergence behavior of deep networks in the final epochs of optimization. It is thought that neural networks that converge to wide and flat optima in the optimization landscape tend to generalize better than those that converge to sharp minima [36]. With gradient descent, deep networks generally converge to the edges of these wide and flat optima. However, the edges of these minima rarely correspond to the minima of the test distribution, which is more likely to exist somewhere in the middle of these optima. Since these optima are wide and flat, there is little gradient information that can be used to nudge the model towards those locations. Stochastic Weight Averaging (SWA) is a method that leverages this insight to fine tune models in the final epochs of training [15]. By using a repeating cyclic schedule, large learning rates are used to jump the model to new locations around the minima before converging with small learning rates. This is repeated several times where the weights of the model at each convergence location are saved and eventually averaged together. Model Soups operate by using a very similar principle to Stochastic Weight Averaging [34]. Instead of using a single model and a repeating cyclic learning rate schedule, Model Soups instead train a pseudo-ensemble where they fine-tune several clones of a trained model, each with a unique set of hyperparameters. The fine-tuned models tend to converge to the same loss basin but in unique locations, and the weights of each model are then averaged together as in Stochastic Weight Averaging. Both Stochastic Weight Averaging and Model Soups provide empirical evidence for the theoretical ideas behind our approach with neuroevolutionary tuning. Figure 1 illustrates this with an example of how evolution can be used effectively in situations where models trained with gradient descent get stuck in some flat minima. The generation of child networks using sparse mutations provides enough diversity to explore the local landscape, where ensembling over these wide areas of good validation performance is effective for reliably increasing generalization.

## III Sparse Mutation Decompositions

Neuroevolutionary methods typically generate child networks by applying dense noise perturbations to all the weights of a given parent network. Some methods have suggested rescaling mutations according to weight magnitudes or output gradients [22].
Sparse Mutation Decompositions instead restrict mutations to a small subset of parameters, which can allow for a more targeted and controlled evolution of deep neural networks. This can greatly widen the critical mutation range, in that we can apply stronger mutations to achieve more meaningful diversity before experiencing the performance collapse that dense mutations would cause. The methods introduced here are general and applicable to a wide variety of network architectures and optimization algorithms.

### _Noise Perturbations_

Mutations are usually implemented by perturbing the weights of a parent model \(\theta\in\mathbb{R}^{w}\), where \(w\) is the number of weights, with a random noise vector \(N\in\mathbb{R}^{w}\), sampled from a Gaussian distribution \(\mathcal{N}(\mu,\sigma^{2})\). The sparse mutation \(\gamma\) is implemented by taking the Hadamard product \(\circ\) of a binary bitmask \(M\in\mathbb{B}^{w}\) with the sampled noise vector \(N\). \[M\in\{0,1\}^{w}\] \[N\sim\mathcal{N}_{w}(\mu,\sigma^{2})\] \[\gamma=N\circ M\] When generating a population of children, there is a distinction to be made between whether mutations are restricted to the same subspace for each child or mutations are applied to random subspaces for each child. We call these two approaches Static and Dynamic Subspace evolution. Restricting all children to a single subspace may be more efficient for converging to an optimum, while having each child mutate a unique subspace may lead to better diversity and exploration. We explore the differences between these two methods in our mutation ablation experiments in Section IV. In the context of fine tuning pre-trained models, there is little difference between static and dynamic subspaces. More research could investigate the efficacy of these two approaches with models that are trained from scratch.

### _Hyperparameter Search_

A significant challenge in these approaches is to determine the appropriate values for both the noise distribution and the amount of sparsity in the bitmask. Unfortunately, this is challenging to determine beforehand due to significant differences between the size and complexity of datasets, model architectures, optimization hyperparameters, etc. One approach is to measure the differences between the outputs of a network before and after mutations are applied. For example, given a network \(F\) that is parameterized by \(\theta\) with \(O\) outputs and a network that is perturbed with a noise mutation \(\gamma\), the mean squared difference over a set of \(N\) total input samples \(X\) can be described as: \[MSE=\frac{1}{N}\sum_{n=1}^{N}\sum_{o=1}^{O}(F(X_{n};\theta)_{o}-F(X_{n};\theta+\gamma)_{o})^{2}\] However, since classification networks are often trained with cross entropy loss, where the outputs are treated as a probability distribution, mean squared error may not be the best fit for approximating the effect that a perturbation might have, due to potential variance. In this case, we suggest that the Kullback-Leibler divergence, or relative entropy, is a better fit for approximately measuring how different the two output distributions are. \[KL=\frac{1}{N}\sum_{n=1}^{N}\sum_{o=1}^{O}F(X_{n};\theta)_{o}\log\left(\frac{F(X_{n};\theta)_{o}}{F(X_{n};\theta+\gamma)_{o}}\right)\] This approach then allows for a general measure of comparison that works regardless of network architecture or noise distribution.
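As a concrete illustration, a minimal PyTorch sketch of the sparse mutation \(\gamma=N\circ M\) and the KL-based divergence measure is given below. The function names, the zero-mean noise, the softmax normalization of the outputs, and the flattened-parameter view are our own assumptions, not details fixed by the paper.

```python
import torch
import torch.nn.functional as F_nn

def sparse_mutation(theta: torch.Tensor, sigma: float, density: float) -> torch.Tensor:
    """gamma = N o M, with N ~ N(0, sigma^2) and M ~ Bernoulli(density),
    where `density` is the fraction of parameters that get mutated."""
    mask = torch.bernoulli(torch.full_like(theta, density))
    noise = sigma * torch.randn_like(theta)
    return noise * mask

def avg_kl_divergence(parent_out: torch.Tensor, mutated_out: torch.Tensor) -> torch.Tensor:
    """Average KL divergence between parent and mutated output distributions;
    softmax is applied so the raw outputs form probability distributions."""
    p = F_nn.softmax(parent_out, dim=-1)
    q = F_nn.softmax(mutated_out, dim=-1)
    return (p * (p / q).log()).sum(dim=-1).mean()

# Toy usage on random logits for N = 32 samples with O = 10 outputs.
parent, child = torch.randn(32, 10), torch.randn(32, 10)
print(avg_kl_divergence(parent, child))
```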
A simple search algorithm can then try out several values for both mutation strength and sparsity in order to target a divergence score that prioritizes either exploration or accuracy. One nice property revealed in our experiments is that there is a roughly linear relationship between KL divergence and accuracy degradation in trained models. This insight can be used to tweak mutation parameters according to problem complexity, population size, or parameter sensitivity, where a larger KL target can allow for stronger mutations and better exploration at the cost of potentially less accurate candidate models.

### _Anti-Random and Mirrored Noise_

Neuroevolutionary methods typically use large populations of small networks that can be evaluated very quickly. Deep neural networks can be very costly in both runtime and memory requirements, which invariably means that our populations will be much smaller. For this reason, the random perturbations can be a large source of variance since we have a much smaller pool of candidates to choose from. Anti-random and mirrored noise can be effective tools for reducing this variance by automatically generating sets of opposed child networks. In the case of dynamic subspace evolution, anti-random sampling can be used to maximize the subspace distance between population members. For a given network with \(w\) weights and a random bit mask \(M=\{0,1\}^{w}\), the most distant vector is one in which the polarities of all of the bits are flipped, \(M^{\prime}=1-M\) [25, 35]. The two masks can then be applied to two different networks, resulting in an even exploration over all the parameters of the network. This can be extended to multiple members in a population by instead partitioning the parameter space, where a group of \(N\) bitmasks is created such that the children form a disjoint union over all parameters of the parent network. Each child then mutates a unique set of parameters that are not shared by other networks. Mirrored sampling is another effective technique for reducing variance that is common in the evolutionary literature [5, 28]. In this case, the sign of the noise vector is flipped, such that for a given mutation vector \(\gamma\), the mirrored noise is \(-\gamma\). For example, assume that a bit mask \(M\) is randomly generated for a network containing \(w\) weights by sampling a binary distribution with some probability \(\rho\) that a parameter will be masked, and a noise vector \(N\) is randomly generated by sampling from a Gaussian distribution. The four resulting child networks \(C\), parameterized by \(\theta\), can then be described as: \[M\sim\text{Bernoulli}_{w}(\rho)\] \[N\sim\mathcal{N}_{w}(\mu,\sigma^{2})\] \[C_{1}=\theta+(N\circ M)\] \[C_{2}=\theta+(N\circ(1-M))\] \[C_{3}=\theta-(N\circ M)\] \[C_{4}=\theta-(N\circ(1-M))\] With these two techniques, a set of child networks can be automatically generated for every noise perturbation, resulting in a population that is more evenly distributed in space.

### _Predictions_

Once we have a collection of mutated child networks, we evaluate each child on a validation set, where the accuracy is recorded as a measure of its fitness. Any model selection is done on this validation set, which is separate from the holdout test set we evaluate our final system on. We then select the child networks with the best fitness to be used for making predictions. With a population of top candidates selected, we then need to combine them in order to make our final predictions.
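Before describing how the candidates are combined, here is a minimal sketch of the mirrored, anti-random child generation \(C_{1},\ldots,C_{4}\) defined above. Reading \(\rho\) as the probability that a parameter is mutated (mask entry equal to 1) is our interpretation of the definition, and the function name is ours.

```python
import torch

def mirrored_antirandom_children(theta: torch.Tensor, rho: float, sigma: float):
    """Generate the four children C1..C4: one mask M and its anti-random
    complement 1 - M, each combined with mirrored noise +N and -N."""
    mask = torch.bernoulli(torch.full_like(theta, rho))  # M ~ Bernoulli_w(rho)
    noise = sigma * torch.randn_like(theta)              # N ~ N_w(0, sigma^2)
    return [theta + noise * mask,
            theta + noise * (1 - mask),
            theta - noise * mask,
            theta - noise * (1 - mask)]

# Toy usage on a flat parameter vector.
theta = torch.zeros(10)
children = mirrored_antirandom_children(theta, rho=0.1, sigma=0.05)
print(len(children), children[0])
```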
The traditional evolutionary approach is to average the weights of the top candidates together into a single model. There is also a natural connection to ensemble learning where the population can be evaluated independently and the predictions of each model can be combined. We explore both approaches in our ablation experiment in Section IV, where we find that averaged models maintain their performance for much stronger levels of mutation, while the generalization accuracy of ensembles tends to exceed that of averaged models for tuned mutation hyperparameters. There are many approaches for combining model predictions in ensembles, including majority vote, weighted model averaging, and bucket-of-models selection [10, 11, 20]. For the purpose of this work, we use simple unweighted prediction averaging, which is standard practice for modern ensemble methods. Mutations can affect the magnitude of raw outputs, so in order to achieve better normalization, we average the softmax of each ensemble member's outputs to ensure that all ensemble members produce outputs of the same scale. \[y_{e}=\operatorname*{argmax}\left(\frac{1}{S}\sum_{i=1}^{S}\sigma(f_{i}(x))\right)\] where \(y_{e}\) is the ensemble prediction, \(S\) is the number of members in the ensemble (the ensemble size), \(\sigma\) is the softmax function and \(f_{i}(x)\) is the output of the individual ensemble member \(i\).

## IV Experiments

We first attempt to gain insight into how mutations influence network predictions by visualizing the decision boundaries of a trained network before and after mutating it. We then conduct a larger-scale ablation experiment with a wide residual network where we generate populations of mutated networks with varying levels of mutation strengths and sparsities. We explore the differences between static and dynamic subspace evolution as well as the differences between treating the top candidates in a population as an ensemble or averaging their weights together into one model. We end with a large-scale experiment on the difficult ImageNet dataset with a dozen different model architectures.

### _Decision Boundaries_

We begin by exploring the interplay between mutation sparsity and mutation strength by visualizing how changing these values affects network predictions. We use a simple three-layer fully connected multilayer perceptron that contains 64 neurons in each layer. We train this model on a binary interleaved spiral dataset that contains 2500 sample (x, y) points. We train for 10 epochs using the Adam optimizer [17] with a learning rate of \(0.001\) and use this trained model as the starting point for all mutations. We then perturb the model with mutations sampled from a Gaussian distribution \(N\sim\mathcal{N}(0,\sigma^{2})\), where \(\sigma\) corresponds to the mutation strength, and with a mask sampled from a Bernoulli distribution \(M\sim\text{Bernoulli}(\rho)\), where \(\rho\) governs the probability of mutation sparsity. We ablate these hyperparameters over mutation strengths \(\sigma\in[0.05,0.25]\) and mutation sparsities \(\rho\in[0.0,0.9]\). Using the perturbed models, we make predictions on a holdout test set containing 250 samples. In Figure 2, we display a grid of the decision boundaries of these perturbed models on the test set as we ablate between mutation strength and sparsity. When mutations are dense, we see a very quick collapse of performance for small perturbations. The window for optimal dense mutation is quite small even for this toy network and simple dataset.
When mutations become more sparse, the model is able to maintain good classification boundaries while displaying small variations of prediction diversity. This kind of behavior is desirable for neuroevolutionary populations, as ensemble learning research has shown that populations perform better with large numbers of both accurate and diverse members [3].

Fig. 2: The images above detail decision boundaries for a trained three-layer multilayer perceptron after being perturbed with various mutations. With dense perturbations, there is a comparatively small window between a functional decision boundary and complete performance collapse. As mutations become more sparse, the model is better able to retain its behavior while the strength of parameter mutation increases.

### _Mutation Ablations_

Next, we aim to explore whether the intuitions about how mutations affect predictions translate to a much larger convolutional network on benchmark computer vision datasets. For this experiment, we conduct ablations on the CIFAR-10 and CIFAR-100 datasets [18]. These are popular benchmark datasets and their use is widespread in computer vision research. They each contain 50,000 training and 10,000 test samples of colored 32x32 pixel images. CIFAR-10 contains samples belonging to one of 10 classes while CIFAR-100 contains samples belonging to one of 100 classes. We use a WideResNet-28x10 model for our parent network, which is a highly accurate network architecture that contains \(\sim 36M\) parameters. This network is a variant of the popular ResNet that decreases the depth and increases the width of each convolutional layer [38]. We implement a standard training algorithm for this type of model where we train for 100 epochs using Stochastic Gradient Descent with Nesterov momentum [30]. A stepwise learning rate decay is used where an initial value of 0.1 decays to 0.01 after 50% of training and decays again to 0.001 for the final 10% of training. We use standard data augmentation schemes for CIFAR that include a random crop and a random horizontal flip along with mean standard normalization. We split the test set in half in order to construct a validation set used for model selection. The parent networks achieve an accuracy of approximately 96% on CIFAR-10 and approximately 80% on CIFAR-100. Using these saved models as parent networks, we then perturb the models with varying amounts of mutation strengths, \(\sigma\in(0.0,0.05]\), and sparsities, \(\rho\in[0.01,0.99]\), and we evaluate their performance on the test sets. Figure 3 displays the results of these experiments on CIFAR-10, where we start by measuring the effect that mutations have on KL divergence. Predictably, we see that KL divergence quickly increases as the density and strength of mutations increase. The rapid increase in KL divergence for dense perturbations illustrates how quickly performance collapses with very small changes in mutation strength. We then implement a single evolutionary generation where a population of 16 models is created by perturbing the parent model according to a given mutation strength and sparsity. The top 4 models with the best accuracy on the validation set are then selected and evaluated. We then report accuracy on the test set where the weights of the top 4 models are averaged together. We also evaluate these four models as if they were part of an ensemble, where each model is evaluated independently and their predictions are combined.
The averaged-weights model is much more consistent than the ensemble, which displays large amounts of variance for strong mutations. While the differences are small in this case, ensembles tend to slightly outperform averaged-weight models for lower levels of mutation strength (where individual model accuracy is higher), while averaged models outperform ensembles for higher levels of mutation (where individual model performance is worse). We repeat the above experiments with both static and dynamic subspace evolution. For static subspace evolution, we mutate the same subnetwork for each generated member in the population. For dynamic subspace evolution, each child is mutated with a random subnetwork mask. On this task, we see little difference between static and dynamic subspaces, suggesting that the specific subnetwork that we mutate does not matter in the context of both averaged and ensemble evaluations.

### _Large Scale Evaluation_

We then explore our approach on the difficult benchmark computer vision dataset, ImageNet [19]. ImageNet is a large-scale collection of images that have been hand-labelled for use in machine learning tasks and is organized according to the WordNet hierarchy. Over 14 million images and 20,000 labels have been collected in total. We use the 2012 ImageNet collection, which consists of a training set of 1.2 million images and a validation set of 50,000 images, each belonging to one of 1000 categories. Images have varying sizes and aspect ratios and consist of both colored and grayscale photos. All images are normalized with mean and standard deviation scaling, and we implement standard data augmentation which consists of resizing to 256x256 pixels and center cropping to 224x224 pixels. We evaluate our approach with ten popular deep neural network architectures of varying sizes and generalization capacities in order to demonstrate the generality and power of our approach in many different contexts. All networks are pretrained and available from the torchvision repository [27]. We break the ILSVRC 2012 set of 50,000 images into a 50/50 split between validation and test sets that each contain 25,000 samples. All fitness evaluations and model selections use the validation set and all reported results are evaluated on the holdout test set. We report accuracy, negative log likelihood, and expected calibration error for the parent network and our evolutionary ensemble. We run each model twice and report the best results. We conduct a hyperparameter grid search in order to find appropriate hyperparameter values for mutation strength and sparsity for each model. Using a separate holdout dataset of 1000 samples we measure the average KL divergence and accuracy while we ablate sparsity \(\rho\in[0.5,0.99]\) and mutation strength \(\sigma\in[0.001,0.015]\). We then choose a mutation strength and sparsity value that maximizes accuracy while targeting a KL divergence of \(\sim 0.05\). This value was found to be an empirically safe option for most models, balancing the accuracy of candidate models with exploration to reliably improve generalization performance when candidates are combined. Using the sparse mutation hyperparameters found from the short grid search, a population of 16 models is created with mirrored sampling. Each is evaluated on a validation set and the top 8 candidates with the best accuracy are selected.
We then evaluate the top 8 candidates independently, combine their predicted probabilities as an ensemble, and report the accuracy, negative log likelihood, and expected calibration error. Table I contains both the parent results and the ensemble results. From only one generation of evolutionary tuning, we see consistent improvement in all metrics for each model. While the differences in generalization performance are very small in many cases, it is notable that convergence is extremely stable and appears to be monotonic with optimal mutation hyperparameters. We do not observe the noisy oscillations commonly seen when fine-tuning with stochastic gradient descent. It is also unclear how much additional performance can be extracted from fully trained networks: there is no accepted method for determining the theoretical ceiling of generalization capacity for complex network architectures. Even small improvements can matter in practice; a 0.5% gain in accuracy on a dataset of 50,000 images corresponds to 250 additional correct predictions, which can be significant in some contexts. Notably, our results incorporate only a single generation of evolutionary fine-tuning, and future research with more iterations and more thorough hyperparameter searches will likely improve performance further. ## V Conclusion We introduce Sparse Mutation Decompositions as an approach to alleviating the challenges of mutating deep neural networks by breaking up dense mutations into low-dimensional subspaces. This widens the critical mutation window, which can significantly reduce the variance as children in evolutionary populations can handle stronger perturbations before performance collapse. We explore how these sparse mutations can be implemented with a standard neuroevolutionary method in order to fine-tune and further optimize pre-trained networks, and we show how ensembles of the top candidates in a population can aid in generalization. We conduct several ablation studies in order to explore the interplay between sparsity and mutation strength on network behavior. We conduct a decision boundary experiment, visualizing the predictions of a model on a binary classification dataset. After being perturbed with dense mutations, the model sees a rapid decline in performance, whereas sparse mutations help maintain accurate predictions while encouraging representational diversity.
\begin{table} \begin{tabular}{l r r r r r r r r r r r} \hline \hline Model & Acc \(\uparrow\) & NLL \(\downarrow\) & ECE \(\downarrow\) & eAcc \(\uparrow\) & eNLL \(\downarrow\) & eECE \(\downarrow\) & \(\Delta\)Acc \(\uparrow\) & Parameters & \(\sigma\) & \(\rho\) & \(\overline{KL}\) \\ \hline AlexNet & 56.46 & 1.904 & 0.021 & 56.52 & 1.903 & 0.019 & 0.06 & 61.1M & 0.006 & 0.80 & 0.038 \\ DenseNet-121 & 74.43 & 1.014 & 0.024 & 74.68 & 1.004 & 0.021 & 0.25 & 8.0M & 0.007 & 0.85 & 0.077 \\ Inception-V3 & 69.57 & 1.819 & 0.184 & 69.98 & 1.681 & 0.169 & 0.41 & 27.2M & 0.008 & 0.90 & 0.037 \\ MobileNet-V2 & 72.12 & 1.136 & 0.072 & 72.14 & 1.132 & 0.024 & 0.02 & 3.5M & 0.007 & 0.90 & 0.085 \\ ResNet-18 & 69.76 & 1.247 & 0.026 & 69.93 & 1.238 & 0.022 & 0.17 & 11.7M & 0.010 & 0.90 & 0.060 \\ ResNet-50 & 77.64 & 0.945 & 0.065 & 77.72 & 0.929 & 0.059 & 0.08 & 25.0M & 0.005 & 0.80 & 0.051 \\ ShuffleNet-V2 & 69.51 & 1.354 & 0.072 & 69.57 & 1.351 & 0.071 & 0.06 & 2.3M & 0.012 & 0.95 & 0.052 \\ SqueezeNet & 58.10 & 1.852 & 0.017 & 58.14 & 1.852 & 0.017 & 0.04 & 1.2M & 0.007 & 0.75 & 0.072 \\ VGG-16 & 71.62 & 1.140 & 0.027 & 71.64 & 1.138 & 0.028 & 0.02 & 138.4M & 0.010 & 0.95 & 0.019 \\ WideResNet-50 & 78.47 & 0.879 & 0.054 & 78.60 & 0.852 & 0.038 & 0.13 & 68.9M & 0.011 & 0.90 & 0.178 \\ \hline \hline \end{tabular} \end{table} TABLE I: Results for sparse mutation ensembles on ImageNet with a wide variety of models. Mutation sparsity and strength are determined from a small hyperparameter grid search. Accuracy (Acc), negative log likelihood (NLL), and expected calibration error (ECE) are reported for the parent and the ensemble. Metrics prepended with \(e\) refer to the ensemble results. \(\Delta\)Acc is the change in accuracy between the parent and the ensemble, \(\sigma\) is the mutation strength, \(\rho\) is the mutation sparsity, and \(\overline{KL}\) is the average output divergence between the mutated models and the parent models. 16 models are generated and the 8 most accurate candidates on a validation set are used together as an ensemble. We see a small but reliable improvement in generalization performance in every single case.

Fig. 3: Results of mutation ablations on a trained WideResNet-28x10 model on CIFAR-10. Lines of different colors correspond to the percentage of sparsity for the mutations. Dashed lines correspond to static subspace evolution while solid lines correspond to dynamic subspace evolution. The leftmost graph reports the average KL divergence. The middle graph reports the accuracy of an averaged model of the four top candidates after one generation of evolution. The rightmost graph reports the accuracy when those four models are treated as an ensemble.

We then explore how these insights translate to a wide residual network on CIFAR, examining parameter sensitivity, static/dynamic subspace evolution, and the differences between averaged-model performance and population-ensemble performance as a result of different levels of mutation strength and sparsity. Our findings reaffirm the idea that sparse mutations produce accurate models more reliably than dense mutations. We then introduce the first large-scale exploration of evolutionary fine-tuning with sparse mutations on ImageNet with a wide variety of deep neural network architectures. We use a relatively small population of 16 models, from which the top 8 most accurate on a validation set are selected and evaluated together as an ensemble.
Our approach reliably and consistently improves performance on every model with only a single evolutionary generation.
2306.16361
Beyond NTK with Vanilla Gradient Descent: A Mean-Field Analysis of Neural Networks with Polynomial Width, Samples, and Time
Despite recent theoretical progress on the non-convex optimization of two-layer neural networks, it is still an open question whether gradient descent on neural networks without unnatural modifications can achieve better sample complexity than kernel methods. This paper provides a clean mean-field analysis of projected gradient flow on polynomial-width two-layer neural networks. Different from prior works, our analysis does not require unnatural modifications of the optimization algorithm. We prove that with sample size $n = O(d^{3.1})$ where $d$ is the dimension of the inputs, the network trained with projected gradient flow converges in $\text{poly}(d)$ time to a non-trivial error that is not achievable by kernel methods using $n \ll d^4$ samples, hence demonstrating a clear separation between unmodified gradient descent and NTK. As a corollary, we show that projected gradient descent with a positive learning rate and a polynomial number of iterations converges to low error with the same sample complexity.
Arvind Mahankali, Jeff Z. Haochen, Kefan Dong, Margalit Glasgow, Tengyu Ma
2023-06-28T16:45:38Z
http://arxiv.org/abs/2306.16361v2
Beyond NTK with Vanilla Gradient Descent: A Mean-Field Analysis of Neural Networks with Polynomial Width, Samples, and Time ###### Abstract Despite recent theoretical progress on the non-convex optimization of two-layer neural networks, it is still an open question whether gradient descent on neural networks without unnatural modifications can achieve better sample complexity than kernel methods. This paper provides a clean mean-field analysis of projected gradient flow on polynomial-width two-layer neural networks. Different from prior works, our analysis does not require unnatural modifications of the optimization algorithm. We prove that with sample size \(n=O(d^{3.1})\) where \(d\) is the dimension of the inputs, the network converges in polynomially many iterations to a non-trivial error that is not achievable by kernel methods using \(n\ll d^{4}\) samples, hence demonstrating a clear separation between unmodified gradient descent and NTK. ## 1 Introduction Training neural networks requires optimizing non-convex losses, which is often practically feasible but still not theoretically understood. The lack of understanding of non-convex optimization also limits the design of new principled optimizers for training neural networks that use theoretical insights. Early analysis on optimizing neural networks with linear or quadratic activations (Du and Lee, 2018; Gunasekar et al., 2018; Li et al., 2018; Gunasekar et al., 2018; Soltanolkotabi et al., 2018; Hardt and Ma, 2016; Kawaguchi, 2016; Hardt et al., 2018; Oymak and Soltanolkotabi, 2020) relies on linear algebraic tools that do not extend to nonlinear and non-quadratic activations. The neural tangent kernel (NTK) approach analyzes nonconvex optimization under certain hyperparameter settings, e.g., when the initialization scale is large and the learning rate is small (see, e.g., Du et al. (2018); Jacot et al. (2018); Li and Liang (2018); Arora et al. (2019); Daniely et al. (2016)). However, subsequent research shows that neural networks trained with practical hyperparameter settings typically outperform their corresponding NTK kernels (Arora et al., 2019). Furthermore, the initialization and learning rate under the NTK regime does not yield optimal generalization guarantees (Wei et al., 2019, Chizat and Bach, 2018; Woodworth et al., 2020; Ghorbani et al., 2020). Many recent works study modified versions of stochastic gradient descent (SGD) and prove sample complexity and runtime guarantees beyond NTK (Damian et al., 2022; Abbe et al., 2022; Mousavi-Hosseini et al., 2022; Li et al., 2020; Abbe et al., 2021; Allen-Zhu et al., 2019; Allen-Zhu and Li, 2019; Xu and Du, 2023; Wu et al., 2023; Daniely and Malach, 2020; Allen-Zhu and Li, 2020; Suzuki and Akiyama, Telgarsky, 2022; Chen et al., 2020). These modified algorithms often contain multiple stages that optimize different blocks of parameters and/or use different learning rates or regularization strengths. For example, the work of Li et al. (2020) uses a non-standard parameterization for two-layer neural networks and runs a two-stage algorithm with sharply changing gradient clipping strength; Damian et al. (2022) use one step of gradient descent with a large learning rate and then optimize only the last layer of the neural net. However, oftentimes vanilla (stochastic) gradient descent with a constant learning rate empirically converges to a minimum with good generalization error. 
Thus, the modifications are arguably artifacts tailored to the analysis, and to some extent, over-using the modification may obscure the true power of gradient descent. Another technique to analyze optimization dynamics uses the mean-field approach (Chizat and Bach, 2018; Mei et al., 2018; Mei et al., 2019; Sirignano and Spiliopoulos, 2018; Dou and Liang, 2021; Abbe et al., 2022; Javanmard et al., 2020; Sirignano and Spiliopoulos, 2020; Wei et al., 2019), which views the collection of weight vectors as a (discrete) distribution over \(\mathbb{R}^{d}\) (where \(d\) is the input dimension) and approximates its evolution by an infinite-width neural network, where the weight vector distribution is continuous and evolves according to a partial differential equation. However, these works do not provide an end-to-end polynomial runtime bound on the convergence to a global minimum in the concrete setting of two-layer neural networks (without modifying gradient descent). For example, the works of Chizat and Bach (2018) and Mei et al. (2018) do not provide a concrete bound on the number of iterations needed for convergence. Mei et al. (2019) provide a coupling between the trajectories of finite and infinite-width networks with exponential growth of coupling error but do not apply it to a concrete setting to obtain a global convergence result with sample complexity guarantees. (See more discussion below and in Section 3.) Wei et al. (2019) achieve a polynomial number of iterations but require exponential width and an artificially added noise. In other words, these works, without modifying gradient descent, cannot prove convergence to a global minimizer with polynomial width and iterations. In this paper, we provide a mean-field analysis of projected gradient flow on two-layer neural networks with _polynomial width_ and quartic activations. Under a simple data distribution, we demonstrate that the network converges to a non-trivial error in _polynomial time_ with _polynomial samples_. Notably, our results show a sample complexity that is superior to the NTK approach. Concretely, the neural network is assumed to have unit-norm weight vectors and no bias, and the second-layer weights are all \(\frac{1}{m}\) where \(m\) is the width of the neural network. The data distribution is uniform over the sphere. The target function is of the form \(y(x)=h(q_{*}^{\top}x)\) where \(h\) is an _unknown_ quartic link function and \(q_{*}\) is an unknown unit vector. Our main result (Theorem 3.4) states that with \(n=O(d^{3.1})\) samples, a polynomial-width neural network with random initialization converges in polynomial time to a non-trivial error, which is statistically not achievable by any kernel method with an inner product kernel using \(n\ll d^{4}\) samples. To the best of our knowledge, our result is the first to demonstrate the advantage of _unmodified_ gradient descent on neural networks over kernel methods. The rank-one structure in the target function, also known as the single-index model, has been well-studied in the context of neural networks as a simplified case to demonstrate that neural networks can learn a latent feature direction \(q_{*}\) better than kernel methods (Mousavi-Hosseini et al., 2022; Abbe et al., 2022; Damian et al., 2022; Bietti et al., 2022). Many works on single-index models study (stochastic) gradient descent in the setting where only a single vector (or "neuron") in \(\mathbb{R}^{d}\) is trained. 
This includes earlier works where the link function is monotonic, or the convergence is analyzed with quasi-convexity (Kakade et al., 2011; Hazan et al., 2015; Mei et al., 2018; Soltanolkotabi, 2017), along with more recent work (Tan and Vershynin, 2019; Vardi et al., 2021; Arous et al., 2021) for more general link functions. Since these works only train a single neuron, they have limited expressivity in comparison to a neural network, and can only achieve zero loss when the link function equals the activation. We stress that in our setting, the link function \(h\) is unknown, not necessarily monotonic, and does not need to be equal to the activation function. We show that in this setting, the first-layer weights will converge to a _distribution_ of neurons that are correlated with but not exactly equal to \(q_{\star}\), so that even without bias terms, their mixture can represent the link function \(h\). Our analysis demonstrates that gradient descent with a constant, infinitesimally small learning rate can _simultaneously_ learn the feature \(q_{\star}\) and the link function \(h\), which is a key challenge that is side-stepped in previous works on neural networks that use two-stage algorithms (Mousavi-Hosseini et al., 2022; Abbe et al., 2021, 2022; Damian et al., 2022; Abbe et al., 2023; Barak et al., 2022). The main novelty of our population dynamics analysis is designing a potential function that shows that the iterate stays away from the saddle points. Our sample complexity results leverage a coupling between the dynamics on the empirical loss for a finite-width neural network and the dynamics on the population loss for an infinite-width neural network. The main challenge stems from the fact that some exponential coupling error growth is inevitable over a certain period of time when the dynamics resemble a power method update. Heavily inspired by Li et al. (2020), we address this challenge by using a direct and sharp comparison between the growth of the coupling error and the growth of the signal. In contrast, a simple exponential growth bound on the coupling error similar to the bound of Mei and Montanari (2019) would result in an additional \(d^{O(1)}\) factor in the sample complexity, which is not sufficient to outperform NTK. ## 2 Preliminaries and Notations We use \(O(\cdot),\lesssim,\gtrsim\) to hide only absolute constants. Formally, every occurrence of \(O(x)\) in this paper can be simultaneously replaced by a function \(f(x)\) where \(|f(x)|\leq C|x|,\forall x\in\mathbb{R}\) for some universal constant \(C>0\) (each occurrence can have a different universal constant \(C\) and \(f\)). We use \(a\lesssim b\) as a shorthand for \(a\leq O(b)\). Similarly, \(\Omega(x)\) is a placeholder for some \(g(x)\) where \(|g(x)|\geq|x|/C,\forall x\in\mathbb{R}\) for some universal constant \(C>0\). We use \(a\gtrsim b\) as a shorthand for \(a\geq\Omega(b)\) and \(a\asymp b\) as a shorthand to indicate that \(a\gtrsim b\) and \(a\lesssim b\) simultaneously hold. **Legendre Polynomials.** We summarize the necessary facts about Legendre polynomials below, and present related background more comprehensively in Appendix A. Let \(P_{k,d}:[-1,1]\to\mathbb{R}\) be the degree-\(k\) _unnormalized_ Legendre polynomial (Atkinson and Han, 2012), and \(\overline{P}_{k,d}(t)=\sqrt{N_{k,d}}P_{k,d}(t)\) be the _normalized_ Legendre polynomial, where \(N_{k,d}\triangleq\binom{d+k-1}{d-1}-\binom{d+k-3}{d-1}\) is the normalizing factor.
The polynomials \(\overline{P}_{k,d}(t)\) form an orthonormal basis for the set of square-integrable functions over \([-1,1]\) with respect to the measure \(\mu_{d}(t)\triangleq(1-t^{2})^{\frac{d-3}{2}}\frac{\Gamma(d/2)}{\Gamma((d-1)/2)}\frac{1}{\sqrt{\pi}}\), i.e., the density of \(u_{1}\) when \(u=(u_{1},\cdots,u_{d})\) is uniformly drawn from the sphere \(\mathbb{S}^{d-1}\). Hence, for every function \(h:[-1,1]\to\mathbb{R}\) such that \(\mathbb{E}_{t\sim\mu_{d}}[h(t)^{2}]<\infty\), we can define \(\hat{h}_{k,d}\triangleq\mathbb{E}_{t\sim\mu_{d}}[h(t)\overline{P}_{k,d}(t)]\) and consequently, we have \(h(t)=\sum_{k=0}^{\infty}\hat{h}_{k,d}\overline{P}_{k,d}(t)\). ## 3 Main Results We will formally define the data distribution, neural networks, projected gradient flow, and assumptions on the problem-dependent quantities and then state our main theorems. _Target function._ The ground-truth function \(y(x):\mathbb{R}^{d}\to\mathbb{R}\) that we aim to learn has the form \(y(x)=h(q_{\star}^{\top}x)\), where \(h:\mathbb{R}\to\mathbb{R}\) is an _unknown_ one-dimensional _even_ quartic polynomial (which is called a link function), and \(q_{\star}\) is an _unknown_ unit vector in \(\mathbb{R}^{d}\). Note that \(h(s)\) has the Legendre expansion \(h(s)=\hat{h}_{0,d}+\hat{h}_{2,d}\overline{P}_{2,d}(s)+\hat{h}_{4,d}\overline{P}_{4,d}(s)\). _Two-layer neural networks._ We consider a two-layer neural network where the first-layer weights are all unit vectors and the second-layer weights are fixed and all the same. Let \(\sigma(\cdot)\) be the activation function, which can be different from \(h(\cdot)\). Using the mean-field formulation, we describe the neural network using the distribution of first-layer weight vectors, denoted by \(\rho\): \[f_{\rho}(x)\triangleq\mathbb{E}_{u\sim\rho}[\sigma(u^{\top}x)]\,. \tag{3.1}\] For example, when \(\rho=\text{unif}(\{u_{1},\ldots,u_{m}\})\) is a discrete uniform distribution supported on \(m\) vectors \(\{u_{1},\ldots,u_{m}\}\), then \(f_{\rho}(x)=\frac{1}{m}\sum_{i=1}^{m}\sigma(u_{i}^{\top}x)\), i.e. \(f_{\rho}\) corresponds to a finite-width neural network whose first-layer weights are \(u_{1},\ldots,u_{m}\). For a continuous distribution \(\rho\), the function \(f_{\rho}(\cdot)\) can be viewed as an infinite-width neural network where the weight vectors are distributed according to \(\rho\) (and can be viewed as taking the limit as \(m\to\infty\) of the finite-width neural network). We assume that the weight vectors have unit norms, i.e. the support of \(\rho\) is contained in \(\mathbb{S}^{d-1}\). The activation \(\sigma:\mathbb{R}\to\mathbb{R}\) is assumed to be a fourth-degree polynomial with Legendre expansion \(\sigma(s)=\sum_{k=0}^{4}\hat{\sigma}_{k,d}\overline{P}_{k,d}(s)\). The simplified neural network defined in Eq. (3.1), even with infinite width (corresponding to a continuous distribution \(\rho\)), has a limited expressivity due to the lack of biases and trainable second-layer weights. We characterize the expressivity by the following lemma: **Lemma 3.1** (Expressivity).: _Let \(\gamma_{2}=\hat{h}_{2,d}/\hat{\sigma}_{2,d}\) and \(\gamma_{4}=\hat{h}_{4,d}/\hat{\sigma}_{4,d}\). Suppose for some \(d\) and \(q_{\star}\), there exists a network \(\rho\) such that \(f_{\rho}(x)=h(q_{\star}^{\top}x)\) on \(\mathbb{S}^{d-1}\). Then we have \(\hat{\sigma}_{0,d}=\hat{h}_{0,d}\), and \(0\leq\gamma_{2}^{2}\leq\gamma_{4}\leq\gamma_{2}\leq 1\).
Moreover, if this condition holds with strict inequalities, then for sufficiently large \(d\), there exists a network \(\rho\) such that \(f_{\rho}(x)=h(q_{\star}^{\top}x)\) on \(\mathbb{S}^{d-1}\). (A more explicit version is stated in Appendix D.1.)_ Informally, an almost sufficient and necessary condition to have \(f_{\rho}(x)=h(q_{\star}^{\top}x)\) for some \(\rho\) is that there exists a random variable \(w\) supported on \([0,1]\) such that \(\mathbb{E}[w^{2}]\approx\gamma_{2}\) and \(\mathbb{E}[w^{4}]\approx\gamma_{4}\), which is equivalent to \(0\leq\gamma_{2}^{2}\leq\gamma_{4}\leq\gamma_{2}\leq 1.\) In particular, assuming the existence of such a random variable \(w\), the network \(\rho\) that perfectly fits the target function has the form \[q_{\star}^{\top}u\stackrel{{ d}}{{=}}w,\text{ and }u-q_{\star}q_{\star}^{\top}u\mid q_{\star}^{\top}u\text{ is uniformly distributed in the subspace orthogonal to }q_{\star}\,. \tag{3.2}\] Motivated by this lemma, we will assume that \(\gamma_{2},\gamma_{4}\) are universal constants that satisfy \(0\leq\gamma_{2}^{2}\leq\gamma_{4}\leq\gamma_{2}\leq 1\), and \(d\) is chosen to be sufficiently large (depending on the choice of \(\gamma_{2}\) and \(\gamma_{4}\)). In addition, we assume that \(\gamma_{4}\leq O(\gamma_{2}^{2})\), that is, the inequality \(\gamma_{2}^{2}\leq\gamma_{4}\) is somewhat tight -- this ensures that the distribution of \(w=q_{\star}^{\top}u\) under the perfectly-fitted neural network is not too spread-out. We also assume that \(\gamma_{2}\) is smaller than a sufficiently small universal constant. This ensures that the distribution of \(q_{\star}^{\top}u\) under the perfectly-fitted network does not concentrate on \(1\), i.e. the distribution of \(u\) is not merely a point mass around \(q_{\star}\). In other words, this assumption restricts our setting to the most interesting case where the landscape has bad saddle points (and thus is fundamentally more challenging to analyze). We also assume for simplicity that \(\hat{\sigma}_{0,d}=\hat{h}_{0,d}\) (because otherwise, the activation introduces a constant bias that prohibits perfect fitting), even though adding a trainable scalar to the neural network formulation can remove the assumption. In summary, we make the following formal assumptions on the Legendre coefficients of the link function and the activation function. **Assumption 3.2**.: _Let \(\gamma_{2}=\hat{h}_{2,d}/\hat{\sigma}_{2,d}\) and \(\gamma_{4}=\hat{h}_{4,d}/\hat{\sigma}_{4,d}\). We first assume \(\gamma_{4}\geq 1.1\cdot\gamma_{2}^{2}\). For some universal constant \(c_{1}>1\), we assume that \(\hat{\sigma}_{2,d}^{2}/c_{1}\leq\hat{\sigma}_{4,d}^{2}\leq c_{1}\cdot\hat{\sigma}_{2,d}^{2}\), and \(\gamma_{4}\leq c_{1}\gamma_{2}^{2}\). For a sufficiently small universal constant \(c_{2}>0\) (which is chosen after \(c_{1}\) is determined), we assume \(0\leq\gamma_{2}\leq c_{2}\). We also assume that \(d\) is larger than a sufficiently large constant \(c_{3}\) (which is chosen after \(c_{1}\) and \(c_{2}\).) We also assume \(\hat{h}_{0,d}=\hat{\sigma}_{0,d}\), and \(\hat{h}_{1,d}=\hat{h}_{3,d}=0\)._ Our intention is to replicate the ReLU activation as well as possible with quartic polynomials; our assumption that \(\hat{\sigma}_{2,d}^{2}\asymp\hat{\sigma}_{4,d}^{2}\) is indeed satisfied by the quartic expansion of ReLU because \(\widehat{\operatorname{relu}_{2,d}}\asymp d^{-1/2}\) and \(\widehat{\operatorname{relu}_{4,d}}\asymp d^{-1/2}\) (see Proposition A.3).
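As a quick numerical illustration of this scaling, one can estimate \(\widehat{\operatorname{relu}_{k,d}}=\mathbb{E}_{t\sim\mu_{d}}[\operatorname{relu}(t)\overline{P}_{k,d}(t)]\) by Monte Carlo, sampling \(t\) as the first coordinate of a uniform point on \(\mathbb{S}^{d-1}\). The sketch below assumes the standard three-term recurrence \((k+d-3)P_{k,d}(t)=(2k+d-4)\,t\,P_{k-1,d}(t)-(k-1)P_{k-2,d}(t)\) for the unnormalized Legendre polynomials, which is not stated in this excerpt, together with the normalization \(N_{k,d}\) from Section 2:

```python
import numpy as np
from math import comb

def legendre(k, d, t):
    """Unnormalized Legendre polynomial P_{k,d}(t) via the three-term recurrence."""
    p_prev, p = np.ones_like(t), t
    if k == 0:
        return p_prev
    for j in range(2, k + 1):
        p_prev, p = p, ((2 * j + d - 4) * t * p - (j - 1) * p_prev) / (j + d - 3)
    return p

def relu_coeff(k, d, n_samples=2_000_000, seed=0):
    rng = np.random.default_rng(seed)
    # t ~ mu_d: first coordinate of a uniform point on the sphere, sampled as
    # g / sqrt(g^2 + chi2_{d-1}) to avoid materializing full d-dim Gaussians
    g = rng.standard_normal(n_samples)
    t = g / np.sqrt(g * g + rng.chisquare(d - 1, n_samples))
    n_kd = comb(d + k - 1, d - 1) - comb(d + k - 3, d - 1)
    return np.mean(np.maximum(t, 0.0) * np.sqrt(n_kd) * legendre(k, d, t))

for d in (50, 100, 200, 400):
    # sqrt(d) * coefficient should stay roughly constant if relu_hat_{k,d} ~ d^{-1/2}
    print(d, [round(float(np.sqrt(d) * relu_coeff(k, d)), 3) for k in (2, 4)])
```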
Following the convention defined in Section 2, we will simply write \(\hat{\sigma}_{4,d}^{2}\asymp\hat{\sigma}_{2,d}^{2}\), \(\gamma_{4}\geq 1.1\cdot\gamma_{2}^{2}\), and \(\gamma_{4}\lesssim\gamma_{2}^{2}\). _Data distribution, losses, and projected gradient flow._ The population data distribution is assumed to be uniform over \(\mathbb{S}^{d-1}\). We draw \(n\) training examples \(x_{1},\ldots,x_{n}\stackrel{{\mathrm{i.i.d}}}{{\sim}}\mathbb{S}^{d-1}\). Thus, the population and empirical mean-squared losses are: \[L(\rho)=\frac{1}{2}\cdot\operatorname*{\mathbb{E}}_{x\sim\mathbb{S}^{d-1}}\left(f_{\rho}(x)-y(x)\right)^{2},\qquad\qquad\text{ and }\quad\widehat{L}(\rho)=\frac{1}{2n}\sum_{i=1}^{n}\left(f_{\rho}(x_{i})-y(x_{i})\right)^{2}\,. \tag{3.3}\] To ensure that the weight vectors remain on \(\mathbb{S}^{d-1}\), we perform projected gradient flow on the empirical loss. We start by defining the gradient of the population loss \(L\) with respect to a particle \(u\) at \(\rho\) and the corresponding Riemannian gradient (which is simply the projection of the gradient to the tangent space of \(\mathbb{S}^{d-1}\)): \[\nabla_{u}L(\rho)=\operatorname*{\mathbb{E}}_{x\sim\mathbb{S}^{d-1}}\left[(f_{\rho}(x)-y(x))\sigma^{\prime}(u^{\top}x)x\right]\,,\qquad\qquad\text{and}\quad\operatorname{grad}_{u}L(\rho)=(I-uu^{\top})\nabla_{u}L(\rho)\,. \tag{3.4}\] Here we interpret \(L(\rho)\) as a function of a collection of particles (denoted by \(\rho\)) and \(\nabla_{u}L(\rho)\) as the partial derivative with respect to a single particle \(u\) evaluated at \(\rho\). Similarly, the (Riemannian) gradient of the empirical loss \(\widehat{L}\) with respect to the particle \(u\) is defined as \[\nabla_{u}\widehat{L}(\rho)=\frac{1}{n}\sum_{i=1}^{n}(f_{\rho}(x_{i})-y(x_{i}))\sigma^{\prime}(u^{\top}x_{i})x_{i}\,,\qquad\qquad\text{and}\quad\operatorname{grad}_{u}\widehat{L}(\rho)=(I-uu^{\top})\nabla_{u}\widehat{L}(\rho)\,.\] **Population, Infinite-Width Dynamics.** Let the initial distribution \(\rho_{0}\) of the infinite-width neural network be the uniform distribution over \(\mathbb{S}^{d-1}\). We use \(\chi\) to denote a particle sampled uniformly at random from the initial distribution \(\rho_{0}\). A particle initialized at \(\chi\) follows a deterministic trajectory afterwards -- we use \(u_{t}(\chi)\) to denote the location, at time \(t\), of the particle that was initialized at \(\chi\). Because \(u_{t}(\cdot)\) is a deterministic function, we can use \(\chi\) to index the particles at any time based on their initialization. The projected gradient flow on an infinite-width neural network and using population loss \(L\) can be described as \[\forall\chi\in\mathbb{S}^{d-1},u_{0}(\chi) =\chi\,, \tag{3.5}\] \[\frac{du_{t}(\chi)}{dt} =-\text{grad}_{u_{t}}L(\rho_{t})\,,\] (3.6) \[\text{and }\rho_{t} =\text{distribution of }u_{t}(\chi)\text{ (where }\chi\sim\rho_{0})\,. \tag{3.7}\] **Empirical, Finite-Width Dynamics.** The training dynamics of a neural network with width \(m\) can be described in this language by setting the initial distribution to be a discrete distribution uniformly supported on \(m\) initial weight vectors. The update rule will maintain that at any time, the distribution of neurons is uniformly supported over \(m\) items and thus still corresponds to a width-\(m\) neural network.
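For concreteness, the following is a minimal discrete-time sketch of these projected updates, with explicit Euler steps of a small step size standing in for the continuous-time flow; the particular even quartic activation and link function, and all sizes, are illustrative choices rather than the paper's exact setting:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n, eta = 50, 512, 5000, 0.1

def sphere(rows, dim):
    v = rng.standard_normal((rows, dim))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

sigma  = lambda s: s**2 + s**4        # illustrative even quartic activation
dsigma = lambda s: 2 * s + 4 * s**3
h      = lambda s: 2.0 * s**2         # illustrative even quartic link function

q_star = sphere(1, d)[0]
X = sphere(n, d)                      # x_i drawn uniformly from S^{d-1}
y = h(X @ q_star)
U = sphere(m, d)                      # unit-norm first-layer weights u_1, ..., u_m

for _ in range(500):
    S = X @ U.T                                    # (n, m) pre-activations
    residual = sigma(S).mean(axis=1) - y           # f_rho(x_i) - y(x_i)
    G = (residual[:, None] * dsigma(S)).T @ X / n  # per-particle gradient of L-hat
    G -= np.sum(G * U, axis=1, keepdims=True) * U  # project onto tangent space (I - uu^T)
    U -= eta * G
    U /= np.linalg.norm(U, axis=1, keepdims=True)  # retract back to the sphere
```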
Let \(\chi_{1},\dots,\chi_{m}\overset{\text{i.i.d}}{\sim}\mathbb{S}^{d-1}\) be the initial weight vectors of the width-\(m\) neural network, and let \(\hat{\rho}_{0}=\text{unif}(\{\chi_{1},\dots,\chi_{m}\})\) be the uniform distribution over these initial neurons. We use \(\chi\in\{\chi_{1},\dots,\chi_{m}\}\) to index neurons and denote a single initial neuron as \(\hat{u}_{0}(\chi)=\chi\). Then, we can describe the projected gradient flow on the empirical loss \(\widehat{L}\) with initialization \(\{\chi_{1},\dots,\chi_{m}\}\) by: \[\frac{d\hat{u}_{t}(\chi)}{dt} =-\text{grad}_{\hat{u}_{t}(\chi)}\widehat{L}(\hat{\rho}_{t})\,, \tag{3.8}\] \[\text{and }\hat{\rho}_{t} =\text{distribution of }\hat{u}_{t}(\chi)\text{ (where }\chi\sim\hat{\rho}_{0})\,. \tag{3.9}\] We first state our result on the population, infinite-width dynamics. **Theorem 3.3** (Population, infinite-width dynamics).: _Suppose Assumption 3.2 holds, and let \(\epsilon\in(0,1)\) be the target error. Let \(\rho_{t}\) be the result of projected gradient flow on the population loss, initialized with the uniform distribution on \(\mathbb{S}^{d-1}\), as defined in Eq. (3.6). Let \(T_{*,\epsilon}=\inf\{t>0\mid L(\rho_{t})\leq\frac{1}{2}(\hat{\sigma}_{2,d}^{2}+\hat{\sigma}_{4,d}^{2})\epsilon^{2}\}\) be the earliest time \(t\) such that a loss of at most \(\frac{1}{2}(\hat{\sigma}_{2,d}^{2}+\hat{\sigma}_{4,d}^{2})\epsilon^{2}\) is reached. Then, we have_ \[T_{*,\epsilon}\lesssim\frac{1}{\hat{\sigma}_{2,d}^{2}\gamma_{2}}\log d+\frac{(\log\log d)^{20}}{\hat{\sigma}_{2,d}^{2}\epsilon\gamma_{2}^{8}}\log\Big{(}\frac{\gamma_{2}}{\epsilon}\Big{)}\,. \tag{3.10}\] The first term on the right-hand side of Eq. (3.10) corresponds to the burn-in time for the network to reach a region where the Polyak-Lojasiewicz condition holds. We divide our analysis of this burn-in phase into two phases, Phase 1 and Phase 2, and we obtain tight control on the factor by which the signal component, \(q_{*}^{\top}u\), grows during Phase 1, while Phase 2 takes place for a comparatively short period of time. (This tight control is critical for our sample complexity bounds where we must show that the coupling error does not blow up too much -- see more discussion below Lemma 5.4.) Phases 1 and 2 are mostly governed by the quadratic components in the activation and target functions, and the dynamics behave similarly to a power method update. After the burn-in phase, the dynamics operate for a short period of time (Phase 3) in a regime where the Polyak-Lojasiewicz condition holds. We explicitly prove the dynamics stay away from saddle points during this phase, as further discussed in Section 4.2. We note that Theorem 3.3 provides a concrete polynomial runtime bound for projected gradient flow which is not achievable by prior mean-field analyses (Chizat and Bach, 2018a; Mei et al., 2018) using Wasserstein gradient flow techniques. The main challenge in the proof is to deal with the saddle points that are not strict-saddle (Ge et al., 2015) in the loss landscape, which cannot be escaped simply by adding noise in the parameter space (Jin et al., 2021; Lee et al., 2016; Daneshmand et al., 2018).1 We instead develop a fine-grained analysis of the dynamics showing that the iterates stay away from saddle points; this yields the running time bound in Theorem 3.3, which can be translated into a polynomial-width guarantee in Theorem 3.4. In contrast, Wei et al.
(2019) escape the saddles by randomly replacing an exponentially small fraction of neurons, which makes the network require exponential width. Footnote 1: There is a long list of prior works on the loss landscape of neural networks (Mei et al., 2018; Soltanolkotabi, 2017; Vardi et al., 2021; Kakade et al., 2011; Bietti et al., 2022; Ben Arous et al., 2020; Arous et al., 2021; Soudry and Carmon, 2016; Tian, 2017; Ge et al., 2017; Zhang et al., 2019; Brutzkus and Globerson, 2017). Next, we state the main theorem on projected gradient flow on empirical, finite-width dynamics. **Theorem 3.4** (Empirical, finite-width dynamics).: _Suppose Assumption 3.2 holds. Suppose \(\epsilon=\frac{1}{\log\log d}\) is the target error and \(T_{*,\epsilon}\) is the running time defined in Theorem 3.3. Suppose \(n\geq d^{\mu}(\log d)^{O(1)}\) for any constant \(\mu>3\), and the network width \(m\) is equal to \(d^{C}\) for some sufficiently large universal constant \(C\). Let \(\hat{\rho}_{t}\) be the projected gradient flow on the empirical loss, initialized with \(m\) uniformly sampled weights, defined in Eq. (3.9). Then, with probability at least \(1-\frac{1}{d^{2(\log d)}}\) over the randomness of the data and the initialization of the finite-width network, we have that_ \[L(\hat{\rho}_{T_{*,\epsilon}})\lesssim(\hat{\sigma}_{2,d}^{2}+\hat{\sigma}_{4,d}^{2})\epsilon^{2}\,. \tag{3.11}\] Plugging in \(\mu=3.1\), Eq. (3.11) suggests that, when the network width is a sufficiently large polynomial in \(d\), the empirical dynamics of gradient descent could achieve \(\hat{\sigma}_{\max}^{2}(1/\log\log d)^{2}\) population loss with \(O(d^{3.1})\) samples. In our analysis, we will establish a coupling between neurons in the empirical dynamics and neurons in the population dynamics. The main challenge is to bound the coupling error during Phase 1, where the population dynamics are similar to the power method. During this phase, we show that the coupling error (i.e., the distance between coupled neurons) remains small by showing that it grows at most as fast as the signal \(q_{\star}^{\top}u\) in the population dynamics. Such a delicate relationship between the growth of the error and that of the signal is essential to proving our sample complexity bound; even an additional constant factor in the growth rate of the coupling error would lead to an additional \(\operatorname{poly}(d)\) factor in the sample complexity. Prior work (Mei et al., 2019) also establishes a coupling between the population and empirical dynamics. However, without the comparison with the growth of the signal, they only show a generic exponential growth rate for the coupling error. As a result, their technique cannot achieve a better sample complexity than the NTK. The following theorem states that in the setting of Theorem 3.4, kernel methods with any inner product kernel require \(\Omega(d^{4}(\ln d)^{-6})\) samples to achieve a non-trivial population loss. (Note that the zero function has loss \(\mathbb{E}_{x\sim\mathbb{S}^{d-1}}[y(x)^{2}]\geq(\hat{h}_{4,d})^{2}\).) **Theorem 3.5** (Sample complexity lower bound for kernel methods).: _Let \(K\) be an inner product kernel. Suppose \(d\) is larger than a universal constant and \(n\lesssim d^{4}(\ln d)^{-6}\).
Then, with probability at least \(1/2\) over the randomness of \(n\) i.i.d. data points \(\{x_{i}\}_{i=1}^{n}\) drawn from \(\mathbb{S}^{d-1}\), any estimator of \(y\) of the form \(f(x)=\sum_{i=1}^{n}\beta_{i}K(x_{i},x)\), where the coefficients \(\beta_{i}\) may depend on \(\{x_{i}\}_{i=1}^{n}\), must have a large error:_ \[\mathbb{E}_{x\sim\mathbb{S}^{d-1}}(y(x)-f(x))^{2}\geq\tfrac{3}{4}(\hat{h}_{4,d})^{2}. \tag{3.12}\] Theorem 3.4 and Theorem 3.5 together prove a clear sample complexity separation between gradient flow and NTK. When \(\hat{h}_{4,d}\asymp\hat{\sigma}_{\max}\) (i.e., \(\gamma_{2},\gamma_{4}\asymp 1\)), gradient flow with finite-width neural networks can achieve \((\hat{h}_{4,d})^{2}(\log\log d)^{-2}\) population error with \(d^{3.1}\) samples, while kernel methods with any inner product kernel (including NTK) must have an error at least \((\hat{h}_{4,d})^{2}/2\) with \(d^{3.9}\) samples. On a high level, Abbe et al. (2022, 2023) prove similar lower bounds in a different setting where the target function is drawn randomly and the data points can be arbitrary (also see Kamath et al. (2020); Hsu et al. (2021); Hsu (2021)). In comparison, our lower bound works for a fixed target function by exploiting the randomness of the data points. In fact, we can strengthen Theorem 3.5 by proving an \(\Omega(\hat{h}_{k,d}^{2})\) loss lower bound for any universal constant \(k\geq 0\) (Theorem E.2). We defer the proof of Theorem 3.5 to Appendix E. ## 4 Analysis of Population Dynamics In this section, we give an overview of the analysis for the population infinite-width dynamics \(\rho_{t}\) (Theorem 3.3). The full proof is given in Appendix B. ### Symmetry of the Population Dynamics A key observation is that due to the symmetry in the data, the population dynamics \(\rho_{t}\) has a symmetric distribution in the subspace orthogonal to the vector \(q_{\star}\). As a result, the dynamics \(\rho_{t}\) can be precisely characterized by the dynamics in the direction of \(q_{\star}\). Recall that \(w=q_{\star}^{\top}u\) denotes the projection of a weight vector \(u\) in the direction of \(q_{\star}\). Let \(z=(I-q_{\star}q_{\star}^{\top})u\) be the remaining component. For notational convenience, without loss of generality, we can assume that \(q_{\star}=e_{1}\) and write \(u=(w,z)\), where \(w\in[-1,1]\) and \(z\in\sqrt{1-w^{2}}\cdot\mathbb{S}^{d-2}\). _We will use this convention throughout the rest of the paper_. We will use \(w_{t}(\chi)\) to refer to the first coordinate of the particle \(u_{t}(\chi)\), and \(z_{t}(\chi)\) to refer to the last \((d-1)\) coordinates. **Definition 4.1** (Rotational invariance and symmetry).: _We say a neural network \(\rho\) is rotationally invariant if for \(u=(w,z)\sim\rho\), the distribution of \(z\mid w\) is uniform over \(\sqrt{1-w^{2}}\mathbb{S}^{d-2}\) almost surely. We also say the network is symmetric w.r.t. a variable \(w\) if the density of \(w\) is an even function._ We note that any polynomial-width neural network (e.g., \(\hat{\rho}\)) is very far from rotationally invariant, and therefore the definition is specifically used for population dynamics. If \(\rho\) is rotationally invariant and symmetric, then \(L(\rho)\) has a simpler form that only depends on the marginal distribution of \(w\). **Lemma 4.2**.: _Let \(\rho\) be a rotationally invariant neural network. Then, for any target function \(h\) and any activation \(\sigma\),_ \[L(\rho)=\tfrac{1}{2}\sum_{k=0}^{\infty}\Big{(}\hat{\sigma}_{k,d}\operatorname{\mathbb{E}}_{u\sim\rho}[P_{k,d}(w)]-\hat{h}_{k,d}\Big{)}^{2}\,.
\tag{4.1}\] _In addition, suppose \(h\) and \(\sigma\) satisfy Assumption 3.2 and \(\rho\) is symmetric. Then_ \[L(\rho)=\tfrac{\hat{\sigma}_{2,d}^{2}}{2}\Big{(}\operatorname{\mathbb{E}}_{u\sim\rho}[P_{2,d}(w)]-\gamma_{2}\Big{)}^{2}+\tfrac{\hat{\sigma}_{4,d}^{2}}{2}\Big{(}\operatorname{\mathbb{E}}_{u\sim\rho}[P_{4,d}(w)]-\gamma_{4}\Big{)}^{2}\,. \tag{4.2}\] The proof of Lemma 4.2 is deferred to Appendix B.1. Eq. (4.1) says that if \(\rho\) is rotationally invariant, then \(L(\rho)\) only depends on the marginal distribution of \(w\). Eq. (4.2) says that if the distribution of \(w\) is additionally symmetric and \(\sigma\) and \(h\) are quartic, then the terms corresponding to odd \(k\) and the higher order terms for \(k>4\) in Eq. (4.1) vanish. Note that \(P_{2,d}(s)\approx s^{2}\) and \(P_{4,d}(s)\approx s^{4}\) by Eq. (A.4). Thus, \(L(\rho)\) essentially corresponds to matching the second and fourth moments of \(w\) to some desired values \(\gamma_{2}\) and \(\gamma_{4}\). Inspired by this lemma, we define the following key quantities: for any time \(t\geq 0\), we define \(D_{2,t}=\operatorname{\mathbb{E}}_{u\sim\rho_{t}}[P_{2,d}(w)]-\gamma_{2}\) and \(D_{4,t}=\operatorname{\mathbb{E}}_{u\sim\rho_{t}}[P_{4,d}(w)]-\gamma_{4}\), where \(\rho_{t}\) is defined according to the population, infinite-width dynamics (Eq. (3.5), Eq. (3.6) and Eq. (3.7)). We next show that the rotational invariance and symmetry properties of \(\rho_{t}\) are indeed maintained: **Lemma 4.3**.: _Suppose we are in the setting of Theorem 3.3. At any time \(t\in[0,\infty)\), \(\rho_{t}\) is symmetric and rotationally invariant._ The proof of Lemma 4.3 is deferred to Appendix B.2. To show rotational invariance, we use Eq. (3.4) and the rotational invariance of the data in the last \((d-1)\) coordinates. To show symmetry, we use Eq. (4.1), and the facts that \(P_{k,d}\) is an odd polynomial for odd \(k\) and \(\rho_{t}\) is symmetric at initialization. As a consequence of Lemma 4.2 and Lemma 4.3, we obtain a simple formula for the dynamics of \(w_{t}\): **Lemma 4.4** (\(1\)-dimensional dynamics).: _Suppose we are in the setting of Theorem 3.3. Then, for any \(\chi\in\mathbb{S}^{d-1}\), writing \(w_{t}:=w_{t}(\chi)\), we have_ \[\frac{dw_{t}}{dt}=\underbrace{-(1-w_{t}^{2})\cdot(P_{t}(w_{t})+Q_{t}(w_{t}))}_{\triangleq v_{t}(w_{t})}, \tag{4.3}\] _where for any \(w\in[-1,1]\), we have \(P_{t}(w)=2\hat{\sigma}_{2,d}^{2}D_{2,t}w+4\hat{\sigma}_{4,d}^{2}D_{4,t}w^{3}\), and \(Q_{t}(w)={\lambda_{d}}^{(1)}w+{\lambda_{d}}^{(3)}w^{3}\), where \(|{\lambda_{d}}^{(1)}|,|{\lambda_{d}}^{(3)}|\lesssim\frac{\hat{\sigma}_{2,d}^{2}|D_{2,t}|+\hat{\sigma}_{4,d}^{2}|D_{4,t}|}{d}\). More specifically, \({\lambda_{d}}^{(1)}=2\hat{\sigma}_{2,d}^{2}D_{2,t}\cdot\frac{1}{d-1}-2\hat{\sigma}_{4,d}^{2}D_{4,t}\cdot\frac{6d+12}{d^{2}-1}\) and \({\lambda_{d}}^{(3)}=4\hat{\sigma}_{4,d}^{2}D_{4,t}\cdot\frac{6d+9}{d^{2}-1}\)._ Eq. (4.3) is a properly defined dynamics for \(w\) because the update rule for \(w_{t}\) only depends on \(w_{t}\) and the quantities \(D_{2,t}\) and \(D_{4,t}\) -- additionally, \(D_{2,t}\) and \(D_{4,t}\) only depend on the distribution of \(w\). As a slight abuse of notation, in the rest of this section, and in Appendix B, we will refer to the \(w_{t}(\chi)\) as particles -- this is well-defined by Lemma 4.4. This does not lead to ambiguity because we no longer need to consider the \(u_{t}(\chi)\) when analyzing the population dynamics.
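This reduction also makes the population dynamics cheap to simulate. A small Euler sketch of Eq. (4.3) is given below; it drops the \(O(1/d)\) correction \(Q_{t}\), uses the approximations \(P_{2,d}(w)\approx w^{2}\) and \(P_{4,d}(w)\approx w^{4}\) noted above, and all constants are illustrative rather than the paper's exact scalings:

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, dt = 1000, 100_000, 1e-2
s2 = s4 = 1.0                  # stand-ins for sigma_hat_{2,d}^2 and sigma_hat_{4,d}^2
g2, g4 = 0.05, 0.003           # gamma_2, gamma_4 with gamma_2^2 <= gamma_4 <= gamma_2

# w ~ mu_d: first coordinate of a uniform point on S^{d-1}
g = rng.standard_normal(m)
w = g / np.sqrt(g * g + rng.chisquare(d - 1, m))

for _ in range(20_000):
    D2 = np.mean(w**2) - g2
    D4 = np.mean(w**4) - g4
    v = -(1 - w**2) * (2 * s2 * D2 * w + 4 * s4 * D4 * w**3)   # Eq. (4.4)
    w = np.clip(w + dt * v, -1.0, 1.0)

D2, D4 = np.mean(w**2) - g2, np.mean(w**4) - g4
loss = 0.5 * s2 * D2**2 + 0.5 * s4 * D4**2   # Eq. (4.2) under the same approximations
print(loss, np.mean(w**2), np.mean(w**4))
```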
We also use \(\iota\in[-1,1]\) to refer to the first coordinate of \(\chi\), the initialization of a particle under the population dynamics. We note that \(\iota=\langle\chi,e_{1}\rangle\) and therefore, the distribution of \(\iota\) is \(\mu_{d}\). For any time \(t\geq 0\), we use \(w_{t}(\iota)\) to refer to \(w_{t}(\chi)\) for any \(\chi\in\mathbb{S}^{d-1}\) with \(\langle\chi,e_{1}\rangle=\iota\). This notation is also well-defined by Lemma 4.4. ### Analysis of One-Dimensional Population Dynamics We divide our proof of Theorem 3.3 into three phases, which are defined as follows. The first phase corresponds to the period of time during which, for most initializations \(\iota\), \(w_{t}(\iota)\) is still small. For ease of presentation, we will only consider particles \(w_{t}(\iota)\) for \(\iota>0\) in the following discussion. Our argument also applies for \(\iota<0\) by the symmetry of \(\rho_{t}\) (Lemma 4.3). **Definition 4.5** (Phase 1).: _Let \(w_{\max}=\frac{1}{\log d}\) and \(\iota_{\mathrm{U}}=\frac{\log d}{\sqrt{d}}\). Let \(T_{1}>0\) be the minimum time such that \(w_{T_{1}}(\iota_{\mathrm{U}})=w_{\max}\). We refer to the time interval \([0,T_{1}]\) as Phase 1._ By tail bounds for \(\mu_{d}\), at initialization, essentially all of the particles are less than \(\iota_{\mathrm{U}}=\frac{\log d}{\sqrt{d}}\) in magnitude. Since the magnitudes of the \(w_{t}(\iota)\) maintain their respective orders under the population dynamics (Proposition B.33), Phase 1 corresponds to the duration during which all but a negligible portion of the particles have \(|w_{t}(\iota)|\leq w_{\max}\). During Phase 1, we have \(D_{2,t},D_{4,t}<0\), and the term corresponding to \(D_{2,t}\) in the velocity (Eq. (4.3)) dominates, so all particles \(w\) grow by essentially the same \(\frac{\sqrt{d}}{(\log d)^{2}}\) factor. Note that the particles do not have a large velocity in absolute terms, and thus the loss does not decrease much during this phase. **Definition 4.6** (Phase 2).: _Let \(T_{2}>0\) be the minimum time such that either \(D_{2,T_{2}}=0\) or \(D_{4,T_{2}}=0\). We refer to the time interval \([T_{1},T_{2}]\) as Phase 2. Note that \(T_{2}>T_{1}\) by Lemma B.7._ In other words, Phase 2 corresponds to the time period after Phase 1, during which \(D_{2,t}\) and \(D_{4,t}\) are still negative. Afterwards, at least one of \(D_{2,t}\) or \(D_{4,t}\) becomes nonnegative. During Phase 2, a large fraction of particles grows to \(\mathrm{poly}(\frac{1}{\log\log d})\). Henceforth, all of these particles have a large velocity, and the loss decreases quickly for the rest of Phase 2. **Definition 4.7** (Phase 3).: _Phase 3 is defined as the time interval \([T_{2},\infty)\)._ This phase presents two cases: (i) \(D_{2,T_{2}}=0\) and \(D_{4,T_{2}}<0\), or (ii) \(D_{2,T_{2}}<0\) and \(D_{4,T_{2}}=0\). The analyses of the two cases are in Appendix B.5 and Appendix B.6 respectively. We give an overview of both cases below. Our main goal in Phase 3 is to show that if \((D_{2,t},D_{4,t})\) is far from \((0,0)\), then the velocity is large enough to allow a significant decrease in the loss function. Recall that the velocity \(v_{t}(w)=-(1-w^{2})(P_{t}(w)+Q_{t}(w))\) defined in Lemma 4.4 is approximately given by \[\frac{dw}{dt}\approx-(1-w^{2})P_{t}(w)=-(1-w^{2})\left(2\hat{\sigma}_{2,d}^{2}D_{2,t}w+4\hat{\sigma}_{4,d}^{2}D_{4,t}w^{3}\right)\,. \tag{4.4}\] **Phase 3, First Case** If \(D_{2,T_{2}}=0\) and \(D_{4,T_{2}}<0\), then for all \(t\geq T_{2}\), we will have \(D_{2,t}\geq 0\) and \(D_{4,t}\leq 0\).
(See Lemma B.17 and Lemma B.18.) Lemma B.15 implies that when \(D_{2,t}\geq 0\) and \(D_{4,t}\leq 0\), a \(\text{poly}(\gamma_{2})\) fraction of particles will be far from \(0\) and \(1\), which are two of the roots of the velocity function. However, this does not guarantee a large average velocity -- unlike in Phases 1 and 2, the velocity may have a positive root \(r\in(0,1)\), which can give rise to bad stationary points. As a concrete example, if Assumption 3.2 holds, there exists some \(r\) with \(\frac{1}{\log\log d}\lesssim r\leq 1\) such that if \(\rho\) is the singleton distribution which assigns all of its probability to \(r\), then the velocity under \(\rho\) at \(r\) is exactly \(0\) (Lemma B.31). This is because the velocity of \(r\) under \(\rho\) has a factor \(\hat{\sigma}_{2,d}^{2}(P_{2,d}(r)-\gamma_{2})P_{2,d}{}^{\prime}(r)+\hat{\sigma}_{4,d}^{2}(P_{4,d}(r)-\gamma_{4})P_{4,d}{}^{\prime}(r)\), and this sixth-degree polynomial in \(r\) has a root between \(\frac{1}{\log\log d}\) and \(1\), by the definitions of \(P_{2,d}\) and \(P_{4,d}\) (Eq. (A.4)). Thus, we must leverage some information about the trajectory to show that the average velocity is large when \(D_{2,t}\) and \(D_{4,t}\) have different signs. To show that the average velocity is large, it suffices to show that the particles are not concentrated around the root, that is, that \(\mathbb{E}_{u\sim\rho_{t}}[(w-r)^{2}]\gtrsim\mathbb{E}_{w,w^{\prime}\sim\rho_{t}}(w-w^{\prime})^{2}\) is large. However, it is quite possible that for two particles \(w\) and \(w^{\prime}\), the distance \(|w-w^{\prime}|\) decreases over time. Our main technique to control the distance between pairs of particles is as follows. We define a potential function \(\Phi(w):=\log(\frac{w}{\sqrt{1-w^{2}}})\). (Note that \(\Phi^{\prime}(w)=\frac{1}{w(1-w^{2})}\) for \(w>0\), so along the dynamics \(\frac{d}{dt}\Phi(w_{t})=-\frac{P_{t}(w_{t})+Q_{t}(w_{t})}{w_{t}}\), which cancels the degenerate factors \(w\) and \(1-w^{2}\) in the velocity.) We show that \(|\Phi(w)-\Phi(w^{\prime})|\) is always increasing for any two particles \(w,w^{\prime}\) with the same sign, during Phases 1 and 2, as well as during Phase 3 if \(D_{2,T_{2}}=0\) and \(D_{4,T_{2}}<0\) (Lemma B.11). Using the fact that there is a large portion of particles away from \(0\) and \(1\) as long as \(D_{2,t}>0\) and \(D_{4,t}<0\), and because \(\Phi\) is Lipschitz on an interval bounded away from \(0\) and \(1\), we show that the particles which are away from \(0\) and \(1\) also have large variance, implying that the average velocity is large. **Phase 3, Second Case** For the case \(D_{4,T_{2}}=0\) and \(D_{2,T_{2}}<0\), we use a somewhat different argument which also involves \(\Phi\). We again have to deal with the fact that the velocity \(v_{t}(w)\) may have a positive root \(r\). The key point is that, due to the constraints \(D_{2,t}\leq 0\) and \(D_{4,t}\geq 0\) which hold during Phase 3, Case 2 (Lemma B.25), it is not possible for all the particles \(w\) to be stuck at the root \(r\). This is because, by the definition of \(D_{2,t}\) and \(D_{4,t}\), we will approximately have \(\mathbb{E}[w^{2}]\leq\gamma_{2}\) and \(\mathbb{E}[w^{4}]\geq\gamma_{4}\), and if nearly all the particles \(w\) are stuck at \(r\), then this will imply that \(\gamma_{4}\approx\gamma_{2}^{2}\). This would lead to a contradiction since Assumption 3.2 states that \(\gamma_{4}\geq 1.1\gamma_{2}^{2}\). However, it is still possible for a large portion of particles \(w\) to be stuck at \(0\) or \(1\), meaning that these particles could have a very small velocity due to the factors \(w\) and \((1-w^{2})\) in \(v_{t}(w)\). To deal with such particles, we use the following argument.
First, by time \(T_{2}\), nearly all of the particles have grown to at least \(\text{poly}(\frac{1}{\log\log d})\) in absolute value, as mentioned above in the discussion of Phase 2. Additionally, because \(D_{4,T_{2}}=0\), we approximately have \(\mathbb{E}[w^{4}]\leq\gamma_{4}\) at time \(T_{2}\), so by Markov's inequality all but an \(O(\gamma_{4})\) fraction of particles are at most \(\frac{1}{2}\). For the purposes of this overview, we can refer to the set of particles which are larger than \(\text{poly}(\frac{1}{\log\log d})\) and smaller than \(\frac{1}{2}\) as \(\mathcal{S}_{1}\). We also use \(w_{t}(\iota_{\text{M}})\) to refer to the largest particle in \(\mathcal{S}_{1}\). Our key observation about \(\Phi\) in Phase 3, Case 2 is that for any two particles \(w,w^{\prime}\) with the same sign, \(|\Phi(w)-\Phi(w^{\prime})|\) is in fact _decreasing_ after the time \(T_{2}\) (Lemma B.20). Using this fact, along with our bound on the initial potential difference between any two particles in \(\mathcal{S}_{1}\) (Lemma B.21), we argue that if one of the particles in \(\mathcal{S}_{1}\) is extremely close to \(0\), then all of the particles in \(\mathcal{S}_{1}\) are extremely close to \(0\), and if one of the particles in \(\mathcal{S}_{1}\) is extremely close to \(1\), then all of the particles in \(\mathcal{S}_{1}\) are extremely close to \(1\) (see Lemma B.22 and its proof). By our definition of \(\mathcal{S}_{1}\), at most an \(O(\gamma_{4})\) fraction of particles are not in \(\mathcal{S}_{1}\), meaning that the constraints \(D_{2,t}\leq 0\) and \(D_{4,t}\geq 0\) are violated if either all particles in \(\mathcal{S}_{1}\) are close to \(0\) or all particles in \(\mathcal{S}_{1}\) are close to \(1\) (we make this argument precise in Lemma B.22). Thus, we can show that all the particles in \(\mathcal{S}_{1}\) are far from \(0\) and \(1\) during Phase 3, Case 2. However, it may be the case that all of the particles in \(\mathcal{S}_{1}\) are stuck at \(r\), since \(\mathcal{S}_{1}\) contains at least a \(1-O(\gamma_{4})\) fraction of the particles, but using the constraints \(D_{2,t}\leq 0\) and \(D_{4,t}\geq 0\), the best we can show is that an \(\Omega(\gamma_{4}^{5/4})\) fraction of particles is away from \(r\) (see the proof of Lemma B.29). Thus, in principle, it is possible that \(\mathcal{S}_{1}\), and the set of particles \(w\) which are far from \(r\), are disjoint. We can circumvent this issue by finally considering the particles \(w\) which are larger than \(w_{t}(\iota_{\text{M}})\). In principle, these particles can all be stuck at or near 1. However, when the positive root \(r\) of \(v_{t}(w)\) is less than \(\frac{1}{2}\), all the particles \(w\) which are close to 1 will be drawn towards \(r\), and will leave 1 at an exponentially fast rate (see the proof of Lemma B.28). Additionally, we show that the positive root \(r\) cannot be larger than \(\frac{1}{2}\) for too long, leveraging our observations about \(\mathcal{S}_{1}\) (see the proof of Lemma B.26). Thus, eventually, a \(1-\operatorname{poly}(\frac{1}{\log\log d})\) fraction of the particles will be away from 0 and 1. At this point, the average velocity of the particles in \(\rho_{t}\) will be large since an \(\Omega(\gamma_{4}^{5/4})\) fraction of the particles is away from \(r\), as mentioned above. We now state our conclusions for each of the phases.
Recall that \(T_{*,\epsilon}\) is the amount of time \(t\) needed until the loss \(L(\rho_{t})\) becomes less than \(\frac{1}{2}(\hat{\sigma}_{2,d}^{2}+\hat{\sigma}_{4,d}^{2})\epsilon^{2}\). **Lemma 4.8** (Phase 1).: _Suppose we are in the setting of Theorem 3.3. At the end of Phase 1, i.e. at time \(T_{1}\), for all \(\iota\in(0,\iota_{\text{U}})\), we have \(w_{T_{1}}(\iota)\asymp\frac{\sqrt{d}}{(\log d)^{2}}\iota\). Furthermore, \(T_{1}\lesssim\frac{1}{\hat{\sigma}_{2,d}^{2}\gamma_{2}}\log d\). Additionally, we have \(\exp\Big{(}\int_{0}^{T_{1}}2\hat{\sigma}_{2,d}^{2}|D_{2,t}|dt\Big{)}\asymp\frac{\sqrt{d}}{(\log d)^{2}}\)._ We do not make use of the explicit runtime \(T_{1}\) of Phase 1, but we state it for reference. The quantity \(\exp\Big{(}\int_{0}^{T_{1}}2\hat{\sigma}_{2,d}^{2}|D_{2,t}|dt\Big{)}\) is relevant for the analysis of the empirical, finite-width dynamics, as discussed in Section 5. It is approximately the factor by which the \(w_{t}(\iota)\) grow during Phase 1. **Lemma 4.9** (Phase 2).: _Suppose we are in the setting of Theorem 3.3. Let \(T_{2}\) be the minimum time such that either \(D_{2,T_{2}}=0\) or \(D_{4,T_{2}}=0\), i.e. \(T_{2}\) is the end of phase 2. Then, either \(T_{2}-T_{1}\lesssim\frac{(\log\log d)^{18}}{(\hat{\sigma}_{2,d}^{2}+\hat{\sigma}_{4,d}^{2})}\log\Big{(}\frac{\gamma_{2}}{\epsilon}\Big{)}\) or \(T_{*,\epsilon}-T_{1}\lesssim\frac{(\log\log d)^{18}}{(\hat{\sigma}_{2,d}^{2}+\hat{\sigma}_{4,d}^{2})}\log\Big{(}\frac{\gamma_{2}}{\epsilon}\Big{)}\)._ This lemma, which summarizes Phase 2, states that within \(\frac{(\log\log d)^{18}}{(\hat{\sigma}_{2,d}^{2}+\hat{\sigma}_{4,d}^{2})}\log\Big{(}\frac{\gamma_{2}}{\epsilon}\Big{)}\) time, either Phase 2 ends, or the loss is less than the desired value \(\frac{1}{2}(\hat{\sigma}_{2,d}^{2}+\hat{\sigma}_{4,d}^{2})\epsilon^{2}\). **Lemma 4.10** (Phase 3, Case 1).: _In the setting of Theorem 3.3, suppose that \(D_{2,T_{2}}=0\) and \(D_{4,T_{2}}<0\) (i.e. Phase 3, Case 1 holds). Then, the total amount of time \(t\geq T_{2}\) such that \(L(\rho_{t})\geq\frac{1}{2}(\hat{\sigma}_{2,d}^{2}+\hat{\sigma}_{4,d}^{2})\epsilon^{2}\) is at most \(O(\frac{\gamma_{2}}{\hat{\sigma}_{4,d}^{2}\gamma_{2}^{2}\xi^{4}\kappa^{6}}\log(\frac{\gamma_{2}}{\epsilon}))\)._ **Lemma 4.11** (Phase 3, Case 2).: _Suppose we are in the setting of Theorem 3.3. Assume \(D_{4,T_{2}}=0\) and \(D_{2,T_{2}}<0\), i.e. Phase 3 Case 2 holds. Then, at most \(O\Big{(}\frac{1}{\hat{\sigma}_{2,d}^{2}\gamma_{2}^{11/2}\xi^{6}\kappa^{12}}\log\Big{(}\frac{\gamma_{2}}{\epsilon}\Big{)}\Big{)}\) time \(t\geq T_{2}\) elapses while \(L(\rho_{t})\gtrsim(\hat{\sigma}_{2,d}^{2}+\hat{\sigma}_{4,d}^{2})\epsilon^{2}\)._ Combining these lemmas gives Theorem 3.3. ## 5 Analysis of Empirical, Finite-Width Dynamics The first step is to set up a coupling between the finite-width empirical dynamics and the infinite-width population dynamics -- we will eventually show that these two dynamics do not diverge very far, even up to time \(T_{*,\epsilon}\) (defined in Theorem 3.3). Coupling Between Empirical and Population Dynamics. To analyze the difference between the empirical dynamics (resulting in \(\hat{\rho}_{t}\)) and the population dynamics (resulting in \(\rho_{t}\)), we define an intermediate process \(\bar{\rho}_{t}\), which has finite support. Specifically, \(\bar{\rho}_{0}=\hat{\rho}_{0}\), and \(\bar{\rho}_{t}\) will always have support size \(m\), but the particles in \(\bar{\rho}\) are then updated according to the infinite-width population dynamics.
We formally define \(\bar{\rho}_{t}\) as follows: \[\bar{\rho}_{0} =\hat{\rho}_{0}=\text{unif}(\{\chi_{1},\dots,\chi_{m}\})\,,\] \[\text{and }\bar{\rho}_{t} =\text{distribution of }u_{t}(\chi)\text{ (where }\chi\sim\bar{\rho}_{0})\,. \tag{5.1}\] In other words, \(\bar{\rho}_{t}\) consists of the particles in \(\rho_{t}\) whose initialization is in \(\{\chi_{1},\dots,\chi_{m}\}\). Now, we define a coupling between \(\bar{\rho}_{t}\) and \(\hat{\rho}_{t}\). For \(\chi\sim\text{unif}(\{\chi_{1},\dots,\chi_{m}\})\), let \(\bar{u}_{t}(\chi)=u_{t}(\chi)\). Let \(\Gamma_{t}\) be the joint distribution of \((\bar{u}_{t}(\chi),\hat{u}_{t}(\chi))\). Then, \(\Gamma_{t}\) forms a natural coupling between \(\bar{\rho}_{t}\) and \(\hat{\rho}_{t}\). We will use \(\bar{u}_{t}\) and \(\hat{u}_{t}\) as shorthands for the random variables \(\bar{u}_{t}(\chi),\hat{u}_{t}(\chi)\) respectively in the rest of this section. Additionally, we define the average distance \(\overline{\Delta}_{t}^{2}:=\mathbb{E}_{(\bar{u}_{t},\hat{u}_{t})\sim\Gamma_{t}}[\|\hat{u}_{t}-\bar{u}_{t}\|_{2}^{2}]\). Intuitively, \(f_{\hat{\rho}_{t}}(x)\) and \(f_{\bar{\rho}_{t}}(x)\) are close when \(\hat{u}_{t}\) and \(\bar{u}_{t}\) are close, which is formalized by the following lemma: **Lemma 5.1**.: _In the setting of Theorem 3.4, suppose the network width \(m\) satisfies \(m\leq d^{O(\log d)}\), and let \(T\leq\frac{d^{C}}{\hat{\sigma}_{2,d}^{2}+\hat{\sigma}_{4,d}^{2}}\) for some universal constant \(C>0\). Then, with probability at least \(1-\exp(-d^{2})\) over the randomness of \(\{\chi_{1},\dots,\chi_{m}\}\), we have for all \(t\leq T\) that_ \[\mathbb{E}_{x\sim\mathbb{S}^{d-1}}[(f_{\rho_{t}}(x)-f_{\bar{\rho}_{t}}(x))^{2}]\lesssim\frac{(\hat{\sigma}_{2,d}^{2}+\hat{\sigma}_{4,d}^{2})d^{2}(\log d)^{O(1)}}{m}\,. \tag{5.2}\] _Additionally, for all \(t\geq 0\), we have_ \[\mathbb{E}_{x\sim\mathbb{S}^{d-1}}[(f_{\hat{\rho}_{t}}(x)-f_{\bar{\rho}_{t}}(x))^{2}]\lesssim(\hat{\sigma}_{2,d}^{2}+\hat{\sigma}_{4,d}^{2})\overline{\Delta}_{t}^{2}\,. \tag{5.3}\] The proof of Lemma 5.1 is deferred to Appendix C. As a simple corollary of Lemma 5.1, we can upper bound \(\mathbb{E}_{x\sim\mathbb{S}^{d-1}}[(f_{\rho_{t}}(x)-f_{\hat{\rho}_{t}}(x))^{2}]\) by the triangle inequality. Thus, so long as \(\overline{\Delta}_{T_{*,\epsilon}}\) is small, we can show that the empirical and population dynamics achieve similar test error at time \(T_{*,\epsilon}\). Upper Bound for \(\overline{\Delta}_{T_{*,\epsilon}}\). The following lemma gives our upper bound on \(\overline{\Delta}_{T_{*,\epsilon}}\): **Lemma 5.2** (Final Bound on \(\overline{\Delta}_{T_{*,\epsilon}}\)).: _In the setting of Theorem 3.4, we have_ \[\overline{\Delta}_{T_{*,\epsilon}}\leq d^{-\frac{\mu-1}{4}}\,. \tag{5.4}\] We prove Lemma 5.2 by induction over the time \(t\leq T_{*,\epsilon}\). Let us define the maximal distance \(\Delta_{\max,t}:=\max_{(\hat{u}_{t},\bar{u}_{t})\in\text{supp}(\Gamma_{t})}\|\hat{u}_{t}-\bar{u}_{t}\|_{2}\) where the max is over the _finite_ support of the coupling \(\Gamma_{t}\). We will control \(\overline{\Delta}_{t}\) and \(\Delta_{\max,t}\) simultaneously using induction. The inductive hypothesis is: **Assumption 5.3** (Inductive Hypothesis).: _For some universal constant \(1>\phi>1/2\) and \(\psi\in(0,\phi-\frac{1}{2})\), we say that the inductive hypothesis holds at time \(T\) if, for all \(0\leq t\leq T\), we have_ \[\overline{\Delta}_{t}\leq d^{-\phi}\quad\text{and}\quad\Delta_{\max,t}\leq d^{-\psi}\,.
\tag{5.5}\]

Showing that the inductive hypothesis holds for all \(t\) requires studying the growth of the error \(\|\hat{u}_{t}-\bar{u}_{t}\|\). We analyze it using the following decomposition: \[\frac{d}{dt}\|\hat{u}_{t}-\bar{u}_{t}\|_{2}^{2}=A_{t}+B_{t}+C_{t}\,, \tag{5.6}\] where \(A_{t}:=-2\langle\text{grad}_{\hat{u}}L(\rho_{t})-\text{grad}_{\bar{u}}L(\rho_{t}),\hat{u}_{t}-\bar{u}_{t}\rangle\), \(B_{t}:=-2\langle\text{grad}_{\hat{u}}L(\hat{\rho}_{t})-\text{grad}_{\hat{u}}L(\rho_{t}),\hat{u}_{t}-\bar{u}_{t}\rangle\), and \(C_{t}=-2\langle\text{grad}_{\hat{u}}\widehat{L}(\hat{\rho}_{t})-\text{grad}_{\hat{u}}L(\hat{\rho}_{t}),\hat{u}_{t}-\bar{u}_{t}\rangle\).

Footnote 2: Note that \(\hat{u}_{t}-\bar{u}_{t}\) and \(A_{t}\), \(B_{t}\) and \(C_{t}\) implicitly depend on the initialization \(\chi\), and could be written as \(A_{t}(\chi),B_{t}(\chi),C_{t}(\chi)\), and \(\delta_{t}(\chi)\), but we omit this dependence for ease of presentation.

Intuitively, \(A_{t}\) captures the growth of the coupling error due to the population dynamics, \(B_{t}\) captures the growth of the coupling error due to the discrepancy between the gradients from the finite-width and infinite-width networks, and \(C_{t}\) captures the growth of the coupling error due to the discrepancy between finite samples and infinite samples. We will use the following lemma to give an upper bound on each of these three terms.

**Lemma 5.4** (Bounds on \(A_{t},B_{t},C_{t}\)).: _In the setting of Theorem 3.4, suppose the inductive hypothesis (Assumption 5.3) holds up to time \(t\), for some \(t\leq T_{*,\epsilon}\). Let \(\delta_{t}:=\hat{u}_{t}-\bar{u}_{t}\). Let \(T_{1}\) be the runtime of Phase 1 (where Phase 1 is defined in Definition 4.5). Then, we have_ \[A_{t}\leq\begin{cases}4\hat{\sigma}_{2,d}^{2}|D_{2,t}|\left\|\delta_{t}\right\|^{2}+O\big{(}\tfrac{\hat{\sigma}_{2,d}^{2}}{\log d}|D_{2,t}|\left\|\delta_{t}\right\|^{2}\big{)}&\text{if }0\leq t\leq T_{1}\\ O(1)\cdot(\hat{\sigma}_{2,d}^{2}+\hat{\sigma}_{4,d}^{2})\gamma_{2}\left\|\delta_{t}\right\|^{2}&\text{if }T_{1}\leq t\leq T_{*,\epsilon}\end{cases}. \tag{5.7}\] _Additionally, assuming that the width \(m\) is at most \(d^{O(\log d)}\), we have with probability at least \(1-e^{-d^{2}}\) over the randomness of the initialization \(\hat{\rho}_{0}\) that \(B_{t}\lesssim(\hat{\sigma}_{2,d}^{2}+\hat{\sigma}_{4,d}^{2})\cdot\Big{(}\tfrac{d(\log d)^{O(1)}}{\sqrt{m}}+\overline{\Delta}_{t}\Big{)}\|\delta_{t}\|_{2}\) and \(\mathbb{E}[B_{t}]\lesssim\frac{(\hat{\sigma}_{2,d}^{2}+\hat{\sigma}_{4,d}^{2})d(\log d)^{O(1)}}{\sqrt{m}}\overline{\Delta}_{t}+(\hat{\sigma}_{2,d}^{2}+\hat{\sigma}_{4,d}^{2})\overline{\Delta}_{t}^{3}\)._ _Finally, assume that the number of samples \(n\leq d^{C}\) and the width \(m\) is equal to \(d^{C}\) for any universal constant \(C>0\). Assume that \(\overline{\Delta}_{t}\leq\frac{1}{\sqrt{d}}\) (i.e. \(\phi>\frac{1}{2}\)) and that \(\psi<\frac{1}{2}\).
Then, with probability \(1-\frac{1}{d^{2(\log d)}}\) over the randomness of the dataset, we have \(C_{t}\lesssim(\hat{\sigma}_{2,d}^{2}+\hat{\sigma}_{4,d}^{2})(\log d)^{O(1)}\cdot\Big{(}\sqrt{\frac{d}{n}}\|\delta_{t}\|_{2}+\tfrac{d^{2-2\psi}}{n}\overline{\Delta}_{t}\|\delta_{t}\|_{2}^{2}+\tfrac{d^{2.5-2\psi}}{n}\overline{\Delta}_{t}^{2}\|\delta_{t}\|_{2}\Big{)}\)._

The proof can be found in Appendix C. We give an explanation for each of these bounds below. For \(A_{t}\), we establish two separate bounds, one which holds during Phase 1, and one which holds during Phases 2 and 3. Intuitively, the signal part in a neuron (i.e. \(w^{2}\)) grows at a rate \(4\hat{\sigma}_{2,d}^{2}|D_{2,t}|\) during Phase 1 (which follows from Lemma 4.4) and in Eq. (5.7) we show that the growth rate of the coupling error due to the growth in the signal is also at most \(4\hat{\sigma}_{2,d}^{2}|D_{2,t}|\). Intuitively, the growth rates match because the signal parts of the neurons follow similar dynamics to a power method during Phase 1. For example, in the worst case when \(\delta_{t}\) is mostly in the direction of \(e_{1}\), the factor by which \(\delta_{t}\) grows is the same as that of \(w\). More mathematically, this is due to the fact that the first-order term in Lemma 4.4 is the dominant term in \(v(w)-v(\hat{w})\) for any two particles \(w,\hat{w}\) (where \(v(w)\) is the one-dimensional velocity defined in the previous section). To get a sample complexity better than NTK, the precise constant factor \(4\) in the growth rate \(4\hat{\sigma}_{2,d}^{2}|D_{2,t}|w\) is important -- a larger constant would lead to \(\delta_{T_{1}}\) being larger by a \(\operatorname{poly}(d)\) factor, thus increasing the sample complexity by \(\operatorname{poly}(d)\). The same bound for \(A_{t}\) no longer applies during Phases 2 and 3. This is because the dynamics of \(w\) are no longer similar to a power method, and the higher-order terms may make a larger contribution to \(v(w)-v(\hat{w})\) than to the growth of \(w\), or vice versa. Thus we use a looser bound \(O(1)\cdot(\hat{\sigma}_{2,d}^{2}+\hat{\sigma}_{4,d}^{2})\gamma_{2}\left\|\delta_{t}\right\|^{2}\) in Eq. (5.7). However, since the remaining running time after Phase 1, \(T_{*,\epsilon}-T_{1}\), is very short (only \(\operatorname{poly}(\log\log d)\) -- this corresponds to the second term of the running time in Theorem 3.3), the total growth of the coupling error is at most \(\exp(\operatorname{poly}(\log\log d))\), which is sub-polynomial and does not contribute to the sample complexity. Note that if the running time during Phases 2 and 3 were longer (e.g. \(O(\log d)\)) then the coupling error would grow by an additional \(d^{O(1)}\) factor during Phases 2 and 3, which would make our sample complexity worse than NTK. Our upper bounds on \(B_{t}\) and \(C_{t}\) capture the growth of the coupling error due to the finite width and samples. We establish a stronger bound for \(\mathbb{E}[B_{t}]\) than for \(B_{t}\). In our proof, we use the bounds for \(B_{t}\) and \(\mathbb{E}[B_{t}]\) to prove the inductive hypothesis for \(\overline{\Delta}_{t}\) and \(\Delta_{\max,t}\) respectively.
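As a sanity check on the decomposition in Eq. (5.6), note that it is an add-and-subtract identity; assuming, as in the coupling construction above, that the particles evolve by \(\frac{d}{dt}\hat{u}_{t}=-\text{grad}_{\hat{u}}\widehat{L}(\hat{\rho}_{t})\) and \(\frac{d}{dt}\bar{u}_{t}=-\text{grad}_{\bar{u}}L(\rho_{t})\), the chain rule gives
\[\frac{d}{dt}\|\hat{u}_{t}-\bar{u}_{t}\|_{2}^{2}=-2\big\langle\text{grad}_{\hat{u}}\widehat{L}(\hat{\rho}_{t})-\text{grad}_{\bar{u}}L(\rho_{t}),\,\hat{u}_{t}-\bar{u}_{t}\big\rangle\,,\]
and inserting \(\pm\,\text{grad}_{\hat{u}}L(\rho_{t})\) and \(\pm\,\text{grad}_{\hat{u}}L(\hat{\rho}_{t})\) into the first argument splits the right-hand side exactly into the three terms \(A_{t}\), \(B_{t}\), and \(C_{t}\) defined above.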
We prove the bound for \(C_{t}\) by expanding the error due to finite samples into second-order and fourth-order polynomials and applying concentration inequalities for higher moments (Lemma F.7). All of these bounds together control the growth of \(\|\hat{u}_{t}-\bar{u}_{t}\|\) at time \(t\). Using Lemma 5.4 we can show that for some properly chosen \(\phi\) and \(\psi\), the inductive hypothesis in Assumption 5.3 holds until \(T_{*,\epsilon}\) (see Lemma C.3 for the statement), which naturally leads to the bound in Lemma 5.2. The full proof can be found in Appendix C.

## 6 Conclusion

In this paper, we prove a clear sample complexity separation between vanilla gradient flow and kernel methods with any inner product kernel, including NTK. Our work leads to several directions for future research. The first question is to generalize our results to the ReLU activation. Another question is whether gradient descent can achieve less than \(d^{3}\) sample complexity in our setting. A final open question is whether two-layer neural networks can be shown to attain arbitrarily small generalization error \(\epsilon\) -- one limitation of our analysis is that we require \(\epsilon\geq\text{poly}(1/\log\log d)\).

## Acknowledgments

The authors would like to thank Zhiyuan Li for helpful discussions. The authors also gratefully acknowledge the support of NSF IIS 2045685, NSF Grant CCF-1844628 and a Sloan Research Fellowship.
2304.03698
Deepfake Detection with Deep Learning: Convolutional Neural Networks versus Transformers
The rapid evolvement of deepfake creation technologies is seriously threatening media information trustworthiness. The consequences impacting targeted individuals and institutions can be dire. In this work, we study the evolutions of deep learning architectures, particularly CNNs and Transformers. We identified eight promising deep learning architectures, designed and developed our deepfake detection models and conducted experiments over well-established deepfake datasets. These datasets included the latest second and third generation deepfake datasets. We evaluated the effectiveness of our developed single model detectors in deepfake detection and cross datasets evaluations. We achieved 88.74%, 99.53%, 97.68%, 99.73% and 92.02% accuracy and 99.95%, 100%, 99.88%, 99.99% and 97.61% AUC, in the detection of FF++ 2020, Google DFD, Celeb-DF, Deeper Forensics and DFDC deepfakes, respectively. We also identified and showed the unique strengths of CNNs and Transformers models and analysed the observed relationships among the different deepfake datasets, to aid future developments in this area.
Vrizlynn L. L. Thing
2023-04-07T15:33:09Z
http://arxiv.org/abs/2304.03698v1
# Deepfake Detection with Deep Learning: Convolutional Neural Networks versus Transformers

###### Abstract

The rapid evolvement of deepfake creation technologies is seriously threatening media information trustworthiness. The consequences impacting targeted individuals and institutions can be dire. In this work, we study the evolutions of deep learning architectures, particularly CNNs and Transformers. We identified eight promising deep learning architectures, designed and developed our deepfake detection models and conducted experiments over well-established deepfake datasets. These datasets included the latest second and third generation deepfake datasets. We evaluated the effectiveness of our developed single model detectors in deepfake detection and cross datasets evaluations. We achieved 88.74%, 99.53%, 97.68%, 99.73% and 92.02% accuracy and 99.95%, 100%, 99.88%, 99.99% and 97.61% AUC, in the detection of FF++ 2020, Google DFD, Celeb-DF, Deeper Forensics and DFDC deepfakes, respectively. We also identified and showed the unique strengths of CNNs and Transformers models and analysed the observed relationships among the different deepfake datasets, to aid future developments in this area. _Keywords -- deepfakes, misinformation, detection, deep learning, convolutional neural networks, transformers, authenticity verification_

## I Introduction

The emergence of deepfake technologies has brought about advancement in the creation of arts [1] and visual effects in films [2][3][4]. At the same time, adversaries are abusing deepfakes for the widespread generation and circulation of misinformation. It is a known fact that digital imagery has a powerful effect on human beings [5]. As such, the ease of generating convincing and manipulative deepfakes is seriously threatening the trustworthiness of information. As these deepfakes targeted at individuals and institutions are made widely and readily available on social media platforms, they can lead to serious political, social, financial and legal consequences [6]. To aid in the research of deepfake detection technologies, several datasets, namely UADFV [7], DF-TIMIT [8], FF++ [9], Google DFD [10], DFDC Preview [11], Celeb-DF [12], Deeper Forensics [13] and DFDC [14], have been created using various deepfake generation technologies such as FakeApp [15], Faceswap-GAN [16], Deepfakes-Faceswap [17], Face2Face [18], FaceSwap [19], Neural Textures [20], MMNN Face Swap [21], Neural Talking Heads [22], FSGAN [23], StyleGAN [24]. Based on the release date, synthesis algorithms, quantity and quality of the generated deepfakes, [7][8][9] have been categorized as the first generation deepfake datasets, [10][11][12] as the second-generation datasets, and [13][14] as the third-generation datasets in the research community. Table 1 shows the deepfake datasets with their corresponding statistics, generation methods, and release date. These datasets have helped enable the development and evaluation of machine learning, and in particular, deep learning models for enhanced deepfake detection. In the past few years, deep convolutional neural networks (CNN) have shown their efficacy in continuously pushing the boundaries for better deepfake detection [25][9][26][27][28][29][30][14]. In the 2019-2020 Kaggle Deepfake Detection Challenge [14], the top 3 submissions were based on the EfficientNet B7 [31] and XceptionNet [32] architectures.
However, these prior works were often carried out with self-generated and/or different public datasets, lacked a rationale for the choice of deep learning architecture, and reported either the overall average AUC or accuracy as the performance metric, which is often unsuitable due to the severe data imbalance in the datasets. In this work, we aim to study the recent evolution of the deep learning architectures, specifically CNNs and Transformers. We will design and implement detection models based on the identified promising architectures and explore their specific contributions to deepfake detection through a comprehensive experimental evaluation over the latest second and third generations of public deepfake datasets. This paper is structured as follows. In Section II, we analyse and discuss related works that utilized deep learning models for deepfake detection. We also share our key observations in the related works that motivate and shape our work in this paper. In Sections III and IV, we discuss the evolution of CNNs and Transformers, respectively. In Section V, we present our proposed work and experiments, and discuss our results. We conclude the paper in Section VI.

\begin{table}
\begin{tabular}{|l|l|l|l|l|}
\hline
**Dataset** & **Number of real videos** & **Number of fake videos** & **Methods** & **Release Date** \\
\hline
UADFV [7] & 49 & 49 & [15] & 2018.11 \\
\hline
DF-TIMIT [8] & 320 & 320 HQ, 320 LQ & [16] & 2018.12 \\
\hline
FF++ [9] & 1000 & 4000 & [17][18][19] & 2019.01 \\
\hline
Google DFD [10] & & & & 2019.09 \\
\hline
DFDC Preview [11] & & & & 2019.10 \\
\hline
Celeb-DF [12] & & & & 2019.11 \\
\hline
Deeper Forensics [13] & & & & 2020.01 \\
\hline
DFDC [14] & & & & 2020.06 \\
\hline
\end{tabular}
\end{table} Table 1: Deepfake datasets with their statistics, generation methods, and release dates

## II Related Work Analysis

In [25], the authors created four CNN models based on the VGG16 [33], ResNet50, ResNet101 and ResNet152 [34] CNN architectures. The authors used a self-generated dataset for their training. Evaluation was carried out on the UADFV and DF-TIMIT datasets. They showed that the AUC detection performance achieved by the generated VGG16, ResNet50, ResNet101 and ResNet152 models was 84.5%, 98.7%, 99.1%, and 97.8%, respectively, for the UADFV data, 84.6%, 99.9%, 97.6%, 99.4%, respectively, for the DF-TIMIT LQ data, and 57.4%, 93.2%, 86.9% and 91.2%, respectively, for the DF-TIMIT HQ data. They also trained the Meso-4 and MesoInception-4 [35] models and tested on the two datasets. However, the trained ResNet50 models were shown to outperform the other models for both datasets. In [9], the authors created CNN models based on the XceptionNet [32] architecture and the FF++ dataset. The trained models achieved detection accuracies of 99.26%, 95.73% and 81% for the FF++ raw, FF++ HQ and FF++ LQ test data, respectively. In [26], the authors utilized the VGG-Face [36] with ResNet50 architecture for their detection model creation. They monitored the neuron behaviors of deep face recognition to detect the fake faces. Evaluation on the FF++, DFDC and Celeb-DF datasets shows AUC performance of 98.5%, 68% and 66.8%, respectively. In [27], the authors addressed the detection problem as a multiple instance learning framework.
They treated the faces and video as instances and bag, respectively, and built direct mapping from the instance embeddings to the bag prediction. They utilized the XceptionNet architecture to create their detection models. Evaluation on the FF++ raw, FF++ HQ, FF++ LQ, DFDC Preview and Celeb-DF dataset showed detection accuracies of 99.82%, 98.39%, 92.76%, 85.11% and 98.84%, respectively. In [28], the authors explored two different methods. The first method was based on selecting the entire face as input to the detection. The second method was based on the selection of specific facial regions as inputs. They utilized the XceptionNet architecture to create their detection models. Evaluation of the face input trained models on the UADFV, FF++, DFDC Preview and Celeb-DF dataset showed AUC performance of 100%, 99.4%, 91.17% and 83.6%, respectively. Evaluation results of the eyes, nose, mouth and rest of face input trained models on the UADFV, FF++, DFDC Preview and Celeb-DF dataset are shown in Table 2. The face input trained models outperformed the specific facial region ones across all the four datasets. In [29], the authors leveraged on blending boundaries in forged face images and adopted the HRNet [37] architecture, to create the detection models. Training was carried out on the FF++ dataset, and tested on the FF++, Google DFD, DFDC Preview and Celeb-DF datasets, and the AUC performance was 98.52%, 95.4%, 80.92% and 80.58%, respectively. In [30], the authors exploited the source feature inconsistency within forged images to aid in their detection of deepfakes. They adopted the ResNet34 [34] architecture to generate their detection models. Evaluation on FF++, DFDC Preview and Celeb-DF datasets showed AUC performance of 99.79%, 94.38% and 99.98%, respectively. Cross-dataset evaluation was also carried out. Training was carried out using only the real videos in the FF++ dataset, and tested on the FF++, DFD, DFDC Preview, Celeb-DF, Deeper Forensics and DFDC datasets, and the AUC performance was 99.11%, 99.07%, 74.37%, 90.03%, 99.41% and 67.52%, respectively. In [14], the authors evaluated the top submissions to the DFDC challenge held on Kaggle. The models were trained and tested on the DFDC dataset. The first submission [38] used an ensemble of 7 detection models created based on the EfficientNet B7 [31] architecture, and achieved an AUC performance of 88.2% and log loss of 0.4279. The second submission [39] utilized the XceptionNet architecture and achieved an AUC performance of 88.3% and log loss of 0.4284. The third model [40] used an ensemble of 3 EfficientNet B7 architectures, and achieved an AUC performance of 88% and log loss of 0.4345. The evaluation metric commonly used across these prior works is the AUC performance metric. We compiled the AUC evaluation results according to the CNN architecture, training dataset and testing dataset, and presented them in Table 3. Both [9] and [27] utilized the XceptionNet architecture, but were excluded from this table as they used detection accuracy as their performance metric. However, the AUC evaluation results of XceptionNet based models which considered a wider coverage of datasets in a more recent work [28] helped fill this gap. Thus, we have included [28] in the table instead. From Table 3, we made the following key observations. ### _Key Observations_ * It can be challenging utilizing deep CNN architectures to train detection models on the first generation (Generation 1 / Gen-1) datasets. 
This is especially true for the UADFV and DF-TIMIT datasets due to their limited size. Thus, researchers often resort to generating their own training data, or devise detection solutions based on approaches that are less reliant on the requirement of having a sufficiently large dataset, which is needed for deep learning.
* The Gen-1 datasets have been widely explored in deepfake detection research. They have often been used as the training dataset (especially so for FF++) and/or testing dataset in various prior works, and the relative ease of detecting them has also been well proven, with AUC performance ranging from 98% to 100% across most prior works. Nonetheless, from these prior works, models trained using FF++ and tested on Gen-2 and Gen-3 datasets appear to be relatively weaker in performance.

\begin{table}
\begin{tabular}{|l|c|c|c|c|}
\hline
 & **Eyes** & **Nose** & **Mouth** & **Rest** \\
\hline
**UADFV** & 99.7\% & 94.7\% & 95.4\% & 97.3\% \\
\hline
**FF++** & 92.7\% & 86.3\% & 93.9\% & 85.5\% \\
\hline
**DFDC Preview** & 83.9\% & 81.5\% & 79.5\% & 76.5\% \\
\hline
**Celeb-DF** & 77.3\% & 64.9\% & 65.1\% & 60.1\% \\
\hline
\end{tabular}
\end{table} Table 2: AUC Detection Performance of specific facial region based Xception models by Tolosana et al. 2021

* The FF++ dataset was expanded in 2020 by including another 1000 deepfakes generated using the
FaceShifter technique [41]. We shall refer to this dataset as FF\(++\) 2020 in this paper. The referred works in this paper had utilized FF\(++\) or its subset.
* The Google DFD dataset was created with deepfake generation technologies that were not disclosed to the public, which could be a reason for it being excluded from consideration in most prior research. Even with some CNN architecture based prior works (trained using other datasets such as FF\(++\)) showing AUC performance exceeding 95%, this dataset warrants further exploration given how little it has been studied.
* The DFDC Preview dataset is an earlier release and subset of the DFDC dataset. Despite that, prior works have had challenges in achieving a consistently high detection rate when evaluated on the DFDC Preview dataset. During the Kaggle DFDC Challenge, most participants attempted to create ensembles of detection models based on deep CNN architectures to achieve a breakthrough in the detection rates, while trying to stay within the execution testing time limit of 9 hours. A more systematic analysis of CNN models on the DFDC dataset was not explored. Further work to develop and evaluate the efficacies of single models (instead of ensembled models) more thoroughly is needed.
* There is limited well-documented cross-dataset evaluation amongst the Gen-3 datasets. For example, there are few CNN based models trained using Gen-3 datasets and tested on other datasets, as well as models trained using other datasets and tested on Gen-3 datasets.

Based on these observations, we had included the FF\(++\) 2020, Google DFD, Celeb-DF, Deeper Forensics and DFDC datasets for our work in this paper.

\begin{table}
\begin{tabular}{|l|l|l|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{**Work**} & \multirow{2}{*}{**CNN Architecture**} & \multirow{2}{*}{**Training Dataset**} & \multicolumn{8}{c|}{**Testing Dataset (AUC Results in \%)**} \\
\cline{4-11}
 & & & **UADFV [7]** & **DF-TIMIT [8]** & **FF++ [9]** & **DFD [10]** & **DFDC Preview [11]** & **Celeb-DF [12]** & **Deeper Forensics [13]** & **DFDC [14]** \\
\hline
[25] & VGG16 & Self-generated & 84.5 & 84.6 & & & & & & \\
[25] & ResNet50 & Self-generated & 98.7 & 99.9 & & & & & & \\
[25] & ResNet101 & Self-generated & 99.1 & 97.6 & & & & & & \\
[25] & ResNet152 & Self-generated & 97.8 & 99.4 & & & & & & \\
\hline
[26] & ResNet50 & FF++, DFDC, Celeb-DF & & & 98.5 & & & 66.8 & & 68 \\
\hline
[28] & XceptionNet & & 100 & & 99.4 & & 91.17 & 83.6 & & \\
\hline
[29] & HRNet & FF++ & & & 98.52 & 95.4 & 80.92 & 80.58 & & \\
\hline
[30] & ResNet34 & FF++ & & & 99.79 & & & & & \\
[30] & ResNet34 & DFDC Preview & & & & & 94.38 & & & \\
[30] & ResNet34 & Celeb-DF & & & & & & 99.98 & & \\
[30] & ResNet34 & FF++ (Real) & & & 99.11 & 99.07 & 74.37 & 90.03 & 99.41 & 67.52 \\
\hline
[14] & EfficientNet B7 (7-model ensemble) & DFDC & & & & & & & & 88.2 \\
[14] & XceptionNet & DFDC & & & & & & & & 88.3 \\
[14] & EfficientNet B7 (3-model ensemble) & DFDC & & & & & & & & 88 \\
\hline
\end{tabular}
\end{table} Table 3: Deepfake Detection Results of Prior Works utilizing CNN Architectures

## III Evolution of Convolutional Neural Networks

In recent years, CNNs have been demonstrated to exhibit outstanding performance through their powerful learning ability in computer vision, image processing benchmarking competitions, and natural language processing tasks [42]. CNNs leverage multiple feature extraction stages to automatically learn data representations and have a strong ability to capture spatiotemporal signal dependences. Recent advancements focus on research on different activation and loss functions, parameter optimization, regularizations and, most importantly, on CNNs' architectural innovations. Significant improvements in the representational capacity of deep CNNs were demonstrated through architectural innovations. LeNet, the first CNN architecture, was introduced in 1989 [43]. It was a simple CNN which applied back propagation to handwritten zip code recognition.
Since then, other CNN architectures have been proposed: AlexNet [44], which won the classification and localization tasks at the Large Scale Visual Recognition Challenge 2012 [45] with a deeper CNN model and more channel consideration; InceptionNet [46], which incorporated multi-scale feature extraction and increased the model width with varying sizes of kernels in parallel (instead of only depth increase); and VGG (Visual Geometry Group at University of Oxford) [47], which used an architecture with very small (3x3) convolution filters and pushed the depth to 16-19 weight layers, showing significant improvements over prior works in 2014. In 2016, ResNet [34] was proposed to address the vanishing gradient problem with deeply stacked multi-layer CNNs. Residual connections were introduced to create alternate paths for the gradient to skip the middle layers and reach the initial layers, which allowed extremely deep models with good performance to be trained. In 2017, XceptionNet [32], inspired by InceptionNet, was proposed. In XceptionNet, the Inception modules were replaced by depthwise separable convolutions, and an evaluation on ImageNet showed that XceptionNet was able to achieve better performance than InceptionNet-V3. In 2019, EfficientNet [31] was proposed, based on the design principle of balancing the network depth, width and resolution, together with a method to uniformly scale all three dimensions using a compound coefficient. A family of eight model architectures, EfficientNet B0 to EfficientNet B7, scaled from the EfficientNet B0 baseline network, was created. The authors demonstrated that EfficientNet B7 achieved the highest top-1 accuracy of 84.3% compared to InceptionNet-V4 at 80.0%, XceptionNet at 79%, ResNet152 at 77.8%, and ResNet50 at 76%, on the ImageNet data. In 2020, HRNet [37] was proposed to maintain high-resolution representations to form a stronger backbone for computer vision problems that are position-sensitive. These computer vision problems include object detection, semantic segmentation and human pose estimation. The authors explained that this approach is unlike CNNs such as ResNet and VGG, where the input image was first encoded as a low-resolution representation through a sub-network formed by connecting high-to-low resolution convolutions in series, and the high-resolution representation was recovered from the encoded low-resolution representation. Instead, in HRNet, the high-to-low resolution convolution streams were connected in parallel, and information was exchanged repeatedly across the resolutions. The authors demonstrated that their work delivered top results in semantic segmentation and facial landmark detection, compared to prior works.

## IV Evolution of Transformers

In 2017, Transformers [48], primarily and initially created for the purpose of performing natural language processing (NLP) tasks, were proposed. However, Transformers saw limited applications to computer vision problems then. For such problems, attention was either applied with CNNs or to replace certain components of CNNs. The overall CNN architecture often remained intact. In 2020, the vision transformer, VIT [49], was proposed to demonstrate that the direct application of a pure transformer to sequences of image patches can perform well on image classification tasks. The authors demonstrated that when pre-trained on a large dataset and transferred to smaller image recognition tasks, the VIT model can attain excellent results compared to CNNs such as ResNet.
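To make the patch-tokenization idea behind VIT concrete, below is a minimal PyTorch sketch; it is an illustrative stand-in rather than code from the cited work, and the 16-pixel patch size and embedding width are arbitrary choices.

```python
import torch

def patchify(images, patch):
    """Split images into non-overlapping patches and flatten each one --
    the tokenization step that lets a pure Transformer process images."""
    B, C, H, W = images.shape
    x = images.unfold(2, patch, patch).unfold(3, patch, patch)  # (B, C, H/p, W/p, p, p)
    x = x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * patch * patch)
    return x                                                    # (B, num_patches, p*p*C)

images = torch.randn(2, 3, 224, 224)
tokens = patchify(images, patch=16)          # -> (2, 196, 768)
embed = torch.nn.Linear(768, 512)(tokens)    # linear projection to token embeddings
```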
In 2021, the Bidirectional Encoder representation from image Transformers (BEiT) [50], a self-supervised vision representation model, was proposed. The authors proposed pretraining the vision transformers with a masked image modeling task, where the original image is first tokenized into visual tokens, randomly masked and fed into the backbone transformer. The pre-training objective was to recover the original visual tokens based on the masked image patches. The model parameters were then fine-tuned on downstream tasks by appending task layers upon the pre-trained encoder. The authors demonstrated that the BEiT model can achieve 83.2% top-1 accuracy on ImageNet. In the same year, the Swin transformer [51], which built hierarchical feature maps through image patch merging in deeper layers, was proposed. It achieved linear computation complexity to input image size, as the self-attention was computed within each local window while allowing cross-window connection. This approach contrasted with previous vision transformers, which produced feature maps of a single low resolution and had quadratic computation complexity to input image size due to global self-attention computation. The authors demonstrated that the Swin model can achieve 84.5% top-1 accuracy, compared to EfficientNet B7 at 84.3%, and VIT at 77.9%, on ImageNet. The Class-attention in image Transformers (CaiT) [52] was also proposed to build and optimize deeper transformer networks, specifically for image classification. The authors added a learnable diagonal matrix on the output of each residual block, to allow training of deeper high-capacity image transformers that benefit from depth. They also separated the transformer layers involving self-attention between patches from the class-attention layers, which extract the content of the processed patches into a single vector, to avoid the contradictory objective of guiding the attention process while processing the class embedding. The authors demonstrated that the CaiT model can attain 86.5% top-1 accuracy on ImageNet.

## V Proposed Work and Experimental Results

In this work, we proposed the following approach for the development and evaluation of our deepfake detection deep learning models. We first performed a dataset split of each of the five datasets into the respective non-overlapping training, validation and testing data. Except for the DFDC dataset, where the training, validation and testing splits were already provided, we split the videos in the other datasets to have at least 10% for validation and 20% for testing, leaving around 70% for training. We also ensured that specific source videos used to create deepfake videos were kept within the same split. Thus, the resulting percentage splits differ slightly between real and fake videos, while keeping to the minimum percentages mentioned above. Next, we extracted the frames from the real and deepfake videos, and performed facial detection and extraction, to create balanced real-versus-fake sets in the training and validation datasets. The testing dataset was retained as video data; during the detection process, the video frames and facial regions were extracted to obtain the test results. The results across the analysed frames were averaged for each video. The splits of the videos in each dataset are shown in Table 4.
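The per-video scoring described above can be sketched as follows; `extract_frames`, `detect_face`, and `frame_model` are hypothetical placeholders (any frame sampler, face detector, and trained frame-level classifier would do), not names from the actual implementation.

```python
import numpy as np

def predict_video(video_path, frame_model, extract_frames, detect_face):
    """Return a video-level deepfake score by averaging frame-level
    predictions, mirroring the evaluation protocol described above."""
    scores = []
    for frame in extract_frames(video_path):   # sample frames from the video
        face = detect_face(frame)              # crop the facial region
        if face is None:                       # skip frames with no detected face
            continue
        scores.append(frame_model(face))       # per-frame probability of being fake
    # average across the analysed frames; fall back to 0.5 if no face was found
    return float(np.mean(scores)) if scores else 0.5
```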
We chose ResNet152, XceptionNet, EfficientNet B7 and HRNet as the CNN architectures, and VIT, BEiT, Swin and CaiT as the Transformers, as the underlying architectures for creating our detection models. For each architecture, we implemented the same data pre-processing and training process, and trained each deep learning model for 50 epochs. Finally, we selected the model with the highest validation accuracy under each architecture for evaluation with the testing dataset. We also performed cross datasets evaluation tests. The balanced accuracy and AUC results are shown in Table 5 and Table 6, respectively. Models trained on the DFDC dataset performed very well on the Google DFD test dataset. The ResNet152, EfficientNet B7, BEiT and Swin models achieved 90.51% to 96.2% accuracies and 96.94% to 99.76% AUC when tested on the Google DFD test dataset. The eight models achieved an average 88.46% accuracy and 96.66% AUC when tested on the Google DFD dataset. The models, except for CaiT, also performed well on the Celeb-DF test dataset, achieving 75.12% to 89.27% accuracies and 90.21% to 97.14% AUC. The EfficientNet B7, VIT, BEiT and Swin models achieved 70.26% to 79.4% accuracies and 93.74% to 98.36% AUC when tested on the Deeper Forensics test dataset. Thus, models trained with the DFDC dataset were able to perform well across all the other test datasets, except for FF++ 2020.

Table 5: Balanced Accuracy Results of Our Detection Models

\begin{table}
\begin{tabular}{|l|l|c|c|c|c|c|}
\hline
\multirow{2}{*}{**Train Data**} & \multirow{2}{*}{**DL Model**} & \multicolumn{5}{c|}{**Test Data (AUC Results in \%)**} \\
\cline{3-7}
 & & **FF++ 2020** & **Google DFD** & **Celeb-DF** & **Deeper Forensics** & **DFDC** \\
\hline
\multirow{8}{*}{FF++ 2020} & ResNet & 99.21 & 87.59 & 78.59 & 42.42 & 68.37 \\
 & Xception & 99.70 & 91.24 & 84.20 & 39.44 & 65.18 \\
 & Efficient & 99.45 & 92.04 & **87.54** & 46.88 & 68.44 \\
 & HRNet & **99.95** & 92.27 & 77.93 & 16.98 & 62.26 \\
 & VIT & 92.29 & 84.83 & 84.35 & **86.02** & **76.30** \\
 & BEiT & 98.76 & 92.54 & 84.80 & 73.63 & 70.41 \\
 & Swin & 98.81 & 92.25 & 72.57 & 18.51 & 65.70 \\
 & CaiT & 99.14 & **94.36** & 84.52 & 74.46 & 62.38 \\
\hline
\multirow{8}{*}{Google DFD} & ResNet & 60.46 & 99.91 & 79.60 & 85.62 & 69.67 \\
 & Xception & 57.37 & 99.90 & 90.55 & 92.89 & **78.48** \\
 & Efficient & 63.34 & 99.94 & 86.71 & 75.29 & 73.44 \\
 & HRNet & 61.80 & **100.00** & **92.61** & 83.39 & 74.62 \\
 & VIT & 62.60 & 98.11 & 81.97 & 88.68 & 72.56 \\
 & BEiT & 63.82 & **100.00** & 90.05 & 93.20 & 76.79 \\
 & Swin & **67.35** & 99.63 & 86.84 & 78.64 & 71.70 \\
 & CaiT & 58.84 & 99.92 & 88.91 & **96.03** & 70.96 \\
\hline
\multirow{8}{*}{Celeb-DF} & ResNet & 65.68 & 86.30 & 99.65 & 82.12 & 70.24 \\
 & Xception & 65.61 & 89.32 & 99.77 & 85.76 & **70.46** \\
 & Efficient & 65.88 & 88.55 & **99.88** & 76.26 & 69.42 \\
 & HRNet & 64.12 & 81.76 & 99.87 & 71.13 & 67.25 \\
 & VIT & 65.08 & 79.87 & 97.54 & **88.32** & 69.12 \\
 & BEiT & 63.41 & 88.09 & 98.37 & 71.34 & 65.85 \\
 & Swin & **66.52** & **91.20** & 98.84 & 74.48 & 63.91 \\
 & CaiT & 61.66 & 84.14 & 96.76 & 65.45 & 59.42 \\
\hline
\multirow{8}{*}{Deeper Forensics} & ResNet & 41.43 & 54.56 & 50.32 & 99.99 & 55.69 \\
 & Xception & 40.36 & 49.28 & **55.71** & 99.99 & 53.95 \\
 & Efficient & 40.88 & 53.79 & 50.99 & 99.99 & 56.65 \\
 & HRNet & 40.71 & **58.18** & 52.76 & 99.99 & 58.60 \\
 & VIT & **43.46** & 49.52 & 48.25 & 99.99 & 59.14 \\
 & BEiT & 41.30 & 52.50 & 49.38 & 99.99 & **60.74** \\
 & Swin & 38.79 & 51.85 & 48.93 & 99.99 & 53.15 \\
 & CaiT & 41.94 & 56.94 & 54.19 & 99.99 & 59.85 \\
\hline
\multirow{8}{*}{DFDC} & ResNet & 73.34 & 96.94 & 90.21 & 95.33 & 94.81 \\
 & Xception & 71.19 & 95.69 & 95.21 & 93.02 & 95.96 \\
 & Efficient & **74.22** & 98.42 & **97.14** & 93.74 & **97.61** \\
 & HRNet & 69.71 & 94.85 & 95.57 & 90.78 & 94.97 \\
 & VIT & 69.39 & 92.51 & 93.35 & **98.36** & 93.41 \\
 & BEiT & 63.89 & 99.42 & 96.22 & 97.37 & 95.90 \\
 & Swin & 72.02 & **99.76** & 92.20 & 95.60 & 92.62 \\
 & CaiT & 70.06 & 95.72 & 89.87 & 95.01 & 85.13 \\
\hline
\end{tabular}
\end{table} Table 6: AUC Results of Our Detection Models

**Further Analysis**

In general, the HRNet, XceptionNet and EfficientNet B7 CNN models performed very well consistently when trained
and tested on the same dataset. However, HRNet models did not work very well on cross datasets evaluations. On the other hand, the XceptionNet models, and the VIT, BEiT and Swin Transformers models, performed better in cross datasets evaluations. Thus, the CNN models did better in the same train-to-test dataset evaluations, while the Transformers models performed better in the cross datasets evaluations. Overall, the XceptionNet models did well in both aspects. Based on the cross datasets evaluations and overall models' performance, we observed closer relationships between the FF\(++\) 2020 and Celeb-DF datasets, and between the Google DFD and Celeb-DF datasets. However, even though the models trained on FF\(++\) 2020 worked very well on the Celeb-DF test dataset, the reverse did not hold. This implied that the relationship was one-directional, and that the FF\(++\) 2020 dataset could be more diverse and possess more super-set characteristics than the Celeb-DF dataset. This property may have been made prominent with the updated version of FF\(++\) in 2020. Models trained on the Deeper Forensics dataset fared poorly when tested on the other datasets. Some models trained using the DFDC dataset worked well when tested on the Deeper Forensics test dataset. Models trained on the DFDC dataset also performed very well on the other datasets, except for the FF\(++\) 2020 test dataset. This implied that even as the DFDC dataset was quite diverse and comprehensive, it was still distinct from the FF\(++\) 2020 dataset.
Thus, the Deeper Forensics, DFDC and FF\(++\) 2020 datasets remain important and warrant further investigation in future works, along with newer and more sophisticated deepfake datasets.

## VI Conclusions

In this work, we explored the second and third generation public deepfake datasets, and investigated how the evolving deep learning landscape can benefit deepfake detection. We designed and developed single model deepfake detectors based on eight different deep learning architectures (four CNNs and four Transformers) and conducted same train-to-test and cross datasets evaluations. With our detection models, we achieved 88.74%, 99.53%, 97.68%, 99.73% and 92.02% accuracy and 99.95%, 100%, 99.88%, 99.99% and 97.61% AUC, in the detection of FF\(++\) 2020, Google DFD, Celeb-DF, Deeper Forensics and DFDC deepfakes, respectively. We observed that the CNN models did better in same train-to-test dataset evaluations, and the Transformers models did better in cross datasets evaluations. Based on further cross datasets evaluations and overall models' performance analysis, we showed the relationships among the FF\(++\) 2020, Google DFD and Celeb-DF datasets. We also observed the strengths and uniqueness of the Deeper Forensics, DFDC and FF\(++\) 2020 datasets and how they may continue to be important and warrant further investigation in future works, along with newer and more sophisticated deepfake datasets.
2303.01028
Specformer: Spectral Graph Neural Networks Meet Transformers
Spectral graph neural networks (GNNs) learn graph representations via spectral-domain graph convolutions. However, most existing spectral graph filters are scalar-to-scalar functions, i.e., mapping a single eigenvalue to a single filtered value, thus ignoring the global pattern of the spectrum. Furthermore, these filters are often constructed based on some fixed-order polynomials, which have limited expressiveness and flexibility. To tackle these issues, we introduce Specformer, which effectively encodes the set of all eigenvalues and performs self-attention in the spectral domain, leading to a learnable set-to-set spectral filter. We also design a decoder with learnable bases to enable non-local graph convolution. Importantly, Specformer is equivariant to permutation. By stacking multiple Specformer layers, one can build a powerful spectral GNN. On synthetic datasets, we show that our Specformer can better recover ground-truth spectral filters than other spectral GNNs. Extensive experiments of both node-level and graph-level tasks on real-world graph datasets show that our Specformer outperforms state-of-the-art GNNs and learns meaningful spectrum patterns. Code and data are available at https://github.com/bdy9527/Specformer.
Deyu Bo, Chuan Shi, Lele Wang, Renjie Liao
2023-03-02T07:36:23Z
http://arxiv.org/abs/2303.01028v1
# Specformer: Spectral Graph Neural Networks Meet Transformers ###### Abstract Spectral graph neural networks (GNNs) learn graph representations via spectral-domain graph convolutions. However, most existing spectral graph filters are scalar-to-scalar functions, i.e., mapping a single eigenvalue to a single filtered value, thus ignoring the global pattern of the spectrum. Furthermore, these filters are often constructed based on some fixed-order polynomials, which have limited expressiveness and flexibility. To tackle these issues, we introduce Specformer, which effectively encodes the set of all eigenvalues and performs self-attention in the spectral domain, leading to a learnable set-to-set spectral filter. We also design a decoder with learnable bases to enable non-local graph convolution. Importantly, Specformer is equivariant to permutation. By stacking multiple Specformer layers, one can build a powerful spectral GNN. On synthetic datasets, we show that our Specformer can better recover ground-truth spectral filters than other spectral GNNs. Extensive experiments of both node-level and graph-level tasks on real-world graph datasets show that our Specformer outperforms state-of-the-art GNNs and learns meaningful spectrum patterns. Code and data are available at [https://github.com/bdy9527/Specformer](https://github.com/bdy9527/Specformer). ## 1 Introduction Graph neural networks (GNNs), firstly proposed in (Scarselli et al., 2008), become increasingly popular in the field of machine learning due to their empirical successes. Depending on how the graph signals (or features) are leveraged, GNNs can be roughly categorized into two classes, namely spatial GNNs and spectral GNNs. Spatial GNNs often adopt a message passing framework (Gilmer et al., 2017; Battaglia et al., 2018), which learns useful graph representations via propagating local information on graphs. Spectral GNNs (Bruna et al., 2013; Defferrard et al., 2016) instead perform graph convolutions via spectral filters (_i.e._, filters applied to the spectrum of the graph Laplacian), which can learn to capture non-local dependencies in graph signals. Although spatial GNNs have achieved impressive performances in many domains, spectral GNNs are somewhat under-explored. There are a few reasons why spectral GNNs have not been able to catch up. First, most existing spectral filters are essentially scalar-to-scalar functions. In particular, they take a single eigenvalue as input and apply the same filter to all eigenvalues. This filtering mechanism could ignore the rich information embedded in the spectrum, _i.e._, the set of eigenvalues. For example, we know from the spectral graph theory that the algebraic multiplicity of the eigenvalue \(0\) tells us the number of connected components in the graph. However, such information can not be captured by scalar-to-scalar filters. Second, the spectral filters are often approximated via fixed-order (or truncated) orthonormal bases, _e.g._, Chebyshev polynomials (Defferrard et al., 2016; He et al., 2022) and graph wavelets (Hammond et al., 2011; Xu et al., 2019), in order to avoid the costly spectral decomposition of the graph Laplacian. Although the orthonormality is a nice property, this truncated approximation is less expressive and may severely limit the graph representation learning. 
Therefore, in order to improve spectral GNNs, it is natural to ask: _how can we build expressive spectral filters that can effectively leverage the spectrum of graph Laplacian?_ To answer this question, we first note that eigenvalues of graph Laplacian represent the frequency, _i.e._, total variation of the corresponding eigenvectors. The magnitudes of frequencies thus convey rich information. Moreover, the relative difference between two eigenvalues also reflects important frequency information, _e.g._, the spectral gap. To capture both magnitudes of frequency and relative frequency, we propose a Transformer (Vaswani et al., 2017) based set-to-set spectral filter, termed _Specformer_. Our Specformer first encodes the range of eigenvalues via positional embedding and then exploits the self-attention mechanism to learn relative information from the set of eigenvalues. Relying on the learned representations of eigenvalues, we also design a decoder with a bank of learnable bases. Finally, by combining these bases, Specformer can construct a permutation-equivariant and non-local graph convolution. In summary, our contributions are as follows: * We propose a novel Transformer-based set-to-set spectral filter along with learnable bases, called Specformer, which effectively captures both magnitudes and relative differences of all eigenvalues of the graph Laplacian. * We show that Specformer is permutation equivariant and can perform non-local graph convolutions, which is non-trivial to achieve in many spatial GNNs. * Experiments on synthetic datasets show that Specformer learns to better recover the given spectral filters than other spectral GNNs. * Extensive experiments on various node-level and graph-level benchmarks demonstrate that Specformer outperforms state-of-the-art GNNs and learns meaningful spectrum patterns. ## 2 Related Work Existing GNNs can be roughly divided into two categories: spatial and spectral GNNs. Spatial GNNs.Spatial GNNs like GAT (Velickovic et al., 2018) and MPNN (Gilmer et al., 2017) leverage message passing to aggregate local information from neighborhoods. By stacking multiple layers, spatial GNNs can possibly learn long-range dependencies but suffer from over-smoothing (Oono and Suzuki, 2020) and over-squashing (Topping et al., 2022). Therefore, how to balance local and global information is an important research topic for spatial GNNs. We refer readers to (Wu et al., 2021; Zhou et al., 2020; Liao, 2021) for a more detailed discussion about spatial GNNs. Spectral GNNs.Spectral GNNs (Ortega et al., 2018; Dong et al., 2020; Wu et al., 2019; Zhu et al., 2021; Bo et al., 2021; Chang et al., 2021; Yang et al., 2022) leverage the spectrum of graph Laplacian to perform convolutions in the spectral domain. A popular subclass of spectral GNNs leverages different kinds of orthogonal polynomials to approximate arbitrary filters, including Monomial (Chien et al., 2021), Chebyshev (Defferrard et al., 2016; Kipf and Welling, 2017; He et al., 2022), Bernstein (He et al., 2021), and Jacobi (Wang and Zhang, 2022). Relying on the diagonalization of symmetric matrices, they avoid direct spectral decomposition and guarantee localization. However, all such polynomial filters are scalar-to-scalar functions, and the bases are pre-defined, which limits their expressiveness. Another subclass requires either full or partial spectral decomposition, such as SpectralCNN (Estrach et al., 2014) and LanczosNet (Liao et al., 2019). 
They parameterize the spectral filters by neural networks, thus being more expressive than truncated polynomials. However, such spectral filters are still limited as they do not capture the dependencies among multiple eigenvalues.

Graph Transformer. Transformers and GNNs are closely relevant since the attention weights of Transformer can be seen as a weighted adjacency matrix of a fully connected graph. Graph Transformers (Dwivedi and Bresson, 2020) combine both and have gained popularity recently. Graphormer (Ying et al., 2022), SAN (Kreuzer et al., 2021), and GPS (Rampasek et al., 2022) design powerful positional and structural embeddings to further improve their expressive power. Graph Transformers still belong to spatial GNNs, although the high-cost self-attention is non-local. The limitation of spatial attention compared to spectral attention has been discussed in (Bastos et al., 2022).

## 3 Background

In this section, we introduce some preliminaries of graph signal processing (GSP) (Ortega et al., 2018) and Transformer (Vaswani et al., 2017).

Preliminary. Assume that we have a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), where \(\mathcal{V}\) denotes the node set with \(|\mathcal{V}|=n\) and \(\mathcal{E}\) is the edge set. The corresponding adjacency matrix \(\mathbf{A}\in\{0,1\}^{n\times n}\), where \(A_{ij}=1\) if there is an edge between nodes \(i\) and \(j\), and \(A_{ij}=0\) otherwise. The normalized graph Laplacian matrix is defined as \(\mathbf{L}=\mathbf{I}_{n}-\mathbf{D}^{-1/2}\mathbf{A}\mathbf{D}^{-1/2}\), where \(\mathbf{I}_{n}\) is the \(n\times n\) identity matrix and \(\mathbf{D}\) is the diagonal degree matrix with diagonal entries \(D_{ii}=\sum_{j}A_{ij}\) for all \(i\in\mathcal{V}\) and off-diagonal entries \(D_{ij}=0\) for \(i\neq j\). We assume \(\mathcal{G}\) is undirected. Hence, \(\mathbf{L}\) is a real symmetric matrix, whose spectral decomposition can be written as \(\mathbf{L}=\mathbf{U}\mathbf{\Lambda}\mathbf{U}^{\top}\), where the columns of \(\mathbf{U}\) are the eigenvectors and \(\mathbf{\Lambda}=\text{diag}([\lambda_{1},\lambda_{2},\ldots,\lambda_{n}])\) are the corresponding eigenvalues, lying in \([0,2]\).

Graph Signal Processing (GSP). Spectral GNNs rely on several important concepts from GSP, namely, spectral filtering, the graph Fourier transform and its inverse. The graph Fourier transform is written as \(\hat{\mathbf{x}}=\mathbf{U}^{\top}\mathbf{x}\), where \(\mathbf{x}\in\mathbb{R}^{n\times 1}\) is a graph signal and \(\hat{\mathbf{x}}\in\mathbb{R}^{n\times 1}\) represents the Fourier coefficients. Then a spectral filter \(G_{\theta}\) is used to scale \(\hat{\mathbf{x}}\). Finally, the inverse graph Fourier transform is applied to yield the filtered signal in the spatial domain, \(\tilde{\mathbf{x}}=\mathbf{U}G_{\theta}\hat{\mathbf{x}}\). The key task in GSP is to design a powerful spectral filter \(G_{\theta}\) so that we can exploit the useful frequency information.
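A minimal NumPy sketch of this pipeline, assuming an undirected graph with no isolated nodes; the low-pass filter \(g(\lambda)=\exp(-10\lambda^{2})\) used at the end is only an example choice.

```python
import numpy as np

def spectral_filter(A, x, g):
    """Filter a graph signal x: build the normalized Laplacian, take the
    graph Fourier transform, scale the coefficients by g(lambda), and
    transform back to the spatial domain."""
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    L = np.eye(len(A)) - D_inv_sqrt @ A @ D_inv_sqrt  # L = I - D^{-1/2} A D^{-1/2}
    lam, U = np.linalg.eigh(L)     # eigenvalues in [0, 2], eigenvectors as columns
    x_hat = U.T @ x                # graph Fourier transform
    return U @ (g(lam) * x_hat)    # filter, then inverse transform

A = np.array([[0, 1, 0, 1],       # a toy 4-cycle graph
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
x = np.array([1.0, -1.0, 2.0, 0.0])
y = spectral_filter(A, x, lambda lam: np.exp(-10 * lam**2))
```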
Transformer. Transformer is a powerful deep learning model, which is widely used in natural language processing (Devlin et al., 2019), vision (Dosovitskiy et al., 2021), and graphs (Ying et al., 2022; Rampasek et al., 2022). Each Transformer layer consists of two components: a multi-head self-attention (MHA) module and a token-wise feed-forward network (FFN). Given the input representations \(\mathbf{H}=[\mathbf{h}_{1}^{\top},\ldots,\mathbf{h}_{n}^{\top}]\in\mathbb{R}^{n\times d}\), where \(d\) is the hidden dimension, MHA first projects \(\mathbf{H}\) into query, key and value through three matrices (\(\mathbf{W}^{Q}\), \(\mathbf{W}^{K}\) and \(\mathbf{W}^{V}\)) to calculate attention. An FFN is then used to add a transformation. The model can be written as follows, where we denote the query dimension as \(d_{q}\). For simplicity, we omit the bias and the description of multi-head attention. \[\text{Attention}(\mathbf{Q},\mathbf{K},\mathbf{V})=\text{Softmax}(\frac{\mathbf{Q}\mathbf{K}^{\top}}{\sqrt{d_{q}}})\mathbf{V}, \tag{1}\] \[\mathbf{Q}=\mathbf{H}\mathbf{W}^{Q},\;\mathbf{K}=\mathbf{H}\mathbf{W}^{K},\;\mathbf{V}=\mathbf{H}\mathbf{W}^{V}.\]

## 4 Specformer

In this section, we introduce our Specformer model. Fig. 1 illustrates the model architecture.

Figure 1: Illustration of the proposed Specformer.

We first explain how we encode the eigenvalues and use Transformer to capture their dependencies to yield useful representations. Then we turn to the decoder, which learns new eigenvalues from the representations and reconstructs the graph Laplacian matrix for graph convolution. Finally, we discuss the relationship between our Specformer and other methods, including MPNNs, polynomial GNNs, and graph Transformers.

### Eigenvalue Encoding

We design a powerful set-to-set spectral filter using Transformer to leverage both magnitudes and relative differences of all eigenvalues. However, the expressiveness of self-attention will be restricted heavily if we directly use the scalar eigenvalues to calculate the attention maps. Therefore, it is important to find a suitable function, \(\rho(\lambda):\mathbb{R}^{1}\rightarrow\mathbb{R}^{d}\), to map each eigenvalue from a scalar to a meaningful vector. We use an eigenvalue encoding function as follows: \[\rho(\lambda,2i)=\sin(\epsilon\lambda/10000^{2i/d}), \tag{2}\] \[\rho(\lambda,2i+1)=\cos(\epsilon\lambda/10000^{2i/d}),\] where \(i\) is the dimension of the representations and \(\epsilon\) is a hyperparameter. The benefits of \(\rho(\lambda)\) are three-fold: (1) It can capture the relative frequency shifts of eigenvalues and provides high-dimensional vector representations. (2) It has wavelengths from \(2\pi\) to \(10000\cdot 2\pi\), which forms a multi-scale representation of the eigenvalues. (3) It can control the influence of \(\lambda\) by adjusting the value of \(\epsilon\). The choice of \(\epsilon\) is crucial because we find that only the first few dimensions of \(\rho(\lambda)\) can distinguish different eigenvalues if we simply set \(\epsilon=1\). The reason is that the eigenvalues lie in the range \([0,2]\), and the value of \(\lambda/10000^{2i/d}\) changes only slightly when \(i\) becomes large. Therefore, it is important to assign a large value to \(\epsilon\) to enlarge the influence of \(\lambda\). Experiments can be seen in Appendix C.1. Notably, although the eigenvalue encoding (EE) is similar to the positional encoding (PE) of Transformer, they act quite differently. PE describes the information of discrete positions in the spatial domain, while EE represents the information of continuous eigenvalues in the spectral domain. Applying PE to the spatial positions (_i.e._, indices) of eigenvalues would destroy the permutation equivariance property, thereby impairing the learning ability.
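A small NumPy sketch of Eq. (2); for simplicity the sine and cosine halves are concatenated rather than interleaved over even/odd dimensions (equivalent up to a coordinate permutation), and the value \(\epsilon=100\) is purely illustrative.

```python
import numpy as np

def eigenvalue_encoding(lam, d=16, eps=100.0):
    """Map each scalar eigenvalue to a d-dimensional multi-scale
    sinusoidal vector, following Eq. (2)."""
    i = np.arange(d // 2)
    phase = eps * lam[:, None] / (10000.0 ** (2 * i / d))           # (n, d/2)
    return np.concatenate([np.sin(phase), np.cos(phase)], axis=-1)  # (n, d)

lam = np.linspace(0.0, 2.0, 5)  # an example spectrum in [0, 2]
# initial representations Z: each eigenvalue concatenated with its encoding
Z = np.concatenate([lam[:, None], eigenvalue_encoding(lam)], axis=-1)  # (n, d+1)
```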
The initial representations of eigenvalues are the concatenation of eigenvalues and their encodings, \(\mathbf{Z}=[\lambda_{1}\|\rho(\lambda_{1}),\cdots,\lambda_{n}\|\rho(\lambda_{n})]^{\top}\in\mathbb{R}^{n\times(d+1)}\). Then a standard Transformer block is used to learn the dependency between eigenvalues. We first apply layer normalization (LN) on the representations before feeding them into other sub-layers, _i.e._, MHA and FFN. This _pre-norm_ trick has been widely used to improve the optimization of Transformer (Ying et al., 2022): \[\tilde{\mathbf{Z}}=\text{MHA}(\text{LN}(\mathbf{Z}))+\mathbf{Z}, \tag{3}\] \[\hat{\mathbf{Z}}=\text{FFN}(\text{LN}(\tilde{\mathbf{Z}}))+\tilde{\mathbf{Z}}.\] After stacking multiple Transformer blocks, we obtain expressive representations of the spectrum.

### Eigenvalue Decoding

Based on the representations returned by the encoder, the decoder can learn new eigenvalues for spectral filtering. Recent studies (Yang et al., 2022; Wang and Zhang, 2022) show that assigning each feature dimension a separate spectral filter improves the performance of GNNs. Motivated by this discovery, our decoder first decodes several bases. An FFN is then used to combine these bases to construct the final graph convolution.

Spectral filters. In general, the bases should learn to cover as much of the graph signal space as possible. For this purpose, we utilize the multi-head attention mechanism because each head has its own self-attention module. Specifically, the representations learned by each head are fed into the decoder to perform spectral filtering and obtain the new eigenvalues: \[\mathbf{Z}_{m}=\text{Attention}(\mathbf{Q}\mathbf{W}_{m}^{Q},\mathbf{K}\mathbf{W}_{m}^{K},\mathbf{V}\mathbf{W}_{m}^{V}),\quad\mathbf{\lambda}_{m}=\phi(\mathbf{Z}_{m}\mathbf{W}_{\lambda}), \tag{4}\] where \(\mathbf{Z}_{m}\) denotes the representations learned by the \(m\)-th head, and \(\phi\) is an optional activation, _e.g._, ReLU or Tanh. \(\mathbf{\lambda}_{m}\in\mathbb{R}^{n\times 1}\) is the \(m\)-th set of eigenvalues after spectral filtering.

Learnable bases. After obtaining the \(M\) filtered spectra, we use an FFN: \(\mathbb{R}^{M+1}\rightarrow\mathbb{R}^{d}\) to construct the learnable bases. We first reconstruct the individual new bases, concatenate them along the channel dimension, and feed them to the FFN as below: \[\mathbf{S}_{m}=\mathbf{U}\text{diag}(\mathbf{\lambda}_{m})\mathbf{U}^{\top},\quad\hat{\mathbf{S}}=\text{FFN}([\mathbf{I}_{n}||\mathbf{S}_{1}||\cdots||\mathbf{S}_{M}]), \tag{5}\] where \(\mathbf{S}_{m}\in\mathbb{R}^{n\times n}\) is the \(m\)-th new basis and \(\hat{\mathbf{S}}\in\mathbb{R}^{n\times n\times d}\) is the combined version. Note that our bases here serve a similar purpose as the polynomial bases in the literature, but the way they are combined is learned rather than following certain recursions as in Chebyshev polynomials. We have three optional ways to use this design of new bases. (1) Shared filters and shared FFN. This model has the fewest parameters; the basis \(\hat{\mathbf{S}}\) is shared across all graph convolutional layers. (2) Shared filters and layer-specific FFN, which compromises between parameters and performance, _e.g._, \(\hat{\mathbf{S}}^{(l)}=\text{FFN}^{(l)}([\mathbf{I}_{n}||\mathbf{S}_{1}||\cdots||\mathbf{S}_{M}])\), where the superscript \(l\) denotes the index of the layer. (3) Layer-specific filters and layer-specific FFN. This model has the most parameters and each layer has its own encoder and decoder, _e.g._, \(\hat{\mathbf{S}}^{(l)}=\text{FFN}^{(l)}([\mathbf{I}_{n}||\mathbf{S}_{1}^{(l)}||\cdots||\mathbf{S}_{M}^{(l)}])\). We refer to these three models as Specformer-Small, Specformer-Medium, and Specformer-Large.
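A PyTorch sketch of Eqs. (4)-(5): given the \(M\) filtered spectra produced by the attention heads, rebuild the matrices \(\mathbf{S}_{m}\) and let a channel-wise FFN combine them. The inputs below are random stand-ins, and the FFN is simplified to a single linear map over the channel dimension.

```python
import torch

def learnable_bases(U, filtered_lams, ffn):
    """Combine M filtered spectra into the learnable basis S_hat (Eq. 5).
    U: (n, n) eigenvector matrix; filtered_lams: list of M tensors of
    shape (n,); ffn: module mapping M+1 channels to d channels."""
    n = U.shape[0]
    channels = [torch.eye(n)]                          # the identity channel I_n
    for lam_m in filtered_lams:
        channels.append(U @ torch.diag(lam_m) @ U.T)   # S_m = U diag(lam_m) U^T
    S = torch.stack(channels, dim=-1)                  # (n, n, M+1)
    return ffn(S)                                      # (n, n, d)

n, M, d = 6, 4, 8
U = torch.linalg.qr(torch.randn(n, n)).Q   # stand-in orthogonal matrix
lams = [torch.rand(n) for _ in range(M)]   # stand-ins for the decoded spectra
S_hat = learnable_bases(U, lams, torch.nn.Linear(M + 1, d))
```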
### Graph Convolution

Finally, we assign each feature dimension a separate graph Laplacian matrix based on the learned basis \(\hat{\mathbf{S}}\), which can be written as follows:

\[\hat{\mathbf{X}}^{(l-1)}_{:,i}=\hat{\mathbf{S}}_{:,:,i}\mathbf{X}^{(l-1)}_{:,i},\quad\mathbf{X}^{(l)}=\sigma\left(\hat{\mathbf{X}}^{(l-1)}\mathbf{W}^{(l-1)}_{x}\right)+\mathbf{X}^{(l-1)}, \tag{6}\]

where \(\mathbf{X}^{(l)}\) denotes the node representations in the \(l\)-th layer, \(\hat{\mathbf{X}}^{(l-1)}_{:,i}\) is the \(i\)-th channel dimension, \(\mathbf{W}^{(l-1)}_{x}\) is the transformation, and \(\sigma\) is the activation. The residual connection is optional. By stacking multiple graph convolutional layers, Specformer can effectively learn node representations.

### Key Properties Compared to Related Models

Specformer _vs._ Polynomial GNNs. Specformer replaces the fixed bases of polynomials, _e.g._, \(\lambda,\lambda^{2},\cdots,\lambda^{k}\), with learnable bases, which has two major advantages: (1) Universality. Polynomial GNNs are special cases of Specformer, because the learnable bases can approximate any polynomial. (2) Flexibility. Polynomial GNNs are designed to learn a shared function for all eigenvalues, whereas Specformer can learn eigenvalue-specific functions, and is thus more flexible.

Specformer _vs._ MPNNs. MPNNs aggregate local information from the neighborhood, one hop per layer. This localization makes MPNNs computationally efficient but weakens their ability to capture global information. Specformer is inherently non-local due to the use of (often dense) eigenvectors. The learned graph Laplacian is \(\mathbf{U}G_{\theta}\mathbf{U}^{\top}=\theta_{1}\mathbf{u}_{1}\mathbf{u}_{1}^{\top}+\cdots+\theta_{n}\mathbf{u}_{n}\mathbf{u}_{n}^{\top}\). Because \(\mathbf{u}_{i}\) is an eigenvector, \(\mathbf{u}_{i}\mathbf{u}_{i}^{\top}\) constructs a fully-connected graph. Therefore, Specformer can break the localization of MPNNs and leverage global information.

Specformer _vs._ Graph Transformers. Graphormer (Ying et al., 2022) has shown that graph Transformers can perform well on graph-level tasks. However, existing graph Transformers are not competitive on node-level tasks, _e.g._, node classification. Recent studies (Bastos et al., 2022; Wang et al., 2022; Shi et al., 2022) provide some evidence for this phenomenon: they show that Transformer is essentially a low-pass filter. Therefore, graph Transformers cannot handle complex node label distributions, _e.g._, homophilic and heterophilic. On the contrary, as we will see in the experiment section, Specformer can learn arbitrary bases for graph convolution and performs well on both node-level and graph-level tasks.

Besides the above advantages, we show that Specformer has the following theoretical properties.

**Proposition 1**.: _Specformer is permutation equivariant._

**Proposition 2**.: _Specformer can approximate any univariate and multivariate continuous functions._

Proposition 1 shows that Specformer can learn permutation-equivariant node representations. Proposition 2 states that Specformer is more expressive than other graph filters. First, the ability to approximate any univariate function generalizes existing scalar-to-scalar filters. Moreover, Specformer can handle multiple eigenvalues and learn multivariate functions, so it can approximate a broader range of filter functions than scalar-to-scalar filters. All proofs are provided in Appendix D.
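A minimal sketch of the channel-wise convolution in Eq. (6) follows; the function name and shapes are our own illustrative assumptions, and the residual and activation choices follow the equation above.

```python
import torch

def spec_conv(X, S_hat, W_x, act=torch.relu):
    """Sketch of Eq. (6): filter each feature channel with its own basis.

    X: (n, d) node features; S_hat: (n, n, d) learned bases; W_x: (d, d).
    """
    # Channel i is filtered by its own Laplacian S_hat[:, :, i].
    X_filt = torch.einsum('nmi,mi->ni', S_hat, X)
    return act(X_filt @ W_x) + X       # transformation + optional residual
```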
Complexity. Specformer has two parts of computation: the spectral decomposition and the forward process. The spectral decomposition is pre-computed and has complexity \(\mathcal{O}(n^{3})\). The forward complexity has three parts: Transformer, learnable bases, and graph convolution, with corresponding complexities \(\mathcal{O}(n^{2}d+nd^{2})\), \(\mathcal{O}(Mn^{2})\), and \(\mathcal{O}(nd)\), respectively, where \(n\), \(M\), and \(L\) denote the number of nodes, filters, and layers, and \(d\) is the hidden dimension. The overall forward complexity is \(\mathcal{O}(n^{2}(d+M)+nd(L+d))\). The overall complexity of Specformer is the forward complexity plus the decomposition complexity amortized over the number of uses in training and inference, rather than a simple summation of the two. See Appendix C.2 for more discussion.

Scalability. When applying Specformer to large graphs, one can use Sparse Generalized Eigenvalue (SGE) algorithms (Cai et al., 2021) to calculate only \(q\) eigenvalues and eigenvectors, in which case the forward complexity reduces to \(\mathcal{O}(q^{2}(d+M)+nd(L+d))\).

## 5 Experiments

In this section, we conduct experiments on a synthetic dataset and a wide range of real-world graph datasets to verify the effectiveness of our Specformer.

### Learning Spectral Filters on Synthetic Data

Dataset description. We take 50 images with a resolution of 100\(\times\)100 from the Image Processing Toolbox 1. Each image is processed as a 2D regular 4-neighborhood grid graph, and the pixel values are the node features. Therefore, these images share the same adjacency matrix \(\mathbf{A}\in\mathbb{R}^{10000\times 10000}\), and the \(m\)-th image has its own graph signal \(\mathbf{x}_{m}\in\mathbb{R}^{10000\times 1}\). Five predefined graph filters are used to generate ground-truth graph signals. For example, for the low-pass filter \(G_{\theta}=\exp(-10\lambda^{2})\), the filtered graph signal is calculated by \(\tilde{\mathbf{x}}_{m}=\mathbf{U}\text{diag}[\exp(-10\lambda^{2}_{1}),\ldots,\exp(-10\lambda^{2}_{n})]\mathbf{U}^{\top}\mathbf{x}_{m}\).

Footnote 1: [https://www2.mathworks.cn/products/image.html](https://www2.mathworks.cn/products/image.html)

Setup. We choose six spectral GNNs as baselines: GCN (Kipf and Welling, 2017), GAT (Velickovic et al., 2018), ChebyNet (Defferrard et al., 2016), GPR-GNN (Chien et al., 2021), BernNet (He et al., 2021), and JacobiConv (Wang and Zhang, 2022). Each method takes \(\mathbf{A}\) and \(\mathbf{x}_{m}\) as inputs and tries to minimize the sum of squared errors between its output \(\hat{\mathbf{x}}_{m}\) and the pre-filtered graph signal \(\tilde{\mathbf{x}}_{m}\). We tune the number of hidden units to ensure that each method has nearly 2K trainable parameters. The polynomial order is set to 10 for ChebyNet, GPR-GNN, and BernNet. For our model, we use Specformer-Small with 16 hidden units and 1 head. In training, the maximum number of epochs is set to 2000, and a model is stopped early if its loss does not decrease for 200 epochs. All regularization tricks are removed. The learning rate is set to 0.01 for all models. We use two metrics to evaluate each method: the sum of squared errors and the \(R^{2}\) score.
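The generation of the ground-truth signals can be sketched as follows. This is an illustrative helper of our own, assuming the symmetric normalized graph Laplacian (so that the spectrum lies in \([0,2]\)) and no isolated nodes; it is not code from the paper.

```python
import numpy as np

def filtered_signal(A, x, g):
    """Apply a spectral filter g to a graph signal x: U diag(g(lam)) U^T x."""
    d = A.sum(axis=1)
    L = np.eye(A.shape[0]) - A / np.sqrt(np.outer(d, d))  # normalized Laplacian
    lam, U = np.linalg.eigh(L)                            # full eigendecomposition
    return U @ (g(lam) * (U.T @ x))

# Example: the low-pass filter used above.
# x_tilde = filtered_signal(A, x, lambda lam: np.exp(-10 * lam**2))
```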
Results. The quantitative results are shown in Table 1, from which we can see that Specformer achieves the best performance on all synthetic graphs. In particular, its improvements are larger on the challenging filters, such as Band-rejection and Comb. This validates the effectiveness of Specformer in learning complex graph filters. In addition, we can see that GCN and GAT perform well only on the low-pass filter, which reflects that using only low-frequency information is not enough. The polynomial-based GNNs, _i.e._, ChebyNet, GPR-GNN, BernNet, and JacobiConv, perform more stably, but their expressiveness is still weaker than Specformer's.

Table 1: Node regression results on synthetic data: sum of squared errors, with the \(R^{2}\) score in parentheses.

| Model (~2K param.) | Low-pass \(\exp(-10\lambda^{2})\) | High-pass \(1-\exp(-10\lambda^{2})\) | Band-pass \(\exp(-10(\lambda-1)^{2})\) | Band-rejection \(1-\exp(-10(\lambda-1)^{2})\) | Comb \(|\sin(\pi\lambda)|\) |
| --- | --- | --- | --- | --- | --- |
| GCN | 3.479 (.9872) | 67.663 (.2364) | 25.8755 (.1148) | 21.074 (.9438) | 50.5120 (.2977) |
| GAT | 2.3574 (.9905) | 21.9618 (.7529) | 14.4326 (.6823) | 12.6384 (.9652) | 23.1813 (.6957) |
| ChebyNet | 0.8220 (.9973) | 0.7867 (.9903) | 2.2722 (.9104) | 2.5296 (.9934) | 4.0735 (.9447) |
| GPR-GNN | 0.4169 (.9984) | 0.0943 (.9986) | 3.5121 (.8551) | 3.7917 (.9905) | 4.6549 (.9311) |
| BernNet | 0.0314 (.9999) | 0.0113 (.9999) | 0.0411 (.9984) | 0.9913 (.9973) | 0.9982 (.9868) |
| JacobiConv | 0.0003 (.9999) | 0.0064 (.9999) | 0.0213 (.9999) | 0.0156 (.9999) | 0.2933 (.9995) |
| Specformer | **0.0002 (.9999)** | **0.0026 (.9999)** | **0.0017 (.9999)** | **0.0014 (.9999)** | **0.0057 (.9999)** |

We visualize the graph filters learned by GPR-GNN, BernNet, and Specformer in Figure 2, which further validates our claims. The horizontal axis shows the original eigenvalues, and the vertical axis the corresponding new eigenvalues. For clarity, we uniformly downsample the eigenvalues at a ratio of 1:200 and only visualize three filters, because the situations of Low-pass and High-pass are similar, as are those of Band-pass and Band-rejection. It can be seen that all methods fit the easy filters, such as High-pass, well. However, the polynomial-based GNNs cannot learn the narrow bands in Band-rejection and Comb, _e.g._, \(\lambda\in[0.75,1.25]\), which harms their performance. On the contrary, Specformer fits the ground truth precisely, reflecting its superior learning ability over polynomials. The spatial results, _i.e._, the filtered images, can be seen in Appendix C.3.

Figure 2: Illustrations of filters learned by two polynomial GNNs and Specformer.

### Node Classification

Datasets. For the node classification task, we perform experiments on four homophilic datasets, _i.e._, Cora, Citeseer, Amazon-Photo, and ogbn-arXiv, and four heterophilic datasets, _i.e._, Chameleon, Squirrel, Actor, and Penn94. Penn94 (Lim et al., 2021) and arXiv (Hu et al., 2020) are two large-scale datasets. The other datasets, provided by (Rozemberczki et al., 2021; Pei et al., 2020), are commonly used to evaluate the performance of GNNs on heterophilic and homophilic graphs.

Baselines and settings. We benchmark our model against a series of competitive baselines, including spatial GNNs, spectral GNNs, and graph Transformers.
For all datasets, we use the fully-supervised split, _i.e._, 60% for training, 20% for validation, and 20% for testing, as suggested in (He et al., 2021). All methods are run 10 times, and we report the mean accuracy with a 95% confidence interval. For the polynomial GNNs, we set the order of polynomials to \(K=10\). For the other methods, we use a 2-layer module. To ensure that all models have similar numbers of parameters, on the six small datasets we set the hidden size to \(d=64\) for spatial and spectral GNNs and \(d=32\) for graph Transformers and Specformer. The total numbers of parameters on the Photo dataset are shown in Table 2. On the two large datasets, we use a truncated spectral decomposition to improve scalability (a sketch of this truncation is given at the end of this subsection). Based on the filters learned on the small datasets, we find that band-rejection filters are important for heterophilic datasets and low-pass filters are suitable for homophilic datasets; see Figures 4(b) and 4(d). Therefore, we use the eigenvectors with the 3000 smallest (low-frequency) and 3000 largest (high-frequency) eigenvalues for Penn94, and the eigenvectors with the 5000 smallest (low-frequency) eigenvalues for arXiv. We use one layer for Specformer and set \(d=64\) on Penn94 and \(d=512\) on arXiv for all methods, as suggested by (Lim et al., 2021; He et al., 2022). More details, _e.g._, optimizers, can be found in Appendix A.

Results. In Table 2, we find that Specformer outperforms the state-of-the-art baselines on 7 out of 8 datasets and achieves a 12% relative improvement on the Squirrel dataset, which validates the superior learning ability of Specformer. In addition, the improvement is more pronounced on heterophilic datasets than on homophilic datasets. This is probably because the low-pass filters of homophilic datasets are easier to fit; the same phenomenon is observed on the synthetic graphs. An interesting observation is that the improvement on larger graphs, _e.g._, Actor and Photo, is smaller than that on smaller graphs. One possible reason is that the role of the self-attention mechanism is weakened, _i.e._, the attention values become uniform due to the large number of tokens. We notice that Specformer has a slightly higher variance than the baselines; this is because we set a large dropout rate to prevent overfitting. On the two large graph datasets, we can see that the graph Transformers are memory-consuming due to self-attention. In contrast, Specformer reduces the time and space costs by using the truncated decomposition and shows better scalability than the graph Transformers. The time and space overheads are listed in Appendix C.2.

Table 2: Results on real-world node classification tasks: mean accuracy (%) ± 95% confidence interval of spatial GNNs, spectral GNNs, graph Transformers, and Specformer on Chameleon, Squirrel, Actor, Penn94, Cora, Citeseer, Photo, and arXiv.
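The truncated decomposition referenced above can be sketched as follows. The paper points to Sparse Generalized Eigenvalue algorithms (Cai et al., 2021); this illustration substitutes SciPy's Lanczos-based `eigsh` as a stand-in, and the helper itself is our own assumption.

```python
import numpy as np
import scipy.sparse.linalg as spla

def truncated_spectrum(L, q_low, q_high=0):
    """Take the q_low smallest and q_high largest eigenpairs of a sparse
    symmetric normalized Laplacian L (a scipy sparse matrix)."""
    lam_lo, U_lo = spla.eigsh(L, k=q_low, which='SA')       # low-frequency end
    if q_high > 0:
        lam_hi, U_hi = spla.eigsh(L, k=q_high, which='LA')  # high-frequency end
        return np.concatenate([lam_lo, lam_hi]), np.hstack([U_lo, U_hi])
    return lam_lo, U_lo

# e.g., a Penn94-style truncation: lam, U = truncated_spectrum(L, 3000, 3000)
```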
### Graph Classification and Regression

Datasets. We conduct experiments on three graph-level datasets of different scales. ZINC (Dwivedi et al., 2020) is a small subset of a large molecular dataset, containing 12K graphs in total. MolHIV and MolPCBA are taken from the Open Graph Benchmark (OGB) datasets (Hu et al., 2020). MolHIV is a medium-sized dataset with nearly 41K graphs. MolPCBA is the largest, containing 437K graphs. In all datasets, nodes represent atoms and edges indicate bonds.

Baselines and settings. We choose popular MPNNs (GCN, GIN, and GatedGCN), graph Transformers with positional or structural embeddings (SAN, Graphormer, and GPS), and other state-of-the-art GNNs (CIN, GIN-AK+, etc.) as the baselines for the graph-level tasks. For the ZINC dataset, we tune the hyperparameters of Specformer to ensure that the total number of parameters is around 500K.

Results. We apply Specformer-Small, -Medium, and -Large to ZINC, MolHIV, and MolPCBA, respectively. The results are shown in Table 3. It can be seen that Specformer outperforms the state-of-the-art models on the ZINC and MolPCBA datasets, without using any hand-crafted features or pre-defined polynomials. This proves that directly using neural networks to learn the graph spectrum is a promising way to construct powerful GNNs.

Table 3: Results on graph-level datasets. \(\downarrow\) means lower is better; \(\uparrow\) means higher is better.

| Model | ZINC (↓) | MolHIV (↑) | MolPCBA (↑) |
| --- | --- | --- | --- |
| GCN | 0.367±0.011 | 0.7599±0.0119 | 0.2424±0.0034 |
| GIN | 0.526±0.051 | 0.7707±0.0149 | 0.2703±0.0023 |
| GatedGCN | 0.090±0.001 | – | 0.267±0.002 |
| CIN | 0.079±0.006 | **0.8094±0.0057** | – |
| GIN-AK+ | 0.080±0.001 | 0.7961±0.0119 | 0.2930±0.0044 |
| GSN | 0.101±0.010 | 0.7799±0.0100 | – |
| DGN | 0.168±0.003 | 0.7970±0.0097 | 0.2885±0.0030 |
| PNA | 0.188±0.004 | 0.7905±0.0132 | 0.2838±0.0035 |
| Spec-GN | 0.070±0.002 | – | 0.2965±0.0028 |
| SAN | 0.139±0.006 | 0.7785±0.0025 | 0.2765±0.0042 |
| Graphormer | 0.122±0.006 | 0.7640±0.0022 | 0.2643±0.0017 |
| GPS | 0.070±0.004 | 0.7880±0.0101 | 0.2907±0.0028 |
| Specformer | **0.066±0.003** | 0.7889±0.0124 | **0.2972±0.0023** |
### Ablation Studies

We perform ablation studies on two node-level datasets and one graph-level dataset to evaluate the effectiveness of each component. The results are shown in Table 4. The top three lines show the effect of the encoder, _i.e._, the eigenvalue encoding (EE) and self-attention. It can be seen that EE is more important on Squirrel than on Citeseer. The reason is that the spectral filter of Squirrel is more difficult to learn, so the model needs the encoding to learn better representations. The attention module consistently improves performance by capturing the dependencies among eigenvalues.

The bottom three lines compare the graph filters at different scales. We can see that on the easy task, _e.g._, Citeseer, the Small and Medium models have similar performance, but the Large model causes serious overfitting. On the hard tasks, _e.g._, Squirrel and MolPCBA, the Large model is slightly better than the Medium model but outperforms the Small model by a large margin, implying that increasing the number of parameters can boost performance. In summary, it is important to consider the difficulty of the task when selecting a model.

Table 4: Ablation studies on node-level and graph-level tasks. \(\rho(\lambda)\) and Attention are encoder components; Small/Medium/Large denote the decoder variant.

| \(\rho(\lambda)\) | Attention | Small | Medium | Large | Squirrel (↑) | Citeseer (↑) | MolPCBA (↑) |
| --- | --- | --- | --- | --- | --- | --- | --- |
|  |  |  | ✓ |  | 33.05 | 80.57 | 0.2696 |
| ✓ |  |  | ✓ |  | 63.78 | 81.17 | 0.2933 |
| ✓ | ✓ |  | ✓ |  | 64.64 | 81.49 | 0.2970 |
| ✓ | ✓ | ✓ |  |  | 64.51 | 81.47 | 0.2912 |
| ✓ | ✓ |  |  | ✓ | 65.10 | 80.00 | 0.2972 |

### Visualizations

We now investigate the dependencies among eigenvalues and the filters learned by Specformer. To make the dependency clearer, we first quantize the self-attention weight matrix by grouping the eigenvalues into three frequency bands: Low \(\in[0,\frac{2}{3})\), Medium \(\in[\frac{2}{3},\frac{4}{3})\), and High \(\in[\frac{4}{3},2]\). We then compute the quantized dependency. More details are given in Appendix B.2.
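A minimal sketch of this band quantization is given below. The mean-pooling aggregation here is our own assumption for illustration; the exact scheme is described in Appendix B.2.

```python
import numpy as np

def quantized_dependency(attn, eigvals, edges=(2/3, 4/3)):
    """Aggregate an n x n attention matrix into a 3 x 3 band-to-band dependency
    by grouping eigenvalues into Low/Medium/High frequency bands."""
    bands = np.digitize(eigvals, edges)              # 0 = Low, 1 = Medium, 2 = High
    dep = np.zeros((3, 3))
    for a in range(3):
        for b in range(3):
            block = attn[np.ix_(bands == a, bands == b)]
            if block.size:
                dep[a, b] = block.mean()             # average attention a -> b
    return dep
```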
The results are shown in Figures 3, 4, and 5, from which we make several interesting observations. (1) Similar dependency patterns can be learned on different graphs. In low-pass filtering, _e.g._, on Citeseer and the Low-pass filter, all frequency bands tend to use the low-frequency information. In band-related filtering, _e.g._, on Squirrel, Band-pass, and Band-rejection, the low and high frequencies depend strongly on the medium frequencies, while the opposite holds for the medium frequencies. (2) The more difficult the task, the less pronounced the dependency: on Comb and ZINC, the dependency among eigenvalues is inconspicuous. (3) On graph-level datasets, the decoder can learn different filters. Figure 5 shows two basic filters; it can be seen that \(\hat{\mathbf{S}}_{1}\) and \(\hat{\mathbf{S}}_{2}\) have different dependencies and patterns, unlike in the node-level tasks, where only one filter is needed. This finding suggests that the graph-level tasks are more difficult than the node-level tasks and remain challenging for spectral GNNs.

Figure 3: The dependency of eigenvalues on synthetic graphs.

Figure 4: The dependency and learned filters of heterophilic and homophilic datasets.

Figure 5: The dependency and basic filters of the ZINC dataset.

## 6 Conclusion

In this paper, we propose Specformer, which leverages Transformer to build a set-to-set spectral filter along with learnable bases. Specformer effectively captures the magnitudes and relative dependencies of the eigenvalues in a permutation-equivariant fashion and can perform non-local graph convolution. Experiments on synthetic and real-world datasets demonstrate that Specformer outperforms various GNNs and learns meaningful spectrum patterns. A promising future direction is to improve the efficiency of Specformer by sparsifying the self-attention matrix of the Transformer.

#### Acknowledgments

This work is supported in part by the National Natural Science Foundation of China (No. U20B2045, 62192784, 62172052, 62002029, U1936014), the BUPT Excellent Ph.D. Students Foundation (No. CX2022310), the NSERC Discovery Grants (No. RGPIN-2019-05448, No. RGPIN-2022-04636), and the NSERC Collaborative Research and Development Grant (No. CRDPJ 543676-19). Resources used in preparing this research were provided, in part, by Advanced Research Computing at the University of British Columbia, the Oracle for Research program, and Compute Canada.
2307.07810
Graph Automorphism Group Equivariant Neural Networks
Permutation equivariant neural networks are typically used to learn from data that lives on a graph. However, for any graph $G$ that has $n$ vertices, using the symmetric group $S_n$ as its group of symmetries does not take into account the relations that exist between the vertices. Given that the actual group of symmetries is the automorphism group Aut$(G)$, we show how to construct neural networks that are equivariant to Aut$(G)$ by obtaining a full characterisation of the learnable, linear, Aut$(G)$-equivariant functions between layers that are some tensor power of $\mathbb{R}^{n}$. In particular, we find a spanning set of matrices for these layer functions in the standard basis of $\mathbb{R}^{n}$. This result has important consequences for learning from data whose group of symmetries is a finite group because a theorem by Frucht (1938) showed that any finite group is isomorphic to the automorphism group of a graph.
Edward Pearce-Crump, William J. Knottenbelt
2023-07-15T14:19:42Z
http://arxiv.org/abs/2307.07810v2
# Graph Automorphism Group Equivariant Neural Networks

###### Abstract

For any graph \(G\) having \(n\) vertices and its automorphism group \(\mathrm{Aut}(G)\), we provide a full characterisation of all of the possible \(\mathrm{Aut}(G)\)-equivariant neural networks whose layers are some tensor power of \(\mathbb{R}^{n}\). In particular, we find a spanning set of matrices for the learnable, linear, \(\mathrm{Aut}(G)\)-equivariant layer functions between such tensor power spaces in the standard basis of \(\mathbb{R}^{n}\).

## 1 Introduction

The learnable, linear, group equivariant layer functions between tensor power spaces of \(\mathbb{R}^{n}\) have recently been characterised for a number of important groups by (Pearce-Crump, 2022; 2023a;b). This characterisation has been shown to be related to the combinatorics of set partitions. A set partition of a given set of elements is a partitioning of the set into a number of disjoint, non-empty subsets. The subsets are called blocks. Depending on the group and the structure of the blocks, we are able to determine precisely the form of the learnable, linear, group equivariant layer functions between any two tensor power spaces of \(\mathbb{R}^{n}\).

In this paper, we look instead at how to construct neural networks that are equivariant to the automorphism group of a graph \(G\) having some \(n\) vertices, written \(\mathrm{Aut}(G)\). Specifically, we give a full characterisation of all of the possible \(\mathrm{Aut}(G)\)-equivariant neural networks whose layers are some tensor power of \(\mathbb{R}^{n}\) by finding a spanning set of matrices for the learnable, linear, \(\mathrm{Aut}(G)\)-equivariant layer functions between such tensor power spaces in the standard basis of \(\mathbb{R}^{n}\). Similar in spirit to the above-mentioned approach of Pearce-Crump, but instead of calculating the spanning set by studying set partitions, we obtain it by relating each spanning set element to a so-called _bilabelled graph_. In brief, a \((k,l)\)-bilabelled graph is a graph that comes with two tuples, one of length \(k\) and the other of length \(l\), whose entries are taken from the vertex set of the graph (with repetitions amongst entries allowed). Consequently, by looking at the combinatorics of bilabelled graphs, we can determine the learnable, linear \(\mathrm{Aut}(G)\)-equivariant layers in such a neural network.

We follow closely the work of (Mancinska & Roberson, 2020), who studied the problem of determining under what circumstances two graphs are quantum isomorphic. They built upon the work of (Chassaniol, 2019), who showed that a vertex-transitive graph has no quantum symmetries. We show how their results and methods can be applied instead for the purpose of learning from data that has an underlying symmetry to the automorphism group of a graph.

The obvious application of our approach is for learning from graphs themselves. There are a number of methods that currently exist in the machine learning literature for learning from graphs. These include Graph Attention Networks (GATs) (Velickovic et al., 2018), Graph Convolutional Networks (GCNs) (Kipf & Welling, 2017), and Graph Neural Networks (GNNs) (Gori et al., 2005; Scarselli et al., 2009), among others. Many of these approaches use some form of \(S_{n}\)-equivariance, since this has been determined previously by a number of authors (Maron et al., 2019; Ravanbakhsh, 2020; Pearce-Crump, 2022).
However, the stronger condition of \(\mathrm{Aut}(G)\)-equivariance is more appropriate for learning from data on graphs since it takes into account relations between vertices in the graph itself, whereas \(S_{n}\)-equivariance does not. Consequently, we believe that the approach presented in this paper provides a different way for learning from graphs, where, specifically, the equivariance is guaranteed exactly, and the form of the possible architectures is determined precisely by the symmetries in the underlying structure of the graph. There are other important motivations for performing such a characterisation. A famous theorem in algebraic graph theory known as Frucht's Theorem (Frucht, 1938) states that every finite group is isomorphic to the automorphism group of a finite undirected graph. Consequently, for any finite group of our choosing, if we know the graph (having some \(n\) vertices) whose automorphism group is isomorphic to the group in question, then we will be able to characterise all of the learnable, linear, group equivariant layer functions between any tensor power spaces of \(\mathbb{R}^{n}\) for that group. In particular, we show that we can recover the diagram basis that appears in (Godfrey et al., 2023) for the learnable, linear, \(S_{n}\)-equivariant layer functions between tensor power spaces of \(\mathbb{R}^{n}\) in the standard basis of \(\mathbb{R}^{n}\). We also show that we can determine characterisations for other groups, such as for \(D_{4}\), considered as a subgroup of \(S_{4}\), which, to the best of our knowledge, have been missing from the literature. Furthermore, even if, for a given finite undirected graph, we are unable to determine the finite group that is isomorphic to the automorphism group of the graph, we know that we can calculate the learnable, linear, automorphism group equivariant linear layers using this method, which will be sufficient for performing learning in situations where we do not need to know what the actual automorphism group is explicitly. The main contributions of this paper, which appear in Section 6 onwards, are as follows: 1. We are the first to show how the combinatorics underlying bilabelled graph diagrams serves as the theoretical foundation for constructing neural networks that are equivariant to the automorphism group of a graph \(G\) having \(n\) vertices when the layers are some tensor power of \(\mathbb{R}^{n}\). 2. In particular, we find a spanning set for the learnable, linear, \(\operatorname{Aut}(G)\)-equivariant layer functions between such tensor power spaces in the standard basis of \(\mathbb{R}^{n}\). 3. We show how our approach can be used to recover the diagram basis for the learnable, linear, \(S_{n}\)-equivariant layer functions between tensor power spaces of \(\mathbb{R}^{n}\) in the standard basis of \(\mathbb{R}^{n}\) that appears in (Godfrey et al., 2023). ## 2 Preliminaries We choose our field of scalars to be \(\mathbb{R}\) throughout. Tensor products are also taken over \(\mathbb{R}\), unless otherwise stated. Also, we let \([n]\) represent the set \(\{1,\ldots,n\}\), and we denote the standard basis of \(\mathbb{R}^{n}\) throughout by \(\{e_{i}\ |\ i\in[n]\}\). Recall that a representation of a group \(G\) is a choice of vector space \(V\) over \(\mathbb{R}\) and a group homomorphism \[\rho:G\to GL(V) \tag{1}\] We choose to focus on finite-dimensional vector spaces \(V\) that are some tensor power of \(\mathbb{R}^{n}\) in this paper. 
We often abuse our terminology by calling \(V\) a representation of \(G\), even though the representation is technically the homomorphism \(\rho\). When the homomorphism \(\rho\) needs to be emphasised alongside its vector space \(V\), we will use the notation \((V,\rho)\). ## 3 Group Equivariant Neural Networks Group equivariant neural networks are constructed by alternately composing linear and non-linear \(G\)-equivariant maps between representations of a group \(G\). The following is based on the material presented in (Lim and Nelson, 2022). We first define \(G\)-_equivariance_: **Definition 3.1**.: Suppose that \((V,\rho_{V})\) and \((W,\rho_{W})\) are two representations of a group \(G\). A map \(\phi:V\to W\) is said to be \(G\)-equivariant if, for all \(g\in G\) and \(v\in V\), \[\phi(\rho_{V}(g)[v])=\rho_{W}(g)[\phi(v)] \tag{2}\] The set of all _linear_\(G\)-equivariant maps between \(V\) and \(W\) is denoted by \(\operatorname{Hom}_{G}(V,W)\). When \(V=W\), we write this set as \(\operatorname{End}_{G}(V)\). It can be shown that \(\operatorname{Hom}_{G}(V,W)\) is a vector space over \(\mathbb{R}\). See (Segal, 2014) for more details. A special case of \(G\)-equivariance is \(G\)-_invariance_: **Definition 3.2**.: The map \(\phi\) given in Definition 3.1 is said to be \(G\)-invariant if \(\rho_{W}\) is defined to be the \(1\)-dimensional trivial representation of \(G\). As a result, \(W=\mathbb{R}\). We can now define the type of neural network that is the focus of this paper: **Definition 3.3**.: An \(L\)-layer \(G\)-equivariant neural network \(f_{NN}\) is a composition of _layer functions_ \[f_{NN}\coloneqq f_{L}\circ\ldots\circ f_{l}\circ\ldots\circ f_{1} \tag{3}\] such that the \(l^{\text{th}}\) layer function is a map of representations of \(G\) \[f_{l}:(V_{l-1},\rho_{l-1})\rightarrow(V_{l},\rho_{l}) \tag{4}\] that is itself a composition \[f_{l}\coloneqq\sigma_{l}\circ\phi_{l} \tag{5}\] of a learnable, linear, \(G\)-equivariant function \[\phi_{l}:(V_{l-1},\rho_{l-1})\rightarrow(V_{l},\rho_{l}) \tag{6}\] together with a fixed, non-linear activation function \[\sigma_{l}:(V_{l},\rho_{l})\rightarrow(V_{l},\rho_{l}) \tag{7}\] such that 1. \(\sigma_{l}\) is a \(G\)-equivariant map, as in (2), and 2. \(\sigma_{l}\) acts pointwise (after a basis has been chosen for each copy of \(V_{l}\) in \(\sigma_{l}\).) We focus on the learnable, linear, \(G\)-equivariant functions in this paper, since the non-linear functions are fixed. Note that, given a spanning set of matrices for \(\operatorname{Hom}_{G}(V_{l-1},V_{l})\), having picked a basis for each layer space in the set \(\{V_{l}\}\), the weight matrix is a weighted linear combination of the spanning set matrices, where each coefficient in the linear combination is a parameter to be learned. Consequently, the number of parameters appearing in the \(l^{\text{th}}\) layer function is equal to the number of matrices that appear in the spanning set for \(\operatorname{Hom}_{G}(V_{l-1},V_{l})\). _Remark 3.4_.: The entire neural network \(f_{NN}\) is itself a \(G\)-equivariant function because it can be shown that the composition of any number of \(G\)-equivariant functions is itself \(G\)-equivariant. _Remark 3.5_.: One way of making a neural network of the form given in Definition 3.3\(G\)-invariant is by choosing the representation in the final layer to be the \(1\)-dimensional trivial representation of \(G\). 
## 4 Graph Theory Essentials We begin by recalling some of the fundamentals of graph theory that will appear throughout the rest of this paper. For more details, see any standard book on graph theory, such as (Bollobas, 1998). **Definition 4.1**.: A **graph**\(G\) is a tuple \((V(G),E(G))\) of sets, where \(V(G)\) is a set of vertices for \(G\) and \(E(G)\) is a subset of unordered pairs of elements from \(V(G)\times V(G)\) denoting the undirected edges between the vertices of \(G\). We include the possibility that the graph has loops; however, we only allow at most one loop per vertex. **Definition 4.2**.: Let \(G\) be a graph having \(n\) vertices. The **adjacency matrix** of \(G\), denoted by \(A_{G}\), is the \(n\times n\) matrix whose \((i,j)\)-entry is \(1\) if vertex \(i\) is adjacent to vertex \(j\) in \(G\), and is \(0\) otherwise. Note that \(A_{G}\) is a symmetric matrix, because \(G\) has undirected edges, and the \((i,i)\)-entry is \(1\) in \(A_{G}\) if and only if \(G\) has a loop at vertex \(i\). **Definition 4.3**.: Let \(G\) be a graph. The **complement** of \(G\), denoted by \(\overline{G}\), is the graph having the same vertex set as \(G\) and the same loops as \(G\), but now distinct vertices are adjacent in \(\overline{G}\) if and only if they are not adjacent in \(G\). _Example 4.4_.: The complete graph on \(n\) vertices, \(K_{n}\), is the loopless graph where every vertex is adjacent to every other vertex. _Example 4.5_.: The cycle graph on \(n\) vertices, \(C_{n}\), is the loopless graph where every vertex \(i\in[n]\) is adjacent to \(j=i\pm 1\mod n\), where \(j\in[n]\). **Definition 4.6**.: Let \(H\) and \(G\) be graphs. A **graph homomorphism** from \(H\) to \(G\) is a function \(\phi:V(H)\to V(G)\) such that if \(i\) is adjacent to \(j\) in \(H\), then \(\phi(i)\) is adjacent to \(\phi(j)\) in \(G\). **Definition 4.7**.: Let \(H\) and \(G\) be graphs. A **graph isomorphism** from \(H\) to \(G\) is a bijection \(\phi:V(H)\to V(G)\) such that \(i\) is adjacent to \(j\) in \(H\) if and only if \(\phi(i)\) is adjacent to \(\phi(j)\) in \(G\). Consequently, we get that **Definition 4.8**.: Let \(G\) be a graph. An **automorphism** of \(G\) is an isomorphism from \(G\) to \(G\). The set of all automorphisms of \(G\), written \(\operatorname{Aut}(G)\), can be shown to be a group. _Remark 4.9_.: If \(G\) is a graph having \(n\) vertices, then \(\operatorname{Aut}(G)\) is, in fact, a subgroup of \(S_{n}\). Specifically, if \(\sigma\in S_{n}\), then it is easy to show, viewing \(S_{n}\) as a subgroup of \(GL(n)\), that \[\sigma\in\operatorname{Aut}(G)\iff\sigma A_{G}=A_{G}\sigma \tag{8}\] It is also clear to see that \(\operatorname{Aut}(G)\cong\operatorname{Aut}(\overline{G})\). _Example 4.10_.: The automorphism group of the complete graph on \(n\) vertices, \(\operatorname{Aut}(K_{n})\), is the symmetric group \(S_{n}\). Consequently, the automorphism group of the edgeless graph having \(n\) vertices, \(\operatorname{Aut}(\overline{K_{n}})\), is also the symmetric group \(S_{n}\). _Example 4.11_.: The automorphism group of the cycle graph on \(n\) vertices, \(\operatorname{Aut}(C_{n})\), is isomorphic to the dihedral group \(D_{n}\) of order \(2n\). _Example 4.12_.: The automorphism group of two copies of the complete graph on two vertices, \(\operatorname{Aut}(2K_{2})\), is isomorphic to the dihedral group \(D_{4}\) of order \(8\). 
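Remark 4.9 gives a directly checkable criterion: viewing \(\sigma\) as a permutation matrix in the standard basis, \(\sigma\in\operatorname{Aut}(G)\) exactly when it commutes with \(A_{G}\), as in (8). The following brute-force NumPy sketch (our own illustration; feasible only for small \(n\)) enumerates \(\operatorname{Aut}(G)\) this way, and is checked against Example 4.11 with \(n=4\).

```python
import numpy as np
from itertools import permutations

def automorphisms(A):
    """Find Aut(G) by brute force, checking sigma A = A sigma (Eq. (8))
    over all n! permutation matrices; only feasible for small n."""
    n = A.shape[0]
    auts = []
    for perm in permutations(range(n)):
        P = np.eye(n)[list(perm)]            # permutation matrix for sigma
        if np.array_equal(P @ A, A @ P):     # Eq. (8)
            auts.append(perm)
    return auts

# Example 4.11: Aut(C_4) is isomorphic to D_4, which has order 8.
A_C4 = np.array([[0, 1, 0, 1],
                 [1, 0, 1, 0],
                 [0, 1, 0, 1],
                 [1, 0, 1, 0]])
assert len(automorphisms(A_C4)) == 8
```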
## 5 The representation \((\mathbb{R}^{n})^{\otimes k}\) of \(\operatorname{Aut}(G)\) Recall that any \(k\)-tensor power of \(\mathbb{R}^{n}\), \((\mathbb{R}^{n})^{\otimes k}\), for any \(k\in\mathbb{Z}_{\geq 0}\), is a representation of \(S_{n}\), since the elements \[e_{I}\coloneqq e_{i_{1}}\otimes e_{i_{2}}\otimes\cdots\otimes e_{i_{k}} \tag{9}\] for all \(I\coloneqq(i_{1},i_{2},\ldots,i_{k})\in[n]^{k}\) form a basis of \((\mathbb{R}^{n})^{\otimes k}\), and the action of \(S_{n}\) that maps a basis element of \((\mathbb{R}^{n})^{\otimes k}\) of the form (9) to \[e_{\sigma(I)}\coloneqq e_{\sigma(i_{1})}\otimes e_{\sigma(i_{2})}\otimes\cdots \otimes e_{\sigma(i_{k})} \tag{10}\] can be extended linearly on the basis. For any graph \(G\) having \(n\) vertices, as \(\operatorname{Aut}(G)\) is a subgroup of \(S_{n}\), we have that \((\mathbb{R}^{n})^{\otimes k}\) is also a representation of \(\operatorname{Aut}(G)\) that is given by the restriction of the representation of \(S_{n}\) to \(\operatorname{Aut}(G)\). We denote the representation of \(S_{n}\) by \(\rho_{k}\). We will use the same notation for the restriction of this representation to \(\operatorname{Aut}(G)\), with the context making clear that it is the restriction of the \(S_{n}\) representation. For more on the representation theory of the symmetric group, see (Sagan, 2000) and (Ceccherini-Silberstein et al., 2010). ## 6 Characterisation of the Equivariant, Linear Maps for the Automorphism Group of a Graph In this section, we give a full characterisation of all of the possible \(\operatorname{Aut}(G)\)-equivariant neural networks whose layers are some tensor power of \(\mathbb{R}^{n}\) by finding a spanning set of matrices for the learnable, linear, \(\operatorname{Aut}(G)\)-equivariant layer functions between such tensor power spaces in the standard basis of \(\mathbb{R}^{n}\). We follow closely the work of Mancinska and Roberson (2020) throughout, but begin by stating a result found by Chassaniol (2019) which describes, in terms of a generating set of matrices, the category whose morphisms are the linear layer functions that we wish to characterise. ### Chassaniol's Result For each group \(G\) that is a subgroup of \(S_{n}\), we can define the following category. **Definition 6.1**.: The category \(\mathcal{C}(G)\) consists of objects that are the \(k\)-order tensor power spaces of \(\mathbb{R}^{n}\), as representations of \(G\), and morphism spaces between any two objects that are the vector spaces \(\operatorname{Hom}_{G}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{\otimes l})\). The vertical composition of morphisms is given by the usual composition of linear maps, the tensor product is given by the usual tensor product of linear maps, and the unit object is the one-dimensional trivial representation of \(G\). _Remark 6.2_.: We will sometimes write \(\mathcal{C}(G)(k,l)\) for \(\operatorname{Hom}_{G}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{ \otimes l})\), and reuse the notation \(\mathcal{C}(G)\) for the set \(\cup_{k,l=0}^{\infty}\mathcal{C}(G)(k,l)\). **Proposition 6.3**.: _The category \(\mathcal{C}(G)\) is a symmetric tensor category with duals, in the sense that_ 1. _if_ \(\phi_{1}\)_,_ \(\phi_{2}\in\mathcal{C}(G)\)_, then_ \(\phi_{1}\otimes\phi_{2}\in\mathcal{C}(G)\)__ 2. _if_ \(\phi_{1}\)_,_ \(\phi_{2}\in\mathcal{C}(G)\) _are composable, then_ \(\phi_{1}\circ\phi_{2}\in\mathcal{C}(G)\)__ 3. _if_ \(\phi\in\mathcal{C}(G)(k,l)\)_, then_ \(\phi^{*}\in\mathcal{C}(G)(l,k)\)__ 4. 
_the identity map_ \(M^{1,1}\in\mathcal{C}(G)(1,1)\)__ 5. _the swap map_ \(S\in\mathcal{C}(G)(2,2)\)_, where_ \(S\) _maps_ \(e_{i}\otimes e_{j}\) _to_ \(e_{j}\otimes e_{i}\)__ 6. _the cup map_ \(M^{0,2}\in\mathcal{C}(G)(0,2)\)_, where_ \(M^{0,2}\) _maps_ \(1\) _to_ \(\sum_{i}e_{i}\otimes e_{i}\)__ Proof.: See Banica and Speicher (2009), Proposition 1.2. It will be useful to define a number of _spider_ maps \(M^{k,l}\). **Definition 6.4**.: For all \(k,l\geq 0\), the map \(M^{k,l}\in\operatorname{Hom}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{ \otimes l})\) is defined as follows: * if \(k,l>0\), then \(M^{k,l}\) maps \(e_{i}^{\otimes k}\) to \(e_{i}^{\otimes l}\) for all \(i\in[n]\) and maps all other vectors to the zero vector otherwise. * if \(k=0\) and \(l>0\), then \(M^{0,l}\) maps \(1\) to \(\sum_{i}e_{i}^{\otimes l}\). * if \(k>0\) and \(l=0\), then \(M^{k,0}\) maps \(e_{i}^{\otimes k}\) to \(1\) for all \(i\in[n]\) and maps all other vectors to \(0\) otherwise. * if \(k=0\) and \(l=0\), then \(M^{0,0}\coloneqq(n)\). _Remark 6.5_.: In particular, the identity map \(M^{1,1}\) and the cup map \(M^{0,2}\) given in Proposition 6.3 are spider maps, hence the notation used is consistent. For any graph \(G\) having \(n\) vertices, Chassaniol (2019) found the following generating set for the category \(\mathcal{C}(\operatorname{Aut}(G))\): **Theorem 6.6**Chassaniol (2019), Proposition 3.5.: _Let \(A_{G}\) denote the adjacency matrix of the graph \(G\). Then_ \[\mathcal{C}(\operatorname{Aut}(G))=\langle M^{0,1},M^{2,1},A_{G},S\rangle_{+, \circ,\otimes,*} \tag{11}\] _where the right hand side denotes all matrices that can be generated from the four matrices using the operations of \(\mathbb{R}\)-linear combinations, matrix product, Kronecker product, and transposition._ ### The Category of All Bilabelled Graphs Mancinska and Roberson showed how Chassaniol's generating set for \(\mathcal{C}(\operatorname{Aut}(G))\) can be improved by relating it to the combinatorics of bilabelled graphs. **Definition 6.7**Bilabelled Graph.: A \((k,l)\)-bilabelled graph \(\mathbf{H}\) is a triple \((H,\mathbf{k},\mathbf{l})\), where \(H\) is a graph having some \(m\) vertices with labels in \([m]\), \(\mathbf{k}\coloneqq(k_{1},\dots,k_{k})\) is a tuple in \([m]^{k}\), and \(\mathbf{l}\coloneqq(l_{1},\dots,l_{l})\) is a tuple in \([m]^{l}\). We call \(\mathbf{k}\) and \(\mathbf{l}\) the input and output tuples to \(\mathbf{H}\) respectively, and we call \(H\) the underlying graph of \(\mathbf{H}\). A vertex in the underlying graph \(H\) of \(\mathbf{H}\) is said to be free if it does not appear in either of the input or output tuples. In order to provide a diagrammatic representation of bilabelled graphs, we need the following definition. **Definition 6.8**.: Let \(\mathbf{H_{1}}=(H_{1},\mathbf{k},\mathbf{l})\) be a \((k,l)\)-bilabelled graph and let \(\mathbf{H_{2}}=(H_{2},\mathbf{k^{\prime}},\mathbf{l^{\prime}})\) be another \((k,l)\)-bilabelled graph. Then \(\mathbf{H_{1}}\) and \(\mathbf{H_{2}}\) are isomorphic as \((k,l)\)-bilabelled graphs, written \(\mathbf{H_{1}}\cong\mathbf{H_{2}}\), if there is a graph isomorphism from \(H_{1}\) to \(H_{2}\) such that \(k_{i}\mapsto k_{i}^{\prime}\) for all \(i\in[k]\) and \(l_{j}\mapsto l_{j}^{\prime}\) for all \(j\in[l]\). We denote \(\mathbf{H_{1}}\)'s isomorphism class by \([\mathbf{H_{1}}]\). 
_Remark 6.9_.: An isomorphism between two \((k,l)\)-bilabelled graphs can effectively be thought of as a relabelling of the vertices of the _same_ underlying graph, resulting in an appropriate relabelling of the elements of the tuples themselves. Consequently, the isomorphism can be thought of as a relabelling of the same bilabelled graph! With this in mind, we can represent the _isomorphism class_\([\mathbf{H}]\) for a \((k,l)\)-bilabelled graph \(\mathbf{H}=(H,\mathbf{k},\mathbf{l})\) in diagrammatic form. We choose \(\mathbf{H}\) as a class representative of \([\mathbf{H}]\), and proceed as follows: we draw * \(l\) black vertices on the top row, labelled left to right by \(1,\dots,l\), * \(k\) black vertices on the bottom row, labelled left to right by \(l+1,\dots,l+k\), * the underlying graph \(H\) in red, with labelled red vertices and red edges, in between the two rows of black vertices, and * a black line connecting vertex \(i\) in the top row to \(l_{i}\), and a black line connecting vertex \(l+j\) in the bottom row to \(k_{j}\). Note that if we drew a diagram for another representative of the same isomorphism class \([\mathbf{H}]\), by the definition of an isomorphism for \((k,l)\)-bilabelled graphs, we would obtain exactly the same diagram as for the first class representative, except the labels of the underlying graph's vertices (and consequently the elements of the tuples) would be different. As a result, the diagram for the isomorphism class \([\mathbf{H}]\) is independent of the choice of class representative, and so we choose the class' label, \(\mathbf{H}\), to draw the diagram throughout, unless otherwise stated. With the technicalities being understood, we refer to the construction defined above as a \((k,l)\)-bilabelled graph diagram _for \(\mathbf{H}\) itself_, and use the same notation \(\mathbf{H}\). We give an example of a \((2,3)\)-bilabelled graph diagram in Figure 1. _Remark 6.10_.: We have chosen to represent \((k,l)\)-bilabelled graph diagrams from bottom to top, in contrast to Mancinska and Roberson, who chose a right to left orientation, in order to be consistent with the direction of set partition diagrams that appear in the author's other papers (Pearce-Crump, 2022; 2023a;b). We can define a number of operations on isomorphism classes of bilabelled graphs, namely, composition, tensor product, and involution, as follows. **Definition 6.11** (Composition of Bilabelled Graphs).: Let \([\mathbf{H_{1}}]\) be the isomorphism class of the \((k,l)\)-bilabelled graph \(\mathbf{H_{1}}=(H_{1},\mathbf{k},\mathbf{l})\), and let \([\mathbf{H_{2}}]\) be the isomorphism class of the \((k,l)\)-bilabelled graph \(\mathbf{H_{2}}=(H_{2},\mathbf{l}^{\prime},\mathbf{m})\). Then we define the composition \([\mathbf{H_{2}}]\circ[\mathbf{H_{1}}]\) to be the isomorphism class \([\mathbf{H}]\) of the \((k,m)\)-bilabelled graph \(\mathbf{H}=(H,\mathbf{k},\mathbf{m})\), that is obtained as follows: drawing each isomorphism class as a diagram, we first relabel the red vertices in \(H_{2}\) (and consequently the elements of \(\mathbf{l}^{\prime},\mathbf{m}\)) under the map \(i\mapsto i^{\prime}\), for all \(i\in[V(H_{2})]\). Then, we connect the top row of black vertices of \(\mathbf{H_{1}}\) with the bottom row of black vertices of \(\mathbf{H_{2}}\), and delete the vertices themselves. We are now left with \(l\) black lines that are edges between red vertices of the underlying graphs \(H_{1}\) and \(H_{2}\). 
Next, we contract these, forming set unions of the vertex labels where appropriate, and remove any red multiedges between red vertices that appear in the contraction to obtain the new underlying graph \(H\). Note that we keep any loops that appear in the new underlying graph \(H\). Finally, we relabel the vertex set of the new underlying graph \(H\) so that each vertex is labelled by an integer only, and consequently relabel the entries of the tuples \(\mathbf{k}\) and \(\mathbf{m}\) accordingly. Since this operation has been defined on diagrams, it means that the operation is well defined on the isomorphism classes themselves. We give an example of this composition in Figure 2. _Remark 6.12_.: Note that to compose two isomorphism classes of bilabelled graphs, only the number of bottom row vertices in the diagram for \(\mathbf{H_{2}}\) needs to be equal to the number of top row vertices in the diagram for \(\mathbf{H_{1}}\). In particular, the number of vertices in the underlying graphs of \(\mathbf{H_{1}}\) and \(\mathbf{H_{2}}\) do _not_ need to be the same. **Definition 6.13** (Tensor Product of Bilabelled Graphs).: Let \([\mathbf{H_{1}}]\) be the isomorphism class of the \((k,l)\)-bilabelled graph \(\mathbf{H_{1}}=(H_{1},\mathbf{k},\mathbf{l})\) and let \([\mathbf{H_{2}}]\) be the isomorphism class of the \((q,m)\)-bilabelled graph \(\mathbf{H_{2}}=(H_{2},\mathbf{q},\mathbf{m})\). Then we define the tensor product \([\mathbf{H_{1}}]\otimes[\mathbf{H_{2}}]\) to be the isomorphism class of the \((k+q,l+m)\)-bilabelled graph \((H_{1}\cup H_{2},\mathbf{k}\mathbf{q},\mathbf{l}\mathbf{m})\) where \(\mathbf{k}\mathbf{q}\) is the \((k+q)\)-length tuple obtained by concatenating \(\mathbf{k}\) and \(\mathbf{q}\), and likewise for \(\mathbf{l}\mathbf{m}\). **Definition 6.14** (Involution of Bilabelled Graphs).: Let \([\mathbf{H}]\) be the isomorphism class of the \((k,l)\)-bilabelled graph \(\mathbf{H}=(H,\mathbf{k},\mathbf{l})\). Then we define the involution \([\mathbf{H^{*}}]\) to be the isomorphism class of the \((l,k)\)-bilabelled graph \((H,\mathbf{l},\mathbf{k})\). We can form a category for the bilabelled graphs, as follows: **Definition 6.15** (Category of Bilabelled Graphs).: The category of all bilabelled graphs \(\mathcal{G}\) is the category whose objects are the non-negative integers \(\mathbb{N}_{\geq 0}=\{0,1,2,\dots\}\), and, for any pair of objects \(k\) and \(l\), the morphism space \(\mathcal{G}(k,l)\coloneqq\operatorname{Hom}_{\mathcal{G}}(k,l)\) is defined to be the \(\mathbb{R}\)-linear span of the set of all isomorphism classes of \((k,l)\)-bilabelled graphs. The vertical composition of morphisms is the composition of isomorphism classes of bilabelled graphs given in Definition 6.11, the tensor product of morphisms is the tensor product of isomorphism classes of bilabelled graphs given in Definition 6.13, and the unit object is \(0\). _Remark 6.16_.: It is easy to show that \(\mathcal{G}\) is a strict \(\mathbb{R}\)-linear monoidal category. ### \(G\)-Homomorphism Matrices Mancinska and Roberson established a relationship between the abstract and the concrete: namely, between isomorphism classes of \((k,l)\)-bilabelled graphs and, for a fixed graph \(G\) having \(n\) vertices, matrices that are linear maps \((\mathbb{R}^{n})^{\otimes k}\rightarrow(\mathbb{R}^{n})^{\otimes l}\), which they termed \(G\)-homomorphism matrices. We express this relationship more formally in terms of functors and categories at the end of this section. 
**Definition 6.17** (\(G\)-Homomorphism Matrix).: Suppose that \(G\) is a graph having \(n\) vertices, and let \([\mathbf{H}]\) be the isomorphism class of the \((k,l)\)-bilabelled graph \(\mathbf{H}\coloneqq(H,\mathbf{k},\mathbf{l})\). Then we define the \(G\)-homomorphism matrix of \([\mathbf{H}]\) to be the \(n^{l}\times n^{k}\) matrix whose \((I,J)\) entry is given by the number of graph homomorphisms from \(H\) to \(G\) such that \(\mathbf{l}\) is mapped to \(I\) and \(\mathbf{k}\) is mapped to \(J\). We denote this matrix by \(X^{G}_{\mathbf{H}}\).

_Remark 6.18_.: Note that the \(G\)-homomorphism matrix \(X^{G}_{\mathbf{H}}\) obtained is independent of the choice of class representative for \([\mathbf{H}]\), and that any such matrix must have only real entries, by definition.

We now describe some important examples of \(G\)-homomorphism matrices, where throughout, \(G\) is a graph having \(n\) vertices.

_Example 6.19_.: If \(\mathbf{A}\) is the \((1,1)\)-bilabelled graph \((K_{2},(1),(2))\), where \(K_{2}\) is the complete graph on two vertices, then \(X^{G}_{\mathbf{A}}\) is the adjacency matrix \(A_{G}\) of \(G\).

_Example 6.20_.: If \(\mathbf{M}^{k,l}\) is the \((k,l)\)-bilabelled graph \((K_{1},\mathbf{k}=(1,\ldots,1),\mathbf{l}=(1,\ldots,1))\), where \(K_{1}\) is the complete graph on one vertex, then \(X^{G}_{\mathbf{M}^{k,l}}\) is the spider matrix \(M^{k,l}\) given in Definition 6.4.

_Example 6.21_.: If \(\mathbf{S}\) is the \((2,2)\)-bilabelled graph \((\overline{K_{2}},(2,1),(1,2))\), where \(\overline{K_{2}}\) is the edgeless graph on two vertices, then \(X^{G}_{\mathbf{S}}\) is the swap map \(S\) given in Proposition 6.3.

_Remark 6.22_.: Given Chassaniol's result and Example 6.20, the \((0,1)\)-bilabelled graph \(\mathbf{M}^{0,1}\) and the \((2,1)\)-bilabelled graph \(\mathbf{M}^{2,1}\), in particular, will be important in what follows.

For a fixed graph \(G\) having \(n\) vertices, we can form a category for the \(G\)-homomorphism matrices, as follows:

**Definition 6.23** (Category of \(G\)-Homomorphism Matrices).: For a given graph \(G\) having \(n\) vertices, the category of all \(G\)-homomorphism matrices, \(\mathcal{C}^{G}\), is the category whose objects are the vector spaces \((\mathbb{R}^{n})^{\otimes k}\), and, for any pair of objects \((\mathbb{R}^{n})^{\otimes k}\) and \((\mathbb{R}^{n})^{\otimes l}\), the morphism space \(\operatorname{Hom}_{\mathcal{C}^{G}}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{\otimes l})\) is defined to be the \(\mathbb{R}\)-linear span of the set of all \(G\)-homomorphism matrices obtained from all isomorphism classes of \((k,l)\)-bilabelled graphs. The vertical composition of morphisms is given by the usual multiplication of matrices, the tensor product of morphisms is given by the usual Kronecker product of matrices, and the unit object is \(\mathbb{R}\).

_Remark 6.24_.: It is easy to show that \(\mathcal{C}^{G}\) is a strict \(\mathbb{R}\)-linear monoidal category.
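To make Definition 6.17 concrete, here is a brute-force NumPy sketch (all names are our own) that enumerates the \(n^{m}\) vertex maps from \(H\) to \(G\); it is exponential in the number of vertices of \(H\) and is intended purely as an illustration. The final assertion checks Example 6.19 with \(G=K_{2}\), using 0-indexed vertices.

```python
import numpy as np
from itertools import product

def hom_matrix(H_edges, m, k_tuple, l_tuple, A_G):
    """The (I, J) entry of X^G_H counts homomorphisms phi: V(H) -> V(G)
    mapping l_tuple to I and k_tuple to J (Definition 6.17).

    H has vertices 0..m-1 and undirected edge list H_edges (each edge once);
    A_G is the symmetric adjacency matrix of G.
    """
    n = A_G.shape[0]
    k, l = len(k_tuple), len(l_tuple)
    X = np.zeros((n ** l, n ** k))
    for phi in product(range(n), repeat=m):               # every vertex map H -> G
        if all(A_G[phi[u], phi[v]] for u, v in H_edges):  # edges preserved?
            row = int(np.ravel_multi_index([phi[v] for v in l_tuple], (n,) * l)) if l else 0
            col = int(np.ravel_multi_index([phi[v] for v in k_tuple], (n,) * k)) if k else 0
            X[row, col] += 1
    return X

# Example 6.19: H = K_2 with input tuple (0,) and output tuple (1,)
# recovers the adjacency matrix of G (here G = K_2).
A_G = np.array([[0, 1], [1, 0]])
assert np.array_equal(hom_matrix([(0, 1)], 2, (0,), (1,), A_G), A_G)
```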
Mancinska and Roberson showed that the operations given on isomorphism classes of bilabelled graphs \(\mathbf{H}\) correspond bijectively with the matrix operations on \(G\)-homomorphism matrices \(X^{G}_{\mathbf{H}}\). This is stated more formally in the following three lemmas:

**Lemma 6.25** (Mancinska and Roberson (2020), Lemma 3.21).: _Suppose that \(G\) is a graph having \(n\) vertices. Let \([\mathbf{H_{1}}]\) be the isomorphism class of the \((k,l)\)-bilabelled graph \(\mathbf{H_{1}}=(H_{1},\mathbf{k},\mathbf{l})\), and let \([\mathbf{H_{2}}]\) be the isomorphism class of the \((l,m)\)-bilabelled graph \(\mathbf{H_{2}}=(H_{2},\mathbf{l}^{\prime},\mathbf{m})\). Then_

\[X^{G}_{\mathbf{H_{2}}}X^{G}_{\mathbf{H_{1}}}=X^{G}_{\mathbf{H_{2}}\circ\mathbf{H_{1}}} \tag{12}\]

Figure 2: The composition \([\mathbf{H_{2}}]\circ[\mathbf{H_{1}}]\) of the \((2,3)\)-bilabelled graph diagram \(\mathbf{H_{1}}=(H_{1},(3^{\prime},2^{\prime}),(1^{\prime},2^{\prime},3^{\prime}))\) with the \((3,3)\)-bilabelled graph diagram \(\mathbf{H_{2}}=(H_{2},(2,1,2),(1,4,2))\), where \(H_{1}\) is the graph having (relabelled) vertex set \([3^{\prime}]\) and edge set \(\{(1^{\prime},2^{\prime})\}\) and \(H_{2}\) is the graph having vertex set \([4]\) and edge set \(\{(1,4)\}\). The vertices of the resulting bilabelled graph diagram on the RHS would be relabelled, for example, \(\{1,2^{\prime}\}\) would be relabelled to \(1\), and \(\{2,1^{\prime},3^{\prime}\}\) would be relabelled to \(2\), to give the \((2,3)\)-bilabelled graph diagram \(\mathbf{H}=(H,(2,1),(1,4,2))\), where \(H\) is the graph having vertex set \([4]\) and edge set \(\{(1,4),(1,2)\}\).

**Lemma 6.26** (Mancinska and Roberson (2020), Lemma 3.23).: _Suppose that \(G\) is a graph having \(n\) vertices. Let \([\mathbf{H_{1}}]\) be the isomorphism class of the \((k,l)\)-bilabelled graph \(\mathbf{H_{1}}=(H_{1},\mathbf{k},\mathbf{l})\), and let \([\mathbf{H_{2}}]\) be the isomorphism class of the \((q,m)\)-bilabelled graph \(\mathbf{H_{2}}=(H_{2},\mathbf{q},\mathbf{m})\). Then_

\[X^{G}_{\mathbf{H_{1}}}\otimes X^{G}_{\mathbf{H_{2}}}=X^{G}_{\mathbf{H_{1}}\otimes\mathbf{H_{2}}} \tag{13}\]

**Lemma 6.27** (Mancinska and Roberson (2020), Lemma 3.24).: _Suppose that \(G\) is a graph having \(n\) vertices, and let \([\mathbf{H}]\) be the isomorphism class of the \((k,l)\)-bilabelled graph \(\mathbf{H}=(H,\mathbf{k},\mathbf{l})\). Then_

\[(X^{G}_{\mathbf{H}})^{*}=X^{G}_{\mathbf{H^{*}}} \tag{14}\]

Consequently, we obtain the following theorem, expressed in terms of functors and categories.

**Theorem 6.28**.: _Suppose that \(G\) is a graph having \(n\) vertices. Then there exists a full, strict \(\mathbb{R}\)-linear monoidal functor_

\[\mathcal{F}^{G}:\mathcal{G}\to\mathcal{C}^{G} \tag{15}\]

_that is defined on the objects of \(\mathcal{G}\) by \(\mathcal{F}^{G}(k)\coloneqq(\mathbb{R}^{n})^{\otimes k}\) and, for any objects \(k,l\) of \(\mathcal{G}\), the map_

\[\operatorname{Hom}_{\mathcal{G}}(k,l)\to\operatorname{Hom}_{\mathcal{C}^{G}}(\mathcal{F}^{G}(k),\mathcal{F}^{G}(l)) \tag{16}\]

_is given by_

\[[\mathbf{H}]\mapsto X^{G}_{\mathbf{H}} \tag{17}\]

_for all isomorphism classes of \((k,l)\)-bilabelled graphs._

Proof.: See the Technical Appendix.

### Main Result: A Spanning Set of Matrices for the Learnable, Linear \(\operatorname{Aut}(G)\)-Equivariant Layer Functions

Compare the following proposition with Chassaniol's result, given in Theorem 6.6.
**Proposition 6.29** (Mancinska and Roberson (2020), Theorem 8.4).: _We have that_
\[\mathcal{G}=\langle[\mathbf{M}^{0,1}],[\mathbf{M}^{2,1}],[\mathbf{A}],[\mathbf{S}]\rangle_{\circ,\otimes,*} \tag{18}\]
_where \(\mathbf{M}^{0,1},\mathbf{M}^{2,1},\mathbf{A},\mathbf{S}\) are the bilabelled graphs defined in Examples 6.19, 6.20, and 6.21, and the operations \(\circ,\otimes,*\) on bilabelled graphs are those given in Definitions 6.11, 6.13 and 6.14, respectively._

We have come to the main result of this paper.

**Theorem 6.30**.: _Suppose that \(G\) is a graph having \(n\) vertices. Then the vector space of all \(\operatorname{Aut}(G)\)-equivariant, linear maps between tensor power spaces of \(\mathbb{R}^{n}\), \(\operatorname{Hom}_{\operatorname{Aut}(G)}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{\otimes l})\), when the standard basis of \(\mathbb{R}^{n}\) is chosen, is spanned by all \(G\)-homomorphism matrices \(X^{G}_{\mathbf{H}}\) obtained from all isomorphism classes of \((k,l)\)-bilabelled graphs._

Proof.: The statement and proof of this theorem are adapted from Mancinska and Roberson (2020), Theorem 8.5. We know, by Chassaniol, that
\[\mathcal{C}(\operatorname{Aut}(G))=\langle M^{0,1},M^{2,1},A_{G},S\rangle_{+,\circ,\otimes,*} \tag{19}\]
By Examples 6.19, 6.20, and 6.21, we get that \(\mathcal{C}(\operatorname{Aut}(G))\) is equal to
\[\{X^{G}_{\mathbf{H}}\mid[\mathbf{H}]\in\langle[\mathbf{M}^{0,1}],[\mathbf{M}^{2,1}],[\mathbf{A}],[\mathbf{S}]\rangle_{+,\circ,\otimes,*}\} \tag{20}\]
and so, by Proposition 6.29, we have that
\[\mathcal{C}(\operatorname{Aut}(G))=\mathbb{R}\text{-span}\{X^{G}_{\mathbf{H}}\mid[\mathbf{H}]\in\mathcal{G}\} \tag{21}\]
Consequently, for any \(k,l\), we get that
\[\mathcal{C}(\operatorname{Aut}(G))(k,l)=\mathbb{R}\text{-span}\{X^{G}_{\mathbf{H}}\mid[\mathbf{H}]\in\mathcal{G}(k,l)\} \tag{22}\]
As the LHS of (22) is equivalent notation for \(\operatorname{Hom}_{\operatorname{Aut}(G)}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{\otimes l})\), we obtain our result.

_Remark 6.31_.: In particular, Theorem 6.30 shows that \(\mathcal{C}^{G}=\mathcal{C}(\operatorname{Aut}(G))\), and that the objects of the category, \((\mathbb{R}^{n})^{\otimes k}\), are actually representations of \(\operatorname{Aut}(G)\), as defined in Section 5.

_Remark 6.32_.: It is very important to note the following. If \(H\) is a group that is isomorphic to the automorphism group of a graph \(G\) having \(n\) vertices, then the spanning set that we obtain for \(\operatorname{Hom}_{H}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{\otimes l})\) using Theorem 6.30 depends not only on how the automorphism group is embedded in the symmetric group \(S_{n}\) (itself thought of as matrices in \(GL(n)\) having chosen the standard basis of \(\mathbb{R}^{n}\)), which is given by how the vertices of the underlying graph \(G\) are labelled (up to all automorphisms), but also by what the edges are in \(G\). We show in the Technical Appendix, for example, that \(D_{4}\) has three embeddings in \(S_{4}\), but that each embedding can be obtained separately from two graphs that are the complement of each other, both having the same labelling of the vertices, and all six instances (an embedding of \(D_{4}\) coming from a graph with a certain labelling of its vertices) give rise to different, albeit related, bases of \(\operatorname{Hom}_{D_{4}}(\mathbb{R}^{4},\mathbb{R}^{4})\), that is, a basis for the specific embedding and labelled graph chosen.
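Two quick numeric sanity checks of the statements above (ours, purely illustrative; they assume the `hom_matrix` helper, `C4`, and `X_A` from the sketch after Definition 6.17 are in scope): first, Lemma 6.25 on the composition \(\mathbf{A}\circ\mathbf{A}\), which glues the output label of the inner copy to the input label of the outer copy and yields the path \(P_{3}\) labelled at its endpoints; second, the \(\operatorname{Aut}(G)\)-equivariance asserted by Theorem 6.30, which for a \((1,1)\)-bilabelled graph reduces to commuting with every automorphism's permutation matrix.

```python
import numpy as np
from itertools import permutations

# Lemma 6.25: X^G_A @ X^G_A = X^G_{A*A}, where A*A is the path on three
# vertices with its two endpoints as the input/output labels.
X_AA = hom_matrix(H_edges=[(0, 1), (1, 2)], H_num_vertices=3,
                  k_labels=[0], l_labels=[2], G_adj=C4)
assert (X_A @ X_A == X_AA).all()

# Theorem 6.30 (one direction): X^G_H is Aut(G)-equivariant.  Brute-force
# Aut(C_4) and check P X = X P for every automorphism's permutation matrix P.
auts = [s for s in permutations(range(4))
        if all(C4[s[u], s[v]] == C4[u, v] for u in range(4) for v in range(4))]
for s in auts:
    P = np.eye(4, dtype=int)[list(s)]   # permutation matrix of the automorphism s
    assert (P @ X_A == X_A @ P).all()
```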
Theorem 6.30 states that, by considering all isomorphism classes of \((k,l)\)-bilabelled graphs, we can find a spanning set for \(\operatorname{Hom}_{\operatorname{Aut}(G)}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{\otimes l})\). However, by applying the following result, we can reduce the number of elements that appear in the spanning set by reducing the number of isomorphism classes that we need to consider.

**Proposition 6.33**.: _Let \(\mathbf{H}\) be a \((k,l)\)-bilabelled graph diagram whose underlying graph \(H\) contains a subset of free vertices that, whilst they may have edges amongst themselves, are entirely disconnected from any vertices that are in a connected component containing a non-free vertex._

_Then \(X^{G}_{\mathbf{H}}\) is a scalar multiple of \(X^{G}_{\mathbf{H}^{\prime}}\), where \(\mathbf{H}^{\prime}\) is the \((k,l)\)-bilabelled graph diagram obtained from \(\mathbf{H}\) having the subset removed._

Proof.: See the Technical Appendix.

We also have the following useful result:

**Proposition 6.34** (Frobenius Duality).: _Suppose that we use Theorem 6.30 to obtain a spanning set for \(\operatorname{Hom}_{\operatorname{Aut}(G)}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{\otimes l})\). Then we can immediately obtain a spanning set for \(\operatorname{Hom}_{\operatorname{Aut}(G)}((\mathbb{R}^{n})^{\otimes q},(\mathbb{R}^{n})^{\otimes m})\), for any \(q,m\geq 0\) such that \(q+m=k+l\)._

Proof.: See the Technical Appendix.

_Remark 6.35_.: In the Technical Appendix, we show that we can recover the diagram basis for \(\operatorname{Hom}_{S_{n}}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{\otimes l})\) by looking at homomorphism matrices for the complement of the complete graph on \(n\) vertices, \(\overline{K_{n}}\). This result immediately implies that, for a graph \(G\) having \(n\) vertices, the spanning set of \(\operatorname{Hom}_{\operatorname{Aut}(G)}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{\otimes l})\), given in (22), automatically includes the image, under \(\mathcal{F}^{G}\), of all set partitions of \([l+k]\) having at most \(n\) blocks in their equivalent \((k,l)\)-bilabelled graph diagram form, since \(\operatorname{Aut}(G)\) is a subgroup of \(S_{n}\).

_Remark 6.36_.: We have provided a number of examples of how to calculate a spanning set of \(\operatorname{Hom}_{\operatorname{Aut}(G)}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{\otimes l})\) for different graphs \(G\) and for low order tensor powers of \(\mathbb{R}^{n}\) in the Technical Appendix.

_Remark 6.37_.: We can adapt the characterisation to the case where the feature dimension of the layers is greater than one, and we can include bias terms in the layer functions themselves such that the equivariance property is maintained. See the Technical Appendix for more details.

_Remark 6.38_.: We appreciate that there will be some technical challenges when implementing the neural networks that we have characterised given the current state of computer hardware. We discuss this in more detail in the Technical Appendix.

## 7 Related Work

As stated in the Introduction, our work has been heavily influenced by the paper of Mancinska and Roberson (2020), who proved that two graphs are quantum isomorphic if and only if they have the same number of graph homomorphisms from any planar graph. They themselves built upon the work of Chassaniol (2019), who, by studying the quantum automorphism group of a graph, showed that a vertex-transitive graph has no quantum symmetries.
These results are quantum adaptations of the work of Lovasz (1967), who showed that two graphs are isomorphic if and only if they have the same number of graph homomorphisms from any graph.

The work by Zaheer et al. (2017), who constructed the first permutation equivariant neural network for learning from sets, instigated the search to find and classify the linear layer functions appearing in other group equivariant neural networks. Maron et al. (2019) classified the linear layer functions in permutation equivariant neural networks where the layers are some tensor power of \(\mathbb{R}^{n}\) by finding the orbit basis. Godfrey et al. (2023) determined that using the diagram basis instead is more beneficial for performing computations with the linear layer functions in such networks. Finzi et al. (2021) developed a numerical algorithm that enabled them to find a basis for the equivariant linear layer functions for the orthogonal group \(O(n)\), the symplectic group \(Sp(n)\), and the special orthogonal group \(SO(n)\), for small values of \(n\) and for low order tensor power spaces of \(\mathbb{R}^{n}\). However, the approach taken in this paper to characterise all of the learnable, linear, equivariant layer functions \(\operatorname{Hom}_{\operatorname{Aut}(G)}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{\otimes l})\) is closest in spirit to the one seen in the papers written by Pearce-Crump (2022, 2023a, 2023b). They used various sets of set partition diagrams to characterise all of the learnable, linear, equivariant layer functions in \(\operatorname{Hom}_{G}((\mathbb{R}^{n})^{\otimes k},(\mathbb{R}^{n})^{\otimes l})\) when \(G\) is any of the following groups: the symmetric group \(S_{n}\), the alternating group \(A_{n}\), the orthogonal group \(O(n)\), the symplectic group \(Sp(n)\), and the special orthogonal group \(SO(n)\).

## 8 Conclusion

We are the first to show how the combinatorics underlying bilabelled graphs provides the theoretical background for constructing neural networks that are equivariant to the automorphism group of a graph having \(n\) vertices when the layers are some tensor power of \(\mathbb{R}^{n}\). We found the form of the learnable, linear, \(\operatorname{Aut}(G)\)-equivariant layer functions between such tensor power spaces in the standard basis of \(\mathbb{R}^{n}\) by finding a spanning set for the \(\operatorname{Hom}\)-spaces in which these layer functions live. However, given that the number of isomorphism classes of \((k,l)\)-bilabelled graphs increases exponentially, both as the number of vertices in the graph \(G\) increases and as \(k\) and \(l\) increase, resulting in the number of spanning set elements increasing exponentially, it would be useful to find further ways of reducing the number of isomorphism classes of \((k,l)\)-bilabelled graphs that we need to consider. We leave this to future work.

## Acknowledgements

The author would like to thank his PhD supervisor Professor William J. Knottenbelt for being generous with his time throughout the author's period of research prior to the publication of this paper. This work was funded by the Doctoral Scholarship for Applied Research which was awarded to the author under Imperial College London's Department of Computing Applied Research scheme. This work will form part of the author's PhD thesis at Imperial College London.
2302.05601
Pruning Deep Neural Networks from a Sparsity Perspective
In recent years, deep network pruning has attracted significant attention in order to enable the rapid deployment of AI into small devices with computation and memory constraints. Pruning is often achieved by dropping redundant weights, neurons, or layers of a deep network while attempting to retain a comparable test performance. Many deep pruning algorithms have been proposed with impressive empirical success. However, existing approaches lack a quantifiable measure to estimate the compressibility of a sub-network during each pruning iteration and thus may under-prune or over-prune the model. In this work, we propose PQ Index (PQI) to measure the potential compressibility of deep neural networks and use this to develop a Sparsity-informed Adaptive Pruning (SAP) algorithm. Our extensive experiments corroborate the hypothesis that for a generic pruning procedure, PQI decreases first when a large model is being effectively regularized and then increases when its compressibility reaches a limit that appears to correspond to the beginning of underfitting. Subsequently, PQI decreases again when the model collapse and significant deterioration in the performance of the model start to occur. Additionally, our experiments demonstrate that the proposed adaptive pruning algorithm with proper choice of hyper-parameters is superior to the iterative pruning algorithms such as the lottery ticket-based pruning methods, in terms of both compression efficiency and robustness.
Enmao Diao, Ganghua Wang, Jiawei Zhan, Yuhong Yang, Jie Ding, Vahid Tarokh
2023-02-11T04:52:20Z
http://arxiv.org/abs/2302.05601v3
# Pruning Deep Neural Networks from a Sparsity Perspective

###### Abstract

In recent years, deep network pruning has attracted significant attention in order to enable the rapid deployment of AI into small devices with computation and memory constraints. Pruning is often achieved by dropping redundant weights, neurons, or layers of a deep network while attempting to retain a comparable test performance. Many deep pruning algorithms have been proposed with impressive empirical success. However, existing approaches lack a quantifiable measure to estimate the compressibility of a sub-network during each pruning iteration and thus may under-prune or over-prune the model. In this work, we propose PQ Index (PQI) to measure the potential compressibility of deep neural networks and use this to develop a Sparsity-informed Adaptive Pruning (SAP) algorithm. Our extensive experiments corroborate the hypothesis that for a generic pruning procedure, PQI decreases first when a large model is being effectively regularized and then increases when its compressibility reaches a limit that appears to correspond to the beginning of underfitting. Subsequently, PQI decreases again when the model collapse and significant deterioration in the performance of the model start to occur. Additionally, our experiments demonstrate that the proposed adaptive pruning algorithm with proper choice of hyper-parameters is superior to the iterative pruning algorithms such as the lottery ticket-based pruning methods, in terms of both compression efficiency and robustness. Our code is available here.

## 1 Introduction

Over-parameterized deep neural networks have been applied with enormous success in a variety of fields, including computer vision (Krizhevsky et al., 2012; He et al., 2016; Redmon et al., 2016), natural language processing (Devlin et al., 2018; Radford et al., 2018), audio signal processing (Oord et al., 2016; Schneider et al., 2019; Wang et al., 2020), and distributed learning (Konecny et al., 2016; Ding et al., 2022; Diao et al., 2022). These deep neural networks have significantly expanded in size. For example, LeNet-5 (LeCun et al., 1998) (1998; image classification) has 60 thousand parameters whereas GPT-3 (Brown et al., 2020) (2020; language modeling) has 175 billion parameters. This rapid growth in size has necessitated the deployment of a vast amount of computation, storage, and energy resources. Due to hardware constraints, these enormous model sizes may be a barrier to deployment in some edge devices such as mobile phones and virtual assistants. This has greatly increased interest in deep neural network compression/pruning. To this end, various researchers have developed empirical methods of building much simpler networks with similar performance based on pre-trained networks (Han et al., 2015; Frankle and Carbin, 2018). For example, Han et al. (2015) demonstrated that AlexNet (Krizhevsky et al., 2017) could be compressed to retain only \(3\%\) of the original parameters on the ImageNet dataset without impacting classification accuracy. An important topic of interest is the determination of limits of network pruning. An overly pruned model may not have enough expressivity for the underlying task, which may lead to significant performance deterioration (Ding et al., 2018). Existing methods generally monitor the prediction performance on a validation dataset and terminate pruning when the performance falls below a pre-specified threshold.
Nevertheless, a quantifiable measure for estimating the compressibility of a sub-network during each pruning iteration is desired. Such quantification of compressibility can lead to the discovery of the most parsimonious sub-networks without performance degradation. In this work, we connect the compressibility and performance of a neural network to its sparsity. In a highly over-parameterized network, one popular assumption is that the relatively small weights are considered redundant or non-influential and may be pruned without impacting the performance. Let us consider the sparsity of a non-negative vector \(w=[w_{1},\ldots,w_{d}]\), since sparsity is related only to the magnitudes of entries. Suppose \(S(w)\) is a sparsity measure, and a larger value indicates higher sparsity. Hurley and Rickard (2009) summarize six properties that an ideal sparsity measure should have, originally proposed in economics (Dalton, 1920; Rickard and Fallon, 2004). They are

* (D1) Robin Hood. For any \(w_{i}>w_{j}\) and \(\alpha\in(0,(w_{i}-w_{j})/2)\), we have \(S([w_{1},\ldots,w_{i}-\alpha,\ldots,w_{j}+\alpha,\ldots,w_{d}])<S(w)\).
* (D2) Scaling. \(S(\alpha w)=S(w)\) for any \(\alpha>0\).
* (D3) Rising Tide. \(S(w+\alpha)<S(w)\) for any \(\alpha>0\) and \(w_{i}\) not all the same.
* (D4) Cloning. \(S(w)=S([w,w])\).
* (P1) Bill Gates. For any \(i=1,\ldots,d\), there exists \(\beta_{i}>0\) such that for any \(\alpha>0\) we have \[S([w_{1},\ldots,w_{i}+\beta_{i}+\alpha,\ldots,w_{d}])>S([w_{1},\ldots,w_{i}+\beta_{i},\ldots,w_{d}]).\]
* (P2) Babies. \(S([w_{1},\ldots,w_{d},0])>S(w)\) for any non-zero \(w\).

Hurley and Rickard (2009) point out that only the Gini index satisfies all six criteria among a comprehensive list of sparsity measures. In this work, we propose a measure of sparsity named PQ Index (PQI). To the best of our knowledge, PQI is the first measure related to the norm of a vector that satisfies all the six properties above. Therefore, PQI is an ideal indicator of vector sparsity and is of its own interest. We suggest using PQI to infer the compressibility of neural networks. Furthermore, we discover the relationship between the performance and sparsity of iteratively pruned models as illustrated in Figure 1. Our hypothesis is that _for a generic pruning procedure, the sparsity will first decrease when a large model is being effectively regularized, then increase when its compressibility reaches a limit that corresponds to the start of underfitting, and finally decrease when the model collapse occurs, i.e., the model performance significantly deteriorates._ Our intuition is that the pruning will first remove redundant parameters. As a result, the sparsity of model parameters will decrease and the performance may be improved due to regularization. When the model is further compressed, part of the model parameters will become smaller as the model converges. Thus, the sparsity will increase and the performance will moderately decrease. Finally, when the model collapse starts to occur, all attenuated parameters are removed and the remaining parameters become crucial for maintaining the performance. Therefore, the sparsity will decrease and performance will significantly deteriorate. Our extensive experiments on pruning algorithms corroborate the hypothesis. Consequently, PQI can infer whether a model is inherently compressible.

Figure 1: An illustration of our hypothesis on the relationship between sparsity and compressibility of neural networks. The width of connections denotes the magnitude of model parameters.
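To make the properties above tangible, here is a small NumPy sketch (ours, not from the paper) of the Gini index in the Hurley and Rickard (2009) form, together with a numeric check of the Robin Hood property (D1): moving mass from a larger entry to a smaller one makes the vector more equal, and hence less sparse.

```python
# A hedged sketch of the Gini-index sparsity measure; larger means sparser.
import numpy as np

def gini(w):
    w = np.sort(np.abs(np.asarray(w, dtype=float)))   # ascending magnitudes
    d = w.size
    ranks = np.arange(1, d + 1)
    return 1.0 - 2.0 * np.sum((w / w.sum()) * (d - ranks + 0.5) / d)

w = np.array([5.0, 1.0, 3.0, 0.5])
w_robinhood = np.array([4.0, 2.0, 3.0, 0.5])   # alpha = 1 moved from w_1 to w_2
assert gini(w_robinhood) < gini(w)             # property (D1): sparsity decreases
```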
Motivated by this discovery, we also propose the Sparsity-informed Adaptive Pruning (SAP) algorithm, which can compress more efficiently and robustly compared with iterative pruning algorithms such as the lottery ticket-based pruning methods. Overall, our work presents a new understanding of the inherent structures of deep neural networks for model compression. Our main contributions are summarized below.

1. We propose a new notion of sparsity for vectors named PQ Index (PQI), with a larger value indicating higher sparsity. We prove that PQI meets all six properties proposed by (Dalton, 1920; Rickard & Fallon, 2004), which capture the principles a sparsity measure should obey. Among 15 commonly used sparsity measures, the only other measure satisfying all properties is Gini Index (Hurley & Rickard, 2009). Thus, norm-based PQI is an ideal sparsity/equity measure and may be of independent interest to many areas, e.g., signal processing and economics.
2. We develop a new perspective on the compressibility of neural networks. In particular, we measure the sparsity of pruned models by PQI and postulate the above hypothesis on the relationship between sparsity and compressibility of neural networks.
3. Motivated by our proposed PQI and hypothesis, we further develop a Sparsity-informed Adaptive Pruning (SAP) algorithm that uses PQI to choose the pruning ratio adaptively. In particular, the pruning ratio at each iteration is decided based on a PQI-related inequality. In contrast, Gini Index does not have such implications for the pruning ratio.
4. We conduct extensive experiments to measure the sparsity of pruned models and corroborate our hypothesis. Our experimental results also demonstrate that SAP with proper choice of hyper-parameters can compress more efficiently and robustly compared with iterative pruning algorithms such as the lottery ticket-based pruning methods.

## 2 Related Work

**Model compression** The goal of model compression is to find a smaller model that has comparable performance to the original model. A smaller model saves storage and computation resources, boosts training, and facilitates the deployment of the model to devices with limited capacities, such as mobile phones and virtual assistants. Therefore, model compression is vital for deploying deep neural networks with millions or even billions of parameters. Various model compression methods have been proposed for neural networks. Among them, pruning is one of the most popular and effective approaches (LeCun et al., 1989; Hagiwara, 1993; Han et al., 2015; Hu et al., 2016; Luo et al., 2017; Frankle & Carbin, 2018; Lee et al., 2018; He et al., 2017). The idea of pruning is the sparsity assumption that many redundant or non-influential neuron connections exist in an over-parameterized model. Thus, we can remove those connections (e.g., weights, neurons, or neuron-like structures such as layers) without sacrificing much test accuracy. Two critical components of pruning algorithms are a pruning criterion that decides which connections to prune and a stop criterion that determines when to stop pruning and thus prevent underfitting and model collapse. There are many pruning criteria motivated by different interpretations of redundancy. A widely-used criterion removes the parameters with the smallest magnitudes, assuming that they are less important (Hagiwara, 1993; Han et al., 2015).
Besides magnitude-based pruning, one may prune the parameters based on their sensitivity or contribution to the network output (LeCun et al., 1989; Lee et al., 2018; Hu et al., 2016; Soltani et al., 2021) or restrict different model components to share a large proportion of neural weights (Diao et al., 2019, 2021). As for the stop criterion, the common choice is validation: to stop pruning once the test accuracy on a validation dataset falls below a given threshold. While pruning is a post-processing method that requires a pre-trained model, there are also pre-processing and in-processing methods based on the sparsity assumption. For example, one can add explicit sparse constraints on the network, such as forcing the parameters to have a low-rank structure and sharing weights. Alternatively, one can implicitly force the trained model to be sparse, such as adding a sparsity penalty (e.g., \(\ell_{1}\)-norm) to the parameters. In contrast to those sparsity-based compression methods, which find a sub-network of the original one, researchers have also proposed compressing the model by finding a smaller model with a different architecture. The efforts include knowledge distillation (Hinton et al., 2015) and architecture search (Mushtaq et al., 2021). We refer the reader to (Hoefler et al., 2021) for a comprehensive survey of model compression.

**Theory of model compression** In practice, the compressibility of a model depends on the network architecture and learning task. Pruning usually involves a lot of ad hoc hyper-parameter fine-tuning. Thus, an understanding of model compressibility is urgently needed. Several recent works show the existence of, or find, a sub-network with guaranteed performance. Arora et al. (2018) show that a model is more compressible if it is more stable to noisy inputs, and provide a generalization error bound for the pruned model. Yang et al. (2022) propose a backward pruning algorithm inspired by approximating functions using \(\ell_{q}\)-norm (Wang et al., 2014), and quantify its generalization error. Baykal et al. (2018); Mussay et al. (2019) utilize the concept of coreset to prove the existence of a pruned network with similar performance. The main idea is to sample the parameters based on their importance, so that the selected parameters preserve the output of the original network. Ye et al. (2020) propose a greedy selection algorithm to reconstruct a network and bound the generalization error for two-layer neural networks. Our work develops a new perspective on the compressibility of neural networks by directly measuring the sparsity of pruned models, revealing the relationship between sparsity and compressibility.

**Sparsity measure** Sparsity is a crucial concept in many fundamental fields such as statistics and signal processing (Tibshirani, 1996; Donoho, 2006; Akcakaya & Tarokh, 2008). Intuitively, sparsity means that most of the energy is concentrated in a few elements. For example, a widely-used assumption in high-dimensional machine learning is that the model has an underlying sparse representation. Various sparsity measures have been proposed in the literature from different angles. One kind of sparsity measure originates from sociology and economics. For example, the well-known Gini Index (Gini, 1912) can measure the inequality in a population's wealth or welfare distribution. A highly wealth-concentrated population forms a sparse vector if the vector consists of the wealth of each person.
In addition, to measure the diversity in a group, entropy-based measures like Shannon entropy and Gaussian entropy are often used (Jost, 2006). Another kind of sparsity measure has been studied in mathematics and engineering for a long time. A classic measure is the hard sparsity, also known as the \(\ell_{0}\)-norm, which is the number of non-zero elements in \(w\). A small hard sparsity implies that only a few vector elements are active or effective. However, a slight change in a zero-valued element may cause a significant increase in the hard sparsity, which can be undesirable. Thus, its relaxations such as the \(\ell_{p}\)-norm (\(0<p\leq 1\)) are also widely used. For example, \(\ell_{1}\)-norm-based constraints or penalties are used for function approximation (Barron, 1993), model regularization and variable selection (Tibshirani, 1996; Chen et al., 2001). Our work proposes the first measure of sparsity related to vector norms that satisfies all the properties shared by the Gini Index (Hurley & Rickard, 2009), and an adaptive pruning algorithm based on our proposed measure of sparsity.

## 3 Pruning with PQ Index

### PQ Index

We will prove that all six properties (D1)-(D4) and (P1), (P2), which are mentioned in the introduction, hold for our proposed PQ Index (PQI).

**Definition 1** (PQ Index).: _For any \(0<p<q\), the PQ Index of a non-zero vector \(w\in\mathbb{R}^{d}\) is_
\[\mathrm{I}_{p,q}(w)=1-d^{\frac{1}{q}-\frac{1}{p}}\frac{\|w\|_{p}}{\|w\|_{q}}, \tag{1}\]
_where \(\|w\|_{p}=(\sum_{i=1}^{d}|w_{i}|^{p})^{1/p}\) is the \(\ell_{p}\)-norm of \(w\) for any \(p>0\). For simplicity, we will use \(\mathrm{I}(w)\) and drop the dependency on \(p\) and \(q\) when the context is clear._

**Theorem 1**.: _We have \(0\leq\mathrm{I}_{p,q}(w)\leq 1-d^{\frac{1}{q}-\frac{1}{p}}\), and a larger \(\mathrm{I}_{p,q}(w)\) indicates a sparser vector. Furthermore, \(\mathrm{I}_{p,q}(w)\) satisfies all the six properties (D1)-(D4) and (P1), (P2)._

**Remark 1** (Sanity check).: _For the densest or most equal situation, we have \(w_{i}=c\) for \(i=1,\ldots,d\), where \(c\) is a non-zero constant. It can be verified that \(\mathrm{I}_{p,q}(w)=0\). In contrast, the sparsest or most unequal case is that the \(w_{i}\)'s are all zeros except one of them, and the corresponding \(\mathrm{I}_{p,q}(w)=1-d^{\frac{1}{q}-\frac{1}{p}}\). Note that \(\mathrm{I}(w)\) for an all-zero vector is not defined. From the perspective of the number of important elements, an all-zero vector is sparse; however, it is dense from the aspect of energy distribution._

**Remark 2** (Insights).: _The form of \(\text{I}_{p,q}\) is not a random thought but inherently driven by properties (D1)-(D4). Why do we need the ratio of two norms? It is essentially decided by the requirement of (D2) Scaling. If \(S(w)\) involves only a single norm, then \(S(w)\) is not scale-invariant. However, since the \(\ell_{r}\)-norm is homogeneous for all \(r>0\), the ratio of two norms is inherently scale-invariant. Why is there an additional scaling constant \(d^{\frac{1}{q}-\frac{1}{p}}\)? This is necessary to satisfy (D4) Cloning. Inspired by the well-known Root Mean Squared Error (RMSE), we found that this scaling constant is the correct term to make \(\text{I}_{p,q}\) independent of the vector length. This is especially appealing for comparing the sparsity of neural networks with different numbers of model parameters. Why do we require \(p<q\)? We find it plays a central role in meeting (D1) and (D3). The insight is that \(\|w\|_{p}\) decreases faster than \(\|w\|_{q}\) when a vector becomes sparser, thus guaranteeing a larger PQ Index._
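A minimal NumPy implementation of Definition 1, with a numeric check of the two extreme cases of Remark 1 (our sketch; the default \((p,q)=(0.5,1.0)\) matches the choice used later in the experiments):

```python
import numpy as np

def pq_index(w, p=0.5, q=1.0):
    """PQ Index of Definition 1; larger means sparser."""
    w = np.abs(np.asarray(w, dtype=float))
    d = w.size
    norm_p = (w ** p).sum() ** (1.0 / p)
    norm_q = (w ** q).sum() ** (1.0 / q)
    return 1.0 - d ** (1.0 / q - 1.0 / p) * norm_p / norm_q

d = 8
dense = np.full(d, 3.0)                   # all entries equal: densest case, I = 0
sparse = np.zeros(d); sparse[0] = 3.0     # single non-zero: sparsest case
assert np.isclose(pq_index(dense), 0.0)
assert np.isclose(pq_index(sparse), 1.0 - d ** (1.0 / 1.0 - 1.0 / 0.5))
```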
**Theorem 2** (PQI-bound on pruning).: _Let \(M_{r}\) denote the set of \(r\) indices of \(w\) with the largest magnitudes, and \(\eta_{r}\) be the smallest value such that \(\sum_{i\notin M_{r}}|w_{i}|^{p}\leq\eta_{r}\sum_{i\in M_{r}}|w_{i}|^{p}\). Then, we have_
\[r\geq d(1+\eta_{r})^{-q/(q-p)}[1-\text{I}(w)]^{\frac{q\cdot p}{q-p}}. \tag{2}\]

**Remark 3**.: _The PQI-bound is inspired by Yang et al. (2022), who proposed to use \(\|w\|_{1}/\|w\|_{q},q\in(0,1)\) as a measure of sparsity. We use similar techniques to derive the bound based on our proposed PQ Index. It is worth mentioning that \(\|w\|_{1}/\|w\|_{q}\) does not satisfy properties (D2), (D4), (P1), and (P2), which an ideal sparsity measure should have (Hurley & Rickard, 2009). Consequently, we cannot use it to compare the sparsity of models of different sizes. The merit of the PQI-bound is that it applies to iterative pruning of models that involve different numbers of model parameters._

Recall that the pruning is based on the assumption that parameters with small magnitudes are removable. Therefore, suppose we know \(\text{I}(w)\) and \(\eta_{r}\); then we immediately have a lower bound for the retaining ratio of the pruning from Theorem 2. Thus, we can adaptively choose the pruning ratio based on \(\text{I}(w)\) and \(\eta_{r}\), which inspires our Sparsity-informed Adaptive Pruning (SAP) algorithm in the following subsection. In practice, \(\eta_{r}\) is unavailable before we decide the pruning ratio. Therefore, we treat it as a hyper-parameter in our experiments. Since \(\eta_{r}\) is non-increasing with respect to \(r\), a larger \(\eta_{r}\) means that we assume the model is more compressible, and it leads to a higher pruning ratio. Experiments show that it is safe to choose \(\eta_{r}=0\).

### Sparsity-informed Adaptive Pruning

In this section, we introduce the Sparsity-informed Adaptive Pruning (SAP) algorithm as illustrated in Algorithm 1. Our algorithm is based on the well-known lottery ticket pruning method (Frankle and Carbin, 2018). The lottery ticket pruning algorithm proposes to prune and retrain the model iteratively. Compared with the one-shot pruning algorithm, which does not retrain the model at each pruning iteration, the lottery ticket pruning algorithm produces pruned models of better performance with the same percentage of remaining model parameters. However, both methods use a fixed pruning ratio \(P\) at each pruning iteration. As a result, they may under-prune or over-prune the model at earlier or later pruning iterations, respectively. The under-pruned models with spare compressibility require more pruning iterations and computation resources to obtain the smallest neural networks with satisfactory performance. The over-pruned model suffers from underfitting due to insufficient compressibility to maintain the desired performance. Therefore, we propose SAP to adaptively determine the number of pruned parameters at each iteration based on the PQI-bound derived in Formula 2. Furthermore, we introduce two additional hyper-parameters, the scaling factor \(\gamma\) and the maximum pruning ratio \(\beta\), to make our algorithm more flexible and applicable. Next, we walk through our SAP algorithm. Before the pruning starts, we randomly generate model parameters \(w_{\text{init}}\). We will use \(w_{\text{init}}\) to initialize the retained model parameters at each pruning iteration.
The lottery ticket pruning algorithm shows that the performance of retraining the subnetworks from \(w_{\text{init}}\) is better than retraining from scratch. Then, we initialize the mask \(m_{0}\) with all ones. Suppose we have \(T\) pruning iterations. For each pruning iteration \(t=0,1,2,\ldots T\), we first initialize the model parameters \(\tilde{w}_{t}\) and compute the number of model parameters \(d_{t}\) as follows
\[\tilde{w}_{t}=w_{\text{init}}\odot m_{t},\quad d_{t}=|m_{t}|, \tag{3}\]
where \(\odot\) is the Hadamard product. After training the model parameters \(\tilde{w}_{t}\) with \(m_{t}\) by freezing the gradient for \(E\) epochs, we arrive at trained model parameters \(w_{t}\). Up to this point, our algorithm has no difference from the classical lottery ticket pruning algorithm. The lottery ticket pruning algorithm will then prune \(d\cdot P\) parameters from \(m_{t}\) according to the magnitude of \(w_{t}\) and finally create the new mask \(m_{t+1}\). After arriving at \(w_{t}\), our proposed SAP will compute the PQ Index, denoted by \(\text{I}(w_{t})\), and the lower bound of the number of retained model parameters, denoted by \(r_{t}\), as follows
\[\text{I}(w_{t})=1-d_{t}^{\frac{1}{q}-\frac{1}{p}}\frac{\|w_{t}\|_{p}}{\|w_{t}\|_{q}},\qquad r_{t}=d_{t}(1+\eta_{r})^{-q/(q-p)}[1-\text{I}(w_{t})]^{\frac{q\cdot p}{q-p}}. \tag{4}\]
Then, we compute the number of pruned model parameters \(c_{t}=\lfloor d_{t}\cdot\min(\gamma(1-\frac{r_{t}}{d_{t}}),\beta)\rfloor\). Here, we introduce \(\gamma\) to accelerate or decelerate the pruning at initial pruning iterations. Specifically, \(\gamma>1\) or \(<1\) encourages the pruning ratio to be larger or smaller than the pruning ratio derived from the PQI-bound, respectively. As the retraining of pruned models is time-consuming, it is appealing to efficiently obtain the smallest pruned model with satisfactory performance within a small number of pruning iterations. In addition, we introduce the maximum pruning ratio \(\beta\) to avoid excessive pruning, because with \(\gamma>1\) we will compress more than the PQI-bound suggests and may completely prune all model parameters. If we set \(\gamma=1\), \(\beta\) can be safely omitted. In our experiments, we set \(\beta=0.9\) only to provide minimum protection from excessive pruning at each pruning iteration. Finally, we prune the \(c_{t}\) model parameters with the smallest magnitudes based on \(w_{t}\) and \(m_{t}\), and create the new mask \(m_{t+1}\) for the next pruning iteration.
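The per-iteration logic just described condenses to a few lines. The following sketch (ours, not the authors' released code) implements the PQ Index and \(r_{t}\) computation of Equation (4), the pruned-parameter count \(c_{t}\), and the magnitude-based mask update for a flattened parameter vector; it assumes the `pq_index` helper from the earlier snippet.

```python
import numpy as np

def sap_prune_step(w_t, m_t, p=0.5, q=1.0, eta_r=0.0, gamma=1.0, beta=0.9):
    """One SAP pruning step: given trained weights w_t and binary mask m_t,
    return the mask m_{t+1} for the next pruning iteration."""
    active = np.flatnonzero(m_t)
    d_t = active.size
    I_t = pq_index(w_t[active], p, q)      # PQ Index of the retained weights
    r_t = d_t * (1 + eta_r) ** (-q / (q - p)) * (1 - I_t) ** (q * p / (q - p))
    c_t = int(np.floor(d_t * min(gamma * (1 - r_t / d_t), beta)))
    # Prune the c_t retained parameters with the smallest magnitudes.
    order = active[np.argsort(np.abs(w_t[active]))]
    m_next = m_t.copy()
    m_next[order[:c_t]] = 0
    return m_next

# Toy usage on a random weight vector:
rng = np.random.default_rng(0)
w = rng.normal(size=100)
m = sap_prune_step(w, np.ones(100, dtype=int))
print("retained:", m.sum(), "of", m.size)
```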
## 4 Experimental Studies

### Experimental Setup

We conduct experiments with FashionMNIST (Xiao et al., 2017), CIFAR10, CIFAR100 (Krizhevsky et al., 2009), and TinyImageNet (Le and Yang, 2015) datasets. Our backbone models are Linear, Multi-Layer Perceptron (MLP), Convolutional Neural Network (CNN), ResNet18, ResNet50 (He et al., 2016), and Wide ResNet28x8 (WResNet28x8) (Zagoruyko and Komodakis, 2016). We run experiments for \(T=30\) pruning iterations with Linear, MLP, and CNN, and \(T=15\) pruning iterations with ResNet18. We compare the proposed SAP with two baselines, including 'One Shot' and 'Lottery Ticket' (Frankle and Carbin, 2018) pruning algorithms. The difference between the 'One Shot' and 'Lottery Ticket' pruning algorithms is that 'One Shot' prunes \(d\cdot P\) model parameters at each pruning iteration from \(w_{0}\) instead of \(w_{t}\). We have \(P=0.2\) throughout our experiments. We compare the proposed PQ Index (\(p=0.5\), \(q=1.0\)) with the well-known Gini Index (Gini, 1912) to validate its effectiveness in evaluating sparsity. Furthermore, we perform pruning on various pruning scopes, including 'Neuron-wise Pruning,' 'Layer-wise Pruning,' and 'Global Pruning.' In particular, 'Global Pruning' gathers all model parameters into a vector for pruning, while 'Neuron-wise Pruning' and 'Layer-wise Pruning' prune each neuron and layer of model parameters separately. The number of neurons at each layer is equal to the output size of that layer. For example, the 'One Shot' and 'Lottery Ticket' methods with 'Neuron-wise Pruning' prune \(d_{i}\cdot P\) model parameters of each neuron, where \(d_{i}\) refers to the size of each neuron. Similarly, SAP computes the PQ Index and the number of pruned model parameters for each neuron \(i\). Details of the model architecture and learning hyper-parameters are included in the Appendix. We conducted four random experiments with different seeds, and the standard deviation is shown in the error bar of all figures. Further experimental results can be found in the Appendix. Results for (\(p=1.0\), \(q=2.0\)) are shown in Figure 3(a). The results show that an ideal pruning procedure should avoid a rapid increase in sparsity.

**Pruned models** We demonstrate the results of pruned models at each pruning iteration in Figures 2(b) and 3(b). In particular, we illustrate the performance, performance difference, PQ Index, and PQ Index difference at each pruning iteration. The performance and PQ Index are computed directly from the pruned models without retraining. The performance and PQ Index differences are computed between the retrained models (\(w_{0}\odot m_{t}\) for 'One Shot' and \(w_{t}\odot m_{t}\) for 'Lottery Ticket' and SAP) and the pruned models (\(w_{0}\odot m_{t+1}\) for 'One Shot' and \(w_{t}\odot m_{t+1}\) for 'Lottery Ticket' and SAP). The results of the performance difference show that the pruned models from SAP can, even without retraining, perform close to the retrained models. Furthermore, the sparsity of pruned models shows that iterative pruning generally decreases the sparsity of pruned models. Meanwhile, the sparsity difference of pruned models provides a sanity check by showing that pruning at each pruning iteration decreases the sparsity of retrained models.

**Pruning scopes** We demonstrate various pruning scopes regarding compression trade-off, layer-wise percent of remaining weights, and layer-wise PQ Index in Figure 4. The results of the compression trade-off show that SAP with 'Global Pruning' may perform worse than 'One Shot' and 'Lottery Ticket' when the percent of remaining weights is small. As illustrated in the 'Global Pruning' panel of Figure 4(b), the first layer has not been pruned enough. This is because SAP with 'Global Pruning' measures the sparsity of all model parameters in a single vector, and the magnitudes of the parameters of the first layer and other layers may not be at the same scale. As a result, the parameters of the first layer will not be pruned until late pruning iterations. However, SAP with 'Neuron-wise Pruning' and 'Layer-wise Pruning' performs better than 'One Shot' and 'Lottery Ticket'. SAP can adaptively adjust the pruning ratio of each neuron and layer, but 'One Shot' and 'Lottery Ticket' may over-prune specific neurons and layers because they adopt a fixed pruning ratio.
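For concreteness, a toy illustration (ours; it assumes `pq_index` from the earlier snippet) of how the sparsity measurement differs across pruning scopes for a single weight matrix, with rows playing the role of neurons:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 32))               # one layer; rows act as "neurons"
print("global PQI    :", round(pq_index(W.ravel()), 3))          # all parameters in one vector
print("per-neuron PQI:", [round(pq_index(row), 3) for row in W[:4]])
```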
Interestingly, the parameters of the first layer are pruned more aggressively by 'Neuron-wise Pruning' than by 'Global Pruning' in the early pruning iterations. However, they are not pruned by 'Neuron-wise Pruning' in the late pruning iterations, while 'Global Pruning' still prunes them aggressively. This aligns with the intuition that the initial layers of a CNN are more important for maintaining the performance; e.g., Gale et al. (2019) observed that the first layer was often more important to model quality and pruned it less than other layers. Furthermore, the PQ Index of 'Neuron-wise Pruning' is also more stable than that of the other two pruning scopes, which indicates that 'Neuron-wise Pruning' is more appropriate for SAP, as the PQ Index is computed more precisely.

Figure 4: Results of various pruning scopes regarding (a) compression trade-off, (b) layer-wise percent of remaining weights, and (c) layer-wise PQ Index for CIFAR10 and CNN. (b, c) are performed with SAP (\(p=0.5\), \(q=1.0\)).

**Ablation studies** We demonstrate ablation studies of \(p\) and \(q\) in Figure 5. In Figure 5(a), we fix \(q=1.0\) and study the effect of \(p\). The results show that SAP prunes more aggressively when \(q=1.0\) and \(p\) is close to \(q\). In Figure 5(b), we fix \(p=1.0\) and study the effect of \(q\). The results show that SAP prunes more aggressively when \(p=1.0\) and \(q\) is distant from \(p\). We demonstrate ablation studies of \(\eta_{r}\) and \(\gamma\) in Figure 6. In Figure 6(a), we fix \(\gamma=1.0\) and study the effect of \(\eta_{r}\). The results show that SAP prunes more aggressively when \(\eta_{r}>0\). In Figure 6(b), we fix \(\eta_{r}=0.0\) and study the effect of \(\gamma\). The results show that SAP prunes more aggressively when \(\gamma>1\). Interestingly, the performance of our results roughly follows a logistic decay model due to the adaptive pruning ratio, and the inflection point corresponds to the peak of the sparsity measure. Moreover, the dynamics of the sparsity measure of SAP with various ablation studies also corroborate our hypothesis.

Figure 5: Ablation studies of \(p\) and \(q\) for global pruning with CIFAR10 and CNN.

Figure 6: Ablation studies of \(\eta_{r}\) and \(\gamma\) for global pruning with CIFAR10 and CNN.

## 5 Conclusion

We proposed a new notion of sparsity for vectors named PQ Index (PQI), which follows the principles a sparsity measure should obey. We develop a new perspective on the compressibility of neural networks by measuring the sparsity of pruned models. We postulate a hypothesis on the relationship between the sparsity and compressibility of neural networks. Motivated by our proposed PQI and hypothesis, we further develop a Sparsity-informed Adaptive Pruning (SAP) algorithm that uses PQI to choose the pruning ratio adaptively. Our experimental results demonstrate that SAP can compress more efficiently and robustly than state-of-the-art algorithms.

## Acknowledgments

This work was supported in part by the Office of Naval Research (ONR) under grant number N00014-21-1-2590.
2305.09088
The Hessian perspective into the Nature of Convolutional Neural Networks
While Convolutional Neural Networks (CNNs) have long been investigated and applied, as well as theorized, we aim to provide a slightly different perspective into their nature -- through the perspective of their Hessian maps. The reason is that the loss Hessian captures the pairwise interaction of parameters and therefore forms a natural ground to probe how the architectural aspects of CNN get manifested in its structure and properties. We develop a framework relying on Toeplitz representation of CNNs, and then utilize it to reveal the Hessian structure and, in particular, its rank. We prove tight upper bounds (with linear activations), which closely follow the empirical trend of the Hessian rank and hold in practice in more general settings. Overall, our work generalizes and establishes the key insight that, even in CNNs, the Hessian rank grows as the square root of the number of parameters.
Sidak Pal Singh, Thomas Hofmann, Bernhard Schölkopf
2023-05-16T01:15:00Z
http://arxiv.org/abs/2305.09088v1
# The Hessian perspective into the Nature of Convolutional Neural Networks

###### Abstract

While Convolutional Neural Networks (CNNs) have long been investigated and applied, as well as theorized, we aim to provide a slightly different perspective into their nature -- through the perspective of their Hessian maps. The reason is that the loss Hessian captures the pairwise interaction of parameters and therefore forms a natural ground to probe how the architectural aspects of CNN get manifested in its structure and properties. We develop a framework relying on Toeplitz representation of CNNs, and then utilize it to reveal the Hessian structure and, in particular, its rank. We prove tight upper bounds (with linear activations), which closely follow the empirical trend of the Hessian rank and hold in practice in more general settings. Overall, our work generalizes and establishes the key insight that, even in CNNs, the Hessian rank grows as the square root of the number of parameters.

Machine Learning, Neural Networks, Hessian Analysis

## 1 Introduction

Nobody would deny that CNNs (Fukushima, 1980; LeCun et al., 1995; Krizhevsky et al., 2017), with their baked-in equivariance to translations, have played a key role in the practical successes of deep learning and computer vision. Yet several aspects of their nature are still unclear. Cutting directly to the chase, for instance, take a look at Figure 1. As the number of channels in the hidden layers increases, the number of parameters grows, as expected, quadratically. But, the rank of the loss Hessian at initialization, measured as precisely as it gets, grows at a much calmer, linear rate. How come? This question lies at the core of our work. Generally speaking, we would like to investigate the inherent 'nature' of CNNs, i.e., how its architectural characteristics manifest themselves in terms of a property or a phenomenon at hand -- here that of Figure 1, which, among other things, would precisely indicate the effective dimension of the (local) loss landscape (MacKay, 1992; Gur-Ari et al., 2018).

Of course, when the likes of Transformer (Vaswani et al., 2017; Dosovitskiy et al., 2020), MLPMixer (Tolstikhin et al., 2021) and their kind have ushered in a new wave of architecture design, especially when equipped with heaps of data, it is tempting to think that studying CNNs might not be as worthwhile an enterprise. That only time and tide can tell -- but it seems that, at least for now, the concepts and principles behind CNNs (such as patches, larger strides, weight sharing, downsampling) continue to be carried along while designing newer architectures in the 2020s (Liu et al., 2021, 2022). And, more broadly, if the perspectives and techniques that help us understand the nature of a given architecture are general enough, they may also be of relevance when considering another architecture of interest. We are inspired by one such account of an intriguing perspective: the extensive redundancy in the parameterization of fully-connected networks, as measured through the Hessian degeneracy (Sagun et al., 2016), and recently outlined rigorously in the work of (Singh et al., 2021). We aim to chart this out in detail for CNNs.

Figure 1: A comparison of the number of parameters with the empirically observed loss Hessian rank for increasing number of channels \(m\) for a \(2\)-hidden layer CNN on CIFAR10.
_Can we estimate the precise scaling behaviour of the Hessian rank?_

## 2 Related Work

**The Nature of CNNs.** To start with, there's the intuitive perspective of enabling a hierarchy of features (LeCun et al., 1995). A more mathematical take is that of (Bruna & Mallat, 2013), where filters are constructed as wavelets, which as a whole provide Lipschitz continuity to deformations. The approximation theory view (Poggio et al., 2015; Mao et al., 2021) reinforces the intuitive benefits of hierarchy and compositionality with explicit constructions. On the optimization side, the implicit bias perspective (Gunasekar et al., 2018) stresses how gradient descent on CNNs leads to a particular solution. Other notable takes are from the viewpoint of Gaussian processes (Garriga-Alonso et al., 2018), loss landscapes (Nguyen & Hein, 2018; Gu et al., 2020), arithmetic circuits (Cohen & Shashua, 2016) -- to list a few. In contrast, we focus on how, _from the initialization itself_, the CNN architecture induces structural properties of the loss Hessian and, by extension, of the loss landscape.

**Hessian maps and Deep Learning.** The Hessian characterizes parameter interactions via the second derivative of the loss. Consequently, it has been a central object of study and has been extensively utilized for applications and theory alike, both in the past as well as the present. For instance, generalization (MacKay, 1992; Keskar et al., 2016; Yang et al., 2019; Singh et al., 2022), optimization (Zhu et al., 1997; Setiono & Hui, 1995; Martens & Grosse, 2015; Cohen et al., 2021), network compression (LeCun et al., 1990; Hassibi & Stork, 1992; Singh & Alistarh, 2020), continual learning (Kirkpatrick et al., 2017), hyperparameter search (LeCun et al., 1992; Schaul et al., 2013), and more. In recent times, it has been revitalized in significant part due to the 'flatness hypothesis' (Keskar et al., 2016; Hochreiter & Schmidhuber, 1997) and, in turn, flatness has become a popular method to probe the extent of generalization as it seems to consistently rank ahead of traditional norm-based complexity measures (Jiang et al., 2019) in multiple scenarios. Given the increasing size of the networks, and the inherent limitation of the Hessian being quadratic in cost, measuring flatness has almost become synonymous with measuring the top eigenvalue of the Hessian (Cohen et al., 2021) or even just the zeroth-order measurement of the loss stability in the parameter space. To a lesser extent, some works still utilize efficient approximations to the Hessian trace (Yao et al., 2020) or log determinant (Jia & Su, 2020). But largely the reliance on the Hessian for neural networks has become a black-box affair.

**Understanding of the Neural Network Hessian maps.** Lately, significant advances have been made in this direction -- a few of the most prominent being: the characterization of its spectra as bulk and outliers (Sagun et al., 2016; Pennington & Bahri, 2017; Ghorbani et al., 2019), the empirically observed significant rank degeneracy (Sagun et al., 2017), and the class/cross-class structure (Papyan, 2020). Despite these advancements, its structure for neural networks primarily gets seen only up to the surface level of the chain rule¹ for a composition of functions, with a few exceptions such as (Wu et al., 2020; Singh et al., 2021).

Footnote 1: This is known otherwise as the Gauss-Newton decomposition (Schraudolph, 2002b) and discussed in Eqn. 1.
We take our main inspiration from the latter of these works, namely (Singh et al., 2021), where the authors precisely characterize the structure for deep fully-connected networks (FCNs), resulting in concise bounds and formulae on the Hessian rank of arbitrary sized networks. Our aim, thus, is to thoroughly exhibit the structure of the Hessian for deep convolutional networks and explore the distinctive facets that arise in CNNs -- as compared to FCNs.

**Our contributions:** (1) We develop a framework to analyze CNNs that rests on a Toeplitz representation of convolution² and applies for general deep, multi-channel, arbitrary-sized CNNs, while being amenable to matrix analysis. We then utilize this framework to unravel the Hessian structure for CNNs and provide upper bounds on the Hessian rank. Our bounds are exact for the case of 1-hidden layer CNNs, and in the general case, are of the order of square root of the trivial bounds.

Footnote 2: Convolution as used in practice in deep learning, and not, say, circular convolution -- despite its relative theoretical ease.

(2) Next, we verify our bounds empirically in a host of settings, where we find that our upper bounds remain rather close to the true empirically observed Hessian rank. Moreover, they even hold faithfully outside the confines of the theoretical setting (choice of loss and activation functions) used to derive them.

(3) Further, we make a detailed comparison of the key ingredients in CNNs, i.e., local connectivity and weight sharing, in a simplified setting through the perspective of our Hessian results. We also discuss some elements of our proof technique in the hope that it helps provide a better grasp of the results.

We would also like to make a quick remark about the _difference with regard to (Singh et al., 2021)_. While we borrow heavily from their approach, the framework we develop here provides us with the flexibility to handle convolutions, pooling operations, and even fully-connected layers. In particular, our analysis captures their results as a special case when the filter size is equal to the spatial dimension. We also introduce a novel proof technique relative to (Singh et al., 2021), without which one cannot attain exact bounds on the Hessian rank for the one-hidden layer case. Overall, we hope that by building on the prior work, we can further push this research direction of understanding the nature of various network architectures through an in-depth, white-box analysis of the Hessian structure.

## 3 Setup and Background

**Notation.** We denote vectors in lowercase bold (\(\mathbf{x}\)), matrices in uppercase bold (\(\mathbf{X}\)), and tensors in calligraphic letters (\(\mathcal{X}\)). We will denote the \(i\)-th row and \(j\)-th column of some matrix \(\mathbf{A}\) by \(\mathbf{A}_{i\,\bullet}\) and \(\mathbf{A}_{\bullet\,j}\) respectively. We will often use the notation \(\mathbf{A}^{(i:j)}\), for \(i>j\), to indicate a sequence of matrices from \(i\) down to \(j\), i.e., \(\mathbf{A}^{(i:j)}=\mathbf{A}^{(i)}\mathbf{A}^{(i-1)}\cdots\mathbf{A}^{(j+1)}\mathbf{A}^{(j)}\). For \(i<j\), the same notation³ would mean the sequence of matrices from \(i\) up to \(j\), but _transposed_. To express the structure of the gradients and the Hessian, we will employ matrix derivatives (Magnus and Neudecker, 2019), wherein we vectorize row-wise (\(\operatorname{vec}_{r}\)) the involved matrices and organize the derivative in Jacobian (numerator layout).
Concretely, for matrices \(\mathbf{X}\in\mathbb{R}^{m\times n}\) and \(\mathbf{Y}\in\mathbb{R}^{p\times q}\), we have
\[\frac{\partial\mathbf{Y}}{\partial\mathbf{X}}\coloneqq\frac{\partial\operatorname{vec}_{r}(\mathbf{Y})}{\partial\operatorname{vec}_{r}(\mathbf{X})^{\top}}\in\mathbb{R}^{pq\times mn}\,.\]

Footnote 3: When either of \(i\) or \(j\) is outside the bounds of a particular index set, this notation would devolve to an identity matrix.

**Setting.** Suppose we are given an i.i.d. dataset \(S=\{(\mathbf{x}_{1},\mathbf{y}_{1}),\ldots,(\mathbf{x}_{n},\mathbf{y}_{n})\}\), of size \(|S|=n\), drawn from an unknown distribution \(p_{\mathbb{X},\mathbb{Y}}\), consisting of inputs \(\mathbf{x}\in\mathbb{X}\subseteq\mathbb{R}^{d}\) and targets \(\mathbf{y}\in\mathbb{Y}\subseteq\mathbb{R}^{K}\). Based on this dataset \(S\), consider we use a neural network to learn the mapping from the inputs to the targets, \(\mathbf{F}_{\boldsymbol{\theta}}:\mathbb{X}\mapsto\mathbb{Y}\), parameterized by \(\boldsymbol{\theta}\in\boldsymbol{\Theta}\subseteq\mathbb{R}^{p}\). To this end, we follow the framework of Empirical Risk Minimization (Vapnik, 1991) and optimize a suitable loss function \(\operatorname{L}:\Theta\mapsto\mathbb{R}\). In other words, we solve the following optimization problem,
\[\boldsymbol{\theta}^{\star}=\operatorname*{argmin}_{\boldsymbol{\theta}\in\boldsymbol{\Theta}}\ \operatorname{L}(\boldsymbol{\theta})=\frac{1}{n}\sum_{i=1}^{n}\ell\left(\boldsymbol{\theta};(\mathbf{x}_{i},\mathbf{y}_{i})\right)\,,\]
say with a first-order method like (stochastic) gradient descent, and the choices for \(\ell\) could be mean-squared error (MSE), cross-entropy (CE), etc.

**Hessian.** We analyze the properties of the Hessian of the loss function, \(\mathbf{H}_{\operatorname{L}}=\frac{\partial^{2}\mathbf{L}(\boldsymbol{\theta})}{\partial\boldsymbol{\theta}\,\partial\boldsymbol{\theta}^{\top}}\), with respect to the parameters \(\boldsymbol{\theta}\). It is quite well known (Schraudolph, 2002; Sagun et al., 2017) that, via the chain rule, the Hessian can be decomposed as a sum of the following two matrices:
\[\mathbf{H}_{\operatorname{L}}=\mathbf{H}_{\operatorname{O}}+\mathbf{H}_{\operatorname{F}}=\frac{1}{n}\sum_{i=1}^{n}\nabla_{\boldsymbol{\theta}}\mathbf{F}_{\boldsymbol{\theta}}(\mathbf{x}_{i})\big{[}\nabla_{\mathbf{F}_{\boldsymbol{\theta}}}^{2}\,\ell_{i}\big{]}\,\nabla_{\boldsymbol{\theta}}\mathbf{F}_{\boldsymbol{\theta}}(\mathbf{x}_{i})^{\top}+\frac{1}{n}\sum_{i=1}^{n}\,\sum_{c=1}^{K}[\nabla_{\mathbf{F}_{\boldsymbol{\theta}}}\ell_{i}]_{c}\,\nabla_{\boldsymbol{\theta}}^{2}\,\mathbf{F}_{\boldsymbol{\theta}}^{c}(\mathbf{x}_{i}) \tag{1}\]
where \(\nabla_{\boldsymbol{\theta}}\mathbf{F}_{\boldsymbol{\theta}}(\mathbf{x}_{i})\in\mathbb{R}^{p\times K}\) is the Jacobian of the function and \(\nabla_{\mathbf{F}_{\boldsymbol{\theta}}}^{2}\,\ell_{i}\in\mathbb{R}^{K\times K}\) is the Hessian of the loss with respect to the network function, at the \(i\)-th sample. To facilitate comparison, we refer to these two matrices as the _outer-product Hessian_ and the _functional Hessian_ following (Singh et al., 2021).

### 3.1 Background

**Precise bounds on the Hessian rank.** Despite much interest in the Hessian over the decades, the upper bounds remained quite trivial (like a factor of the number of samples), and the empirically observed degeneracy (Sagun et al., 2016), because of the need to measure rather small eigenvalues and judge a suitable threshold, remained intractable to establish.
Exact upper bounds and formulae have only been made available very recently for fully-connected networks due to (Singh et al., 2021), as a result of a subtle choice of linear activations, together with a novel analysis technique adapted from matrix analysis (Marsaglia and Styan, 1974; Chuai and Tian, 2004). While they find that the presence of non-linearities like ReLU impedes a theoretical analysis, (Singh et al., 2021) thoroughly demonstrate that their bounds hold empirically for ReLU-based FCNs as well. Their empirical analysis is equally rigorous, for they compute exact Hessians (without any approximation) in Float64 precision. Overall, the surprising finding of (Singh et al., 2021) is that the Hessian rank for FCNs scales4 linearly in the total number of neurons \(m\), i.e., \(\mathcal{O}(m)\), while the number of parameters scales quadratically, \(\mathcal{O}(m^{2})\). While this concrete bound of \(\mathcal{O}(m)\) is shown at initialization, their analysis applies to any point during training -- but with the resulting bound in terms of the rank of the individual weight matrices. Further, they highlight that during training, the rank can only decrease (a simple consequence of the functional Hessian \(\mathbf{H}_{\operatorname{F}}\) being driven to zero, since it is scaled by the gradient of the loss \(\nabla_{\mathbf{F}_{\boldsymbol{\theta}}}\ell\)). In this sense, their upper bounds hold pointwise throughout the loss landscape.

Footnote 4: More precisely, this should be \(\mathcal{O}(q\cdot m)\); however, \(q\) denotes the bottleneck dimension in the network — inclusive of input and output dimensions, and hence can be thought of as a constant.

Why Hessian Rank? The finding about the growth of the Hessian rank carries thought-provoking implications for the number of effective parameters within a particular architecture. Let us illustrate this by considering the Occam's factor (MacKay, 1992; Gull, 1989) which, roughly said, describes the extent to which the prior hypothesis space shrinks on observing the data. More formally, this is the ratio of posterior to prior volumes in the parameter space. This is then used to derive the following measure for the effective degrees of freedom, assuming a quadratic approximation to the posterior: \(\sum\limits_{i=1}^{p}\frac{\lambda_{i}}{\lambda_{i}+\epsilon}\,,\) where \(\lambda_{i}\) denotes the \(i\)-th eigenvalue of the Hessian, \(p\) is the total number of parameters, and \(\epsilon\) is the weight set upon the prior. In other words, the above measure compares the extent (\(\lambda_{i}\)) to which a particular direction (along the \(i\)-th eigenvector) in the parameter space is determined by the data relative to that determined by the prior \(\epsilon\). For small \(\epsilon\), which amounts to little or no explicit regularization towards the prior (as is often the case with deep networks in practice), this measure of the degrees of freedom approaches the Hessian rank. A recent work (Maddox et al., 2020) empirically noted that this measure also explains double descent (Belkin et al., 2019) in neural networks. Thus, it is all the more pertinent5 to explore how the rank of the Hessian scales with various architectural parameters in a CNN.

Footnote 5: As a sidenote, we carry out a (limited) scale study where we sweep over the filter sizes and number of channels in a CNN, and find that the Hessian rank, _at initialization_, has a higher correlation coefficient (see Figure 12) with generalization error as compared to the raw count of parameters.
However, an extensive study is beyond the scope of our paper. ## 4 Toeplitz Framework for CNN Hessians **Preliminaries.** In this section, we will lay out the formalism that will lie at the core of our analysis. For brevity, we will develop this in the case of 1D CNNs which are commonly employed for biomedical applications or audio data (Kiranyaz et al., 2021). One can otherwise think of applying our framework to flattened filters and input patches, and such an assumption is also prevalent in the theory literature (Kohn et al., 2021). Besides, throughout our framework we will assume there are no bias parameters, although one can simply consider homogeneous coordinates in the input. Warmup.Let's say we want to represent the convolution \(\mathbf{W}*\mathbf{x}\) of an input \(\mathbf{x}\in\mathbb{R}^{d}\) with \(m\) filters of size \(k\leq d\) that have been organized in the matrix \(\mathbf{W}\in\mathbb{R}^{m\times k}\). For now, consider that we have stride \(1\) and zero padding. We will later touch upon these aspects and see the results for strides \(>1\) in Section 7. Further, for some vector \(\mathbf{z}\in\mathbb{R}^{d}\), we will use the notation \(\mathbf{z}_{j;j+k-1}\in\mathbb{R}^{k}\) to denote the (shorter) vector formed by considering the indices \(j\) to \(j+k-1\) (both inclusive) of the original vector. The output of the above convolution can be expressed as the following matrix of shape \(m\times(d-k+1)\), \[\mathbf{W}*\mathbf{x}=\begin{pmatrix}\langle\mathbf{W}_{1\bullet},\,\mathbf{x }_{1:k}\rangle&\cdots&\langle\mathbf{W}_{1\bullet},\,\mathbf{x}_{d-k+1:d} \rangle\\ \vdots&&\vdots\\ \langle\mathbf{W}_{m\bullet},\,\mathbf{x}_{1:k}\rangle&\cdots&\langle \mathbf{W}_{m\bullet},\,\mathbf{x}_{d-k+1:d}\rangle\end{pmatrix}\,. \tag{2}\] Now, define Toeplitz6 matrices for each filter, \(\{\mathbf{T}^{\mathbf{W}_{i\bullet}}\}_{i=1}^{m}\), with \(\mathbf{T}^{\mathbf{W}_{i\bullet}}:=\mathrm{toep}(\mathbf{W}_{i\bullet},d) \in\mathbb{R}^{(d-k+1)\times d}\) such that, Footnote 6: These are matrices \(\mathbf{A}\) with \(\mathbf{A}_{ij}=\mathbf{a}_{i-j}\) formed via some underlying vector \(\mathbf{a}\) \[\mathbf{T}^{\mathbf{W}_{i\bullet}}=\begin{pmatrix}w_{i1}&\cdots&w_{ik}&0& \cdots&0\\ 0&w_{i1}&\cdots&w_{ik}&0&\vdots\\ \vdots&0&\ddots&\ddots&\ddots&0\\ 0&\cdots&0&w_{i1}&\cdots&w_{ik}\end{pmatrix}\,.\] Note, the above representation of the Toeplitz matrix also depends on the base dimension (here, \(d\)) where the given vector must be 'toeplitzed', i.e., circulated in the above fashion. But we will omit specifying this unless necessary. Let us also denote the matrix formed by stacking the \(\mathbf{T}^{\mathbf{W}_{i\bullet}}\) matrices in a row-wise fashion as \[\mathbf{T}^{\mathbf{W}}:=\begin{pmatrix}\mathbf{T}^{\mathbf{W}_{1\bullet}}\\ \vdots\\ \mathbf{T}^{\mathbf{W}_{m\bullet}}\end{pmatrix}\in\mathbb{R}^{m(d-k+1)\times d }\,.\] We can now see that the above matrix, \(\mathbf{T}^{\mathbf{W}}\), when multiplied by the input \(\mathbf{x}\) gives us the output of the convolution operation in Eqn. (2) when vectorized row-wise, i.e., \[\mathrm{vec}_{r}(\mathbf{W}*\mathbf{x})=\mathbf{T}^{\mathbf{W}}\mathbf{x}\,.\] ### Toeplitz representation of deep CNNs Now, let us assume we have \(L\) hidden layers, each of which is a convolutional kernel. 
Hence, the parameters of the \(l\)-th layer are denoted by the tensor \(\mathcal{W}^{(l)}\in\mathbb{R}^{m_{l}\times m_{l-1}\times k_{l}}\), where \(m_{l}\) represents the number of output channels, \(m_{l-1}\) the number of input channels, and \(k_{l}\) the kernel size at this layer. As we assume a one-dimensional input, without loss of generality, we can set the number of input channels \(m_{0}=1\). In other words, inputs are already assumed to be flattened when passing the input of dimension \(d_{0}:=d\) into the network. The spatial dimension after being convolved with the \(l\)-th layer is denoted by \(d_{l}=d_{l-1}-k_{l}+1\) (which is basically the number of hops we can make with the given kernel over its respective input), since we have stride \(1\) and zero padding. Assume, say, the ReLU nonlinearity \(\sigma(x):=\max(x,0)\). The network function can then be formally represented as (although it will be actually defined through Eqn. (3) later):

\[\mathrm{F}_{\boldsymbol{\theta}}(\mathbf{x})=\mathcal{W}^{(L+1)}*\sigma(\mathcal{W}^{(L)}*\sigma(\cdots*\sigma(\mathcal{W}^{(1)}*\mathbf{x})))\,.\]

As before, we would like to express the above function in terms of a sequence of appropriate Toeplitz matrix products. Unlike the warmup scenario, the convolutional kernels in this general case will be tensors. The key idea is to do a column-wise stacking of individual Toeplitz matrices across the input channels while maintaining the row-wise stacking, as before, across the output channels. First, we need to introduce a notation for indexing the fibres of a tensor \(\mathcal{W}^{(l)}\in\mathbb{R}^{m_{l}\times m_{l-1}\times k_{l}}\). Say we need the fibre going into the plane, across the third mode. Then, the \((i,j)\)-th fibre is denoted by \(\mathcal{W}^{(l)}_{(i,j)\bullet}\in\mathbb{R}^{k_{l}}\), whose associated Toeplitz matrix will be \(\mathbf{T}^{\mathcal{W}^{(l)}_{(i,j)\bullet}}:=\mathrm{toep}(\mathcal{W}^{(l)}_{(i,j)\bullet},d_{l-1})\in\mathbb{R}^{d_{l}\times d_{l-1}}\). Finally, the Toeplitz matrix associated with the entire \(l\)-th convolutional layer \(\mathcal{W}^{(l)}\), for which we use the shorthand \(\mathbf{T}^{(l)}\in\mathbb{R}^{m_{l}d_{l}\times m_{l-1}d_{l-1}}\), can be expressed as:

\[\mathbf{T}^{(l)}:=\begin{pmatrix}\mathbf{T}^{\mathcal{W}^{(l)}_{(1,1)\bullet}}&\dots&\mathbf{T}^{\mathcal{W}^{(l)}_{(1,m_{l-1})\bullet}}\\ \vdots&&\vdots\\ \mathbf{T}^{\mathcal{W}^{(l)}_{(m_{l},1)\bullet}}&\dots&\mathbf{T}^{\mathcal{W}^{(l)}_{(m_{l},m_{l-1})\bullet}}\end{pmatrix}\,.\]

In other words, the Toeplitzed representation \(\mathbf{T}^{(l)}\) consists of \(m_{l}\cdot m_{l-1}\) many Toeplitz blocks formed by the vectors \(\mathcal{W}^{(l)}_{(i,j)\bullet}\in\mathbb{R}^{k_{l}}\), each of size \(d_{l}\times d_{l-1}\). The output (spatial) dimension \(d_{L+1}\) is typically 1 (corresponding to \(k_{L+1}=d_{L}\)), and the number of output channels for the last layer is \(m_{L+1}=K\), where \(K\) is the number of targets. Now, the network function can be written in the general case as:

\[\mathbf{F}_{\boldsymbol{\theta}}(\mathbf{x})=\mathbf{T}^{(L+1)}\Lambda^{(L)}\mathbf{T}^{(L)}\cdots\Lambda^{(1)}\mathbf{T}^{(1)}\,\mathbf{x}\,, \tag{3}\]

where \(\Lambda^{(i)}\) is an input-dependent diagonal matrix that contains a \(1\) or \(0\), based on whether the neuron was activated or not. As the rank analysis of (Singh et al., 2021) requires linear activations, we will follow the same course. However, we can expect that this should still let us contrast the distinctive facets of CNNs relative to FCNs.
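As a quick numerical sanity check of this block-Toeplitz construction (our own sketch, not code from the paper), the snippet below builds \(\mathbf{T}^{(l)}\) for a random multi-channel layer and verifies the row-wise vectorized convolution output against PyTorch's `conv1d`, which implements exactly the practical (cross-correlation, stride-1, unpadded) convention assumed here:

```python
import torch
import torch.nn.functional as F

def toep(w, d):
    """Toeplitz matrix of a length-k filter w w.r.t. base dimension d."""
    k = w.numel()
    T = torch.zeros(d - k + 1, d, dtype=w.dtype)
    for i in range(d - k + 1):
        T[i, i:i + k] = w
    return T

m_out, m_in, k, d_in = 3, 2, 4, 10                     # arbitrary toy sizes
W = torch.randn(m_out, m_in, k, dtype=torch.float64)   # layer tensor W^(l)
x = torch.randn(m_in, d_in, dtype=torch.float64)       # multi-channel input

# T^(l): row blocks over output channels, column blocks over input channels
T_l = torch.cat([torch.cat([toep(W[i, j], d_in) for j in range(m_in)], dim=1)
                 for i in range(m_out)], dim=0)

out = F.conv1d(x.unsqueeze(0), W).squeeze(0)           # stride 1, no padding
assert torch.allclose(T_l @ x.reshape(-1), out.reshape(-1))
```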
In any case, we will elaborate on the case of nonlinearities later. Besides, we will refer to the above network function, more concisely, as \(\mathbf{T}^{(L+1:1)}\mathbf{x}\).

**Remark.** As evident, the fact that a convolution of a set of vectors can be expressed as a matrix-vector product with a suitable Toeplitz matrix is rather straightforward. Also, a Toeplitz representation for CNNs is not new -- works from approximation theory incorporate a similar formalism (Zhao et al., 2017; Fang et al., 2020). However, unlike past work (Sedghi et al., 2018; Kohn et al., 2021), we develop our framework in a way that doesn't overlook how convolutions are prominently used, i.e., without circular convolution, with multiple channels, and possibly unequal-sized layers.

### Matrix derivatives of Toeplitz representations

For the gradient and Hessian calculations, we make use of matrix derivatives and the corresponding chain rule. Thus we frequently compute the gradient of the Toeplitz representation \(\mathbf{T}^{(l)}\) with respect to the suitably matricized convolutional tensor, \(\mathrm{mat}\,\mathcal{W}^{(l)}\). In order to be consistent with the form of \(\mathbf{T}^{(l)}\), we define our matricization of \(\mathcal{W}^{(l)}\) as:

\[\mathrm{mat}\,\mathcal{W}^{(l)}:=\begin{pmatrix}\mathcal{W}^{(l)}_{(1,1)\bullet}&\dots&\mathcal{W}^{(l)}_{(m_{l},1)\bullet}\\ \vdots&&\vdots\\ \mathcal{W}^{(l)}_{(1,m_{l-1})\bullet}&\dots&\mathcal{W}^{(l)}_{(m_{l},m_{l-1})\bullet}\end{pmatrix}^{\top}\,, \tag{4}\]

where the matricization \(\mathrm{mat}\,\mathcal{W}^{(l)}\in\mathbb{R}^{m_{l}\times m_{l-1}k_{l}}\), and we will use the notation \(\mathbf{W}^{(l)}:=\mathrm{mat}\,\mathcal{W}^{(l)}\) as a shorthand. Essentially, we have arranged each of the mode-\(3\) fibres as rows, in the output-channels-times-input-channels format. The following lemma equips us with the way to carry this out (the proof can be found in Section A.1 of the Appendix).

**Lemma 1**.: _The matrix derivative of \(\mathbf{T}^{(l)}\) with respect to \(\mathbf{W}^{(l)}\) is given as follows:_

\[\widetilde{\mathbf{Q}}^{(l)}:=\frac{\partial\mathbf{T}^{(l)}}{\partial\mathbf{W}^{(l)}}:=\frac{\partial\,\mathrm{vec}_{r}\,\mathbf{T}^{(l)}}{\partial\left(\mathrm{vec}_{r}\,\mathbf{W}^{(l)}\right)^{\top}}=\mathbf{I}_{m_{l}}\,\otimes\,\mathbf{Q}^{(l)}\,.\]

The particular structure of \(\mathbf{Q}^{(l)}\) is a bit complex, involving various permutation matrices. So, for simplicity, we abstract it out here in the main text.

### CNN Hessian Structure

Like (Singh et al., 2021), for our theoretical analysis, we will consider the case of MSE loss. But the results still hold empirically, say, for CE loss. Let us now have a glance at the \(kl\)-th block, \(k\leq l\), for both the outer-product and the functional Hessian (the derivations are in Section B). This corresponds to looking at the submatrix of the Hessian corresponding to the \(k\)-th and \(l\)-th convolutional parameter tensors.7 Let us start with the outer-product Hessian \(\mathbf{H}_{\mathrm{O}}\).

Footnote 7: In the case of \(\mathbf{H}_{\mathrm{F}}\), the present form is for \(k\leq l\). For \(k\geq l\), it is a bit different and is detailed in the Appendix.

**Proposition 2**.: _The \(kl\)-th block of \(\mathbf{H}_{\mathrm{O}}\) is,_

\[\mathbf{H}_{\mathrm{O}}^{(kl)}=\widetilde{\mathbf{Q}}^{(k)\top}\Big(\mathbf{T}^{(k+1:L+1)}\mathbf{T}^{(L+1:l+1)}\,\otimes\,\mathbf{T}^{(k-1:1)}\mathbf{\Sigma}_{\mathbf{xx}}\mathbf{T}^{(1:l-1)}\Big)\widetilde{\mathbf{Q}}^{(l)}\,. \tag{5}\]
**A word about \(\mathbf{H}_{\mathrm{O}}\).** We should also emphasize that the outer-product Hessian shares exactly the same non-zero spectrum as the Neural Tangent Kernel (Jacot et al., 2018) (or, roughly up to scaling, the Fisher Information (Amari, 1998) and the empirical Fisher (Kunstner et al., 2019)). Moving on to the functional Hessian \(\mathbf{H}_{\mathrm{F}}\), denote \(\boldsymbol{\Omega}=\mathbf{E}\,[\boldsymbol{\delta}_{\mathbf{x},\mathbf{y}}\,\mathbf{x}^{\top}]\in\mathbb{R}^{K\times d_{0}}\) as the (uncentered) covariance of the residual \((\mathbf{y}_{i}-\mathbf{F}_{\boldsymbol{\theta}}(\mathbf{x}_{i}))\) with the input. Then we have,

**Proposition 3**.: _The \(kl\)-th block of \(\mathbf{H}_{\mathrm{F}}\) is:_

\[\mathbf{H}_{\mathrm{F}}^{(kl)}=\left(\mathbf{I}_{m_{k}}\otimes\mathbf{Q}^{(k)\top}\right)\left(\mathbf{T}^{(k+1:l-1)}\otimes\mathbf{T}^{(k-1:1)}\mathbf{\Omega}^{\top}\mathbf{T}^{(L+1:l+1)}\right)\left(\mathbf{Q}^{(l)}\otimes\mathbf{I}_{m_{l}}\right)\,. \tag{6}\]

**A word about \(\mathbf{H}_{\mathrm{F}}\).** As the outer-product Hessian is positive semi-definite, the functional Hessian is the source of all the negative eigenvalues of the Hessian and is important for optimization, as there may be numerous saddles in the landscape (Dauphin et al., 2014). It also has a very peculiar block-hollow structure (i.e., zero diagonal blocks), which leads to the number of negative eigenvalues being approximately half of its rank (cf. (Singh et al., 2021)).

## 5 Key results on the CNN Hessian Rank

Finally, we can now present our key results. A quick note about assumptions: for simplicity, assume that the (uncentered) input-covariance \(\mathbf{\Sigma}_{\mathbf{xx}}=\mathrm{cov}(\mathbf{x})\) has full rank \(\mathrm{rank}\left(\mathbf{\Sigma}_{\mathbf{xx}}\right):=r=d\). We will analyze the ranks of the outer-product and functional Hessian, later combining them to yield a bound on the rank of the loss Hessian. This is without loss of generality, for one can always pre-process the input to ensure that this is the case, and the results of (Singh et al., 2021) hold for \(r\neq d\) with appropriate modifications.

**Outer-Product Hessian \(\mathbf{H}_{\mathrm{O}}\).** From the structure of the \(kl\)-th block in Eqn. (5), it is easy to arrive at the following proposition:

**Proposition 4**.: _For a deep linear convolutional network, \(\mathbf{H}_{\mathrm{O}}=\mathbf{Q}_{o}^{\top}\mathbf{A}_{o}\mathbf{B}_{o}\mathbf{A}_{o}^{\top}\mathbf{Q}_{o}\), where \(\mathbf{B}_{o}=\mathbf{I}_{K}\otimes\mathbf{\Sigma}_{\mathbf{xx}}\in\mathbb{R}^{Kd\times Kd}\),_

\[\mathbf{A}_{o}^{\top}=\begin{pmatrix}\mathbf{T}^{(L+1:2)}\otimes\mathbf{I}_{d}&\cdots&\mathbf{T}^{(L+1:l+1)}\otimes\mathbf{T}^{(1:l-1)}&\cdots&\mathbf{I}_{K}\otimes\mathbf{T}^{(1:L)}\end{pmatrix}\in\mathbb{R}^{Kd\times\widehat{p}}\,,\]

_and \(\mathbf{Q}_{o}=\mathrm{diag}\left(\widetilde{\mathbf{Q}}^{(1)},\ldots,\widetilde{\mathbf{Q}}^{(L+1)}\right)\in\mathbb{R}^{\widehat{p}\times p}\), where \(\mathrm{diag}(\cdot)\) denotes a block-diagonal matrix._

Besides, \(p=\sum_{l=1}^{L+1}m_{l}m_{l-1}k_{l}\) and \(\widehat{p}=\sum_{l=1}^{L+1}m_{l}m_{l-1}d_{l}d_{l-1}\). The former denotes the number of parameters in the CNN, while the latter is the number of parameters in the 'Toeplitzed' fully-connected network.
Our first key result, the proof of which is located in Section C.1 of the Appendix, can then be described as follows:

**Theorem 5**.: _The rank of the outer-product Hessian is upper bounded as_

\[\mathrm{rk}(\mathbf{H}_{\mathrm{O}})\leq\min\left(p,\;d_{0}\,\mathrm{rk}(\mathbf{T}^{(2:L+1)})+K\,\mathrm{rk}(\mathbf{T}^{(L:1)})-\mathrm{rk}(\mathbf{T}^{(2:L+1)})\,\mathrm{rk}(\mathbf{T}^{(L:1)})\right)=\min\left(p,\;q\left(d_{0}+K-q\right)\right)\,.\]

_Here, \(q:=\min(d_{0},m_{1}d_{1},\cdots,m_{L}d_{L},K)\)._

Assuming no bottleneck layer, we will have that \(q=\min(d_{0},K)\), resulting in \(\mathrm{rk}(\mathbf{H}_{\mathrm{O}})\leq Kd_{0}\).

Functional Hessian \(\mathbf{H}_{\mathrm{F}}\). Our approach here will be similar to that in the Theorem above. We will try to factor out all the \(\mathbf{Q}^{(l)}\) matrices and then analyze the rank of the resulting decomposition. But this requires more care, as the form of the \(kl\)-th block is different depending on whether \(k\leq l\) or not.

**Theorem 6**.: _For a deep linear convolutional network, the rank of the \(l\)-th column-block, \(\widehat{\mathbf{H}}_{\mathrm{F}}^{\bullet l}\), of the matrix \(\widehat{\mathbf{H}}_{\mathrm{F}}\), can be upper bounded as_

\[\mathrm{rk}(\widehat{\mathbf{H}}_{\mathrm{F}}^{\bullet l})\leq\min(\widehat{q}\,m_{l-1}d_{l-1}+\widehat{q}\,m_{l}d_{l}-\widehat{q}^{\,2}\,,m_{l}m_{l-1}k_{l})\,,\]

_for \(l\in[2,\cdots,L]\,.\) When \(l=1\), we have_

\[\mathrm{rk}(\widehat{\mathbf{H}}_{\mathrm{F}}^{\bullet 1})\leq\min(\widehat{q}\,m_{1}d_{1}+\widehat{q}\,s-\widehat{q}^{\,2}\,,m_{1}m_{0}k_{1})\,.\]

_And, when \(l=L+1\), we have_

\[\mathrm{rk}(\widehat{\mathbf{H}}_{\mathrm{F}}^{\bullet L+1})\leq\min(\widehat{q}\,m_{L}d_{L}+\widehat{q}\,s-\widehat{q}^{\,2}\,,m_{L+1}m_{L}k_{L+1})\,.\]

_Here, \(\widehat{q}:=\min(d_{0},m_{1}d_{1},\cdots,m_{L}d_{L},K,s)=\min(q,s)\) and \(s:=\mathrm{rk}(\mathbf{\Omega})=\mathrm{rk}(\mathbf{E}\left[\boldsymbol{\delta}_{\mathbf{x},\mathbf{y}}\,\mathbf{x}^{\top}\right])\)._

The proof is located in Section C.2 of the Appendix. The upper bound on the rank of \(\mathbf{H}_{\mathrm{F}}\) follows by summing the above result over all the column-blocks, \(\mathrm{rk}(\mathbf{H}_{\mathrm{F}})\leq\sum\limits_{l=1}^{L+1}\mathrm{rk}(\widehat{\mathbf{H}}_{\mathrm{F}}^{\bullet l})\), by the sub-additivity of rank. A remarkable empirical observation, as in the case of FCNs, is that the block-columns are mutually orthogonal -- hence, we do not lose anything by simply summing the ranks of the block columns.

**Loss Hessian \(\mathbf{H}_{\mathrm{L}}\).** One can then bound the rank of the loss Hessian simply as \(\mathrm{rk}(\mathbf{H}_{\mathrm{L}})\leq\mathrm{rk}(\mathbf{H}_{\mathrm{O}})+\mathrm{rk}(\mathbf{H}_{\mathrm{F}})\). Also, we can infer that, in the likely case where \(q=\min(K,d_{0})\), the rank of the loss Hessian grows linearly with the number of channels, i.e., \(\mathcal{O}(m\cdot L\cdot d_{0})\), while the number of parameters grows quadratically in the number of channels, i.e., \(\mathcal{O}(m^{2}\cdot L\cdot d_{0})\). Thereby, we confirm that a linear trend like that for FCNs also holds for CNNs (hence Figure 1). Besides, for very large networks, \(m\) will be the dominating factor, and we can infer that the rank will show a square-root behaviour relative to the number of parameters. Hence, we generalize the key finding of (Singh et al., 2021) from the fully-connected case to the case of convolutional neural networks.
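To make Theorem 5 concrete, here is a small self-contained sketch (our own illustration; the layer sizes are arbitrary choices) that assembles a deep linear CNN, computes \(\mathbf{H}_{\mathrm{O}}\) exactly from per-sample Jacobians in Float64, and compares its rank against \(\min(p,q(d_{0}+K-q))\); under the stated assumptions (linear activations, MSE, full-rank input covariance) the printed rank should not exceed the bound:

```python
import torch
import torch.nn as nn

torch.manual_seed(1)
d0, K, n = 12, 3, 64
layers = [(4, 3), (5, 3)]        # (channels m_l, filter size k_l) per hidden layer

mods, m_prev, d_prev, dims = [], 1, d0, [d0]
for m_l, k_l in layers:
    mods.append(nn.Conv1d(m_prev, m_l, k_l, bias=False))
    m_prev, d_prev = m_l, d_prev - k_l + 1
    dims.append(m_l * d_prev)
mods.append(nn.Conv1d(m_prev, K, d_prev, bias=False))   # k_{L+1} = d_L
net = nn.Sequential(*mods).double()                     # linear activations
dims.append(K)

X = torch.randn(n, 1, d0, dtype=torch.float64)          # full-rank Sigma_xx w.h.p.
params = list(net.parameters())
p = sum(q.numel() for q in params)

H_O = torch.zeros(p, p, dtype=torch.float64)
for i in range(n):                                      # H_O = (1/n) sum J_i J_i^T
    out = net(X[i:i + 1]).reshape(K)
    J = torch.stack([torch.cat([g.reshape(-1) for g in
                     torch.autograd.grad(out[c], params, retain_graph=True)])
                     for c in range(K)])                # K x p Jacobian
    H_O += J.T @ J / n

q = min(dims)                                           # bottleneck, incl. d0 and K
bound = min(p, q * (d0 + K - q))                        # Theorem 5
rank = torch.linalg.matrix_rank(H_O, hermitian=True, atol=1e-9).item()
print(rank, "<=", bound)
```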
### Exact results for one-hidden layer case

While the above bounds are much tighter than any existing bounds, we now try to understand how tight they are by looking at the case of one hidden layer:

**Theorem 7**.: _The rank of the outer-product Hessian can be bounded as: \(\operatorname{rank}\left(\mathbf{H}_{\mathrm{O}}\right)\leq\min\left(Kd_{0},q(k_{1}+K(d_{0}-k_{1}+1)-q)\right)\), where \(q=\min(k_{1},K(d_{0}-k_{1}+1),m_{1})\)._

**Theorem 8**.: _The rank of the functional Hessian is bounded as: \(\operatorname{rank}\left(\mathbf{H}_{\mathrm{F}}\right)\leq 2\min\left(k_{1},K(d_{0}-k_{1}+1)\right)m_{1}\)._

The strategy behind the proofs (Section C.3) is to write the CNN as a superposition of functions that act on different input patches. We must also utilize the form of the Toeplitz derivatives and the involved auxiliary permutation matrices. In terms of the results, interestingly, we find that now the filter size \(k_{1}\) has entered inside the \(\min\) terms in each of the bounds, thereby further reducing the rank. However, as shown ahead, for larger networks with many channels \(m\), we find that our earlier upper bound still fares decently.

## 6 Empirical Verification

Verification of upper bounds. We empirically validate our upper bounds in a variety of settings, in particular with both linear and ReLU activations, MSE and CE loss, as well as on datasets such as CIFAR10, FashionMNIST, MNIST, and a synthetic dataset. However, given the page constraints, we only show a selection of the plots, while the rest can be found in Appendix D. Following (Singh et al., 2021), to rigorously illustrate the match with our bounds, we compute the exact Hessians, without approximations, and in Float64 precision. These precautions are taken to avoid any imprecision in calculating the rank, since the boundary between non-zero and zero eigenvalues can otherwise be a bit blurry.

Rank vs number of channels. We begin by illustrating the trend of our general upper bounds in Figure 1(a) for the case of a 2-hidden-layer CNN on CIFAR10; this is thus the counterpart to Figure 1 presented before. The upper bound is relatively close to the true rank and similarly shows a linear trend with the number of channels.

Rank vs Filter Size. Next, in Figure 1(b), we demonstrate the match of our exact bound with the rank as observed empirically. We hold the number of channels fixed and vary the filter size. First of all, here the markers and the lines indeed coincide for the functional Hessian and the outer-product Hessian. On the other hand, the loss Hessian, which we bound as the sum of the ranks of \(\mathbf{H}_{\mathrm{O}}\) and \(\mathbf{H}_{\mathrm{F}}\), forms a canopy over all the empirically observed values of the rank across the filter sizes. The upper bound itself is also quite close; it becomes even closer for large values of \(m\) (see Section D.3.1).

The case of non-linearities. Further, in a similar comparison across filter sizes, we showcase that our bounds, which were derived for linear activations, still remain valid upper bounds and form a canopy, as seen in Figure 1(c).

## 7 Locality and Weight-sharing

We would like to do a little deep dive and explore a simplified setting, in order to gather some intuition. We study the two crucial components behind CNNs, namely local connectivity (i.e., each localized patch of the input is convolved with a possibly distinct filter) and weight sharing (where the same filter is applied to each of these patches).
Setting. Concretely, let us begin by considering the case of a one-hidden-layer neural network (with linear activations for simplicity). We now drop the layer indices for the sake of clarity. So the input dimension is \(d\), the output dimension is \(K\), the filter size is \(k\), and the number of hidden-layer filters is \(m\). Further, we consider non-overlapping filters, like in MLP-Mixer (Tolstikhin et al., 2021). In other words, the stride is set equal to the filter size \(k\), so the number of patches under consideration will be \(t=d/k\); to avoid unnecessary complexity, we assume \(d\) is an integer multiple of \(k\).

Figure 2: Trend of the upper bound on the (loss) Hessian rank, compared to the true rank and the number of parameters. In the other subfigures, we see the rank of the functional Hessian and the outer-product Hessian.

Locally Connected Networks (LCNs). As the filters that get applied on the patches are distinct, let us denote them by \(\mathbf{W}^{(ii)}\in\mathbb{R}^{m\times k}\) and the output layer matrix as \(\mathbf{V}:=\left(\mathbf{V}^{(1)}\quad\cdots\quad\mathbf{V}^{(t)}\right)\), with \(\mathbf{V}^{(i)}\in\mathbb{R}^{K\times m}\), where the superscript denotes the index of the local patch upon which they get applied. Mathematically, we can represent the resulting neural network function as (where \(\mathbf{x}^{(i)}\in\mathbb{R}^{k}\) denotes the \(i\)-th chunk of the input):

\[\left(\mathbf{V}^{(1)}\quad\cdots\quad\mathbf{V}^{(t)}\right)\,\begin{pmatrix}\mathbf{W}^{(11)}&\cdots&\mathbf{0}\\ \vdots&\ddots&\vdots\\ \mathbf{0}&\cdots&\mathbf{W}^{(tt)}\end{pmatrix}\,\begin{pmatrix}\mathbf{x}^{(1)}\\ \vdots\\ \mathbf{x}^{(t)}\end{pmatrix}\]

We indexed the local weight matrices of the first layer as \(\mathbf{W}^{(ii)}\) to reflect that each is the \((i,i)\)-th block on the diagonal. After carrying out the block matrix multiplication, the above formulation can also be expressed as:

\[\mathrm{F}^{\text{LCN}}_{\boldsymbol{\theta}}(\mathbf{x})=\sum_{i=1}^{t}\mathbf{V}^{(i)}\,\mathbf{W}^{(ii)}\,\mathbf{x}^{(i)}\,. \tag{7}\]

We can then easily bring to our minds the case of fully-connected networks, which will also contain the off-diagonal blocks and would be represented as:

\[\mathrm{F}^{\text{FCN-large}}_{\boldsymbol{\theta}}(\mathbf{x})=\sum_{i,j=1}^{t}\mathbf{V}^{(i)}\,\mathbf{W}^{(ij)}\,\mathbf{x}^{(j)}\,.\]

Clearly, fully-connected networks form a generalization of locally-connected networks. Another point of comparison for LCNs in Eq. (7) with FCNs is that the former may be viewed as a superposition of \(t\) distinct _smaller_ FCNs acting on disjoint patches of the input.8

Footnote 8: This superposition point of view would hold even if we had a non-linearity, and so do our empirical results.

\[\mathrm{F}^{\text{LCN}}_{\boldsymbol{\theta}}(\mathbf{x})=\sum_{i=1}^{t}\,\mathrm{F}^{\text{FCN-small}}_{\boldsymbol{\theta}}\big(\mathbf{x}^{(i)}\,;\,\{\mathbf{V}^{(i)},\mathbf{W}^{(ii)}\}\big)\,.\]

In this scenario of LCNs, we get the following bounds on the rank of the outer-product and functional Hessian:

**Theorem 9**.: _For the locally connected network as described in Eqn. (7), the rank of the outer-product and the functional Hessian can be upper bounded as follows: \(\mathrm{rk}(\mathbf{H}^{\text{LCN}}_{\mathrm{O}})\leq t\cdot q(k+K-q)\,,\) and \(\mathrm{rk}(\mathbf{H}_{\mathrm{F}}^{\text{LCN}})\leq t\cdot 2m\min(k,K)\), where \(q=\min(k,K,m)\) and \(t=d/k\)._

The proof is located in Section C.4.
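For concreteness, Eqns. (7) and (8) can be written out directly; the following sketch (ours, with arbitrary toy dimensions) implements the LCN as a superposition of \(t\) small FCNs on disjoint patches, together with its weight-shared variant:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, m, K = 12, 3, 4, 2
t = d // k                                  # number of non-overlapping patches
x = rng.standard_normal(d)

V = [rng.standard_normal((K, m)) for _ in range(t)]      # output blocks V^(i)
W_loc = [rng.standard_normal((m, k)) for _ in range(t)]  # distinct filters W^(ii)
W_sh = rng.standard_normal((m, k))                       # shared filter bank

def lcn(x):      # Eqn. (7): superposition of t small FCNs on disjoint patches
    return sum(V[i] @ W_loc[i] @ x[i * k:(i + 1) * k] for i in range(t))

def lcn_ws(x):   # Eqn. (8): locally connected + weight sharing (CNN-like)
    return sum(V[i] @ W_sh @ x[i * k:(i + 1) * k] for i in range(t))

print(lcn(x), lcn_ws(x))   # both outputs are K-dimensional
```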
The neat thing about the above rank expressions is that they are identical to those obtained for the smaller fully-connected network, except that we change the input dimension from \(d\to k\) and scale the bounds by a factor of \(t=d/k\) (the number of smaller fully-connected networks that are being superposed). In short, even though these weight matrices must 'act' together in the loss, their contributions to the Hessian come out individually (i.e., \(\mathbf{H}_{\mathrm{F}}\) and \(\mathbf{H}_{\mathrm{O}}\) can be factorized as block diagonals).

Incorporating Weight Sharing (WS). Now all the weight matrices in the first layer are shared, i.e., \(\mathbf{W}^{(ii)}=\mathbf{W}\).

\[\left(\mathbf{V}^{(1)}\quad\cdots\quad\mathbf{V}^{(t)}\right)\,\begin{pmatrix}\mathbf{W}&\cdots&\mathbf{0}\\ \vdots&\ddots&\vdots\\ \mathbf{0}&\cdots&\mathbf{W}\end{pmatrix}\,\begin{pmatrix}\mathbf{x}^{(1)}\\ \vdots\\ \mathbf{x}^{(t)}\end{pmatrix} \tag{8}\]

**Theorem 10**.: _For the locally connected network with weight sharing defined in Eqn. (8), the rank of the outer-product and the functional Hessian can be bounded as: \(\mathrm{rk}(\mathbf{H}^{\text{LCN+WS}}_{\mathrm{O}})\leq q(k+Kt-q)\,,\) and \(\mathrm{rk}(\mathbf{H}_{\mathrm{F}}^{\text{LCN+WS}})\leq 2m\min(k,Kt)\), where \(q=\min(k,Kt,m)\) and \(t=d/k\)._

The proof can be found in Section C.5, but the intuition is that the weight matrix \(\mathbf{W}\) is shared across the \(\mathbf{V}^{(i)}\), resulting in the intersection of their column spaces inside the Hessian. Hence, the rank shrinks for both \(\mathbf{H}_{\mathrm{O}}\) and \(\mathbf{H}_{\mathrm{F}}\). Comparing the \(\mathrm{rk}(\mathbf{H}_{\mathrm{F}})\) bounds, we see that here \(t\) slides inside the minimum, but only in the second term (i.e., \(K\)). Lastly, in Figure 3, we present an illustration of the trend of the rank with increasing filter size for both LCN and the LCN + WS variant. Besides the interesting scaling behaviour, this figure also validates our theoretical bounds.

Figure 3: Hessian rank: LCN vs CNN (Linear, CIFAR10).

## 8 Conclusion

All in all, we have illustrated how the key ingredients of CNNs, such as local connectivity and weight sharing, as well as architectural aspects like filter size, strides, and the number of channels, get manifested through the Hessian structure and rank. Moreover, we can utilize our Toeplitz representation framework to deliver tight upper bounds in the general case of deep convolutional networks and generalize the recent finding of (Singh et al., 2021) about the square-root growth of rank relative to parameter count to CNNs as well. Looking ahead, our work raises some very interesting questions: (a) Is the growth of rank as a square root in terms of the number of parameters a universal characteristic of all deep architectures? Including Transformers? Or are there some exceptions to it? (b) Given the uncovered structure of the Hessian in CNNs, there are also questions about understanding which parts of the architecture affect the spectrum more -- normalization or pooling layers? (c) On the application side, it would be interesting to see if we can use our results to understand the approximation quality of existing pre-conditioners such as K-FAC, or to come up with better ones, given the rich properties of Toeplitz matrices.

## Acknowledgements

We would like to thank Max Daniels and Gregor Bachmann for their suggestions, as well as the rest of DALab for useful comments and support.
Sidak Pal Singh would also like to acknowledge the financial support from Max Planck ETH Center for Learning Systems and the travel support from ELISE (GA no 951847).
2304.10828
Individual Fairness in Bayesian Neural Networks
We study Individual Fairness (IF) for Bayesian neural networks (BNNs). Specifically, we consider the $\epsilon$-$\delta$-individual fairness notion, which requires that, for any pair of input points that are $\epsilon$-similar according to a given similarity metric, the output of the BNN is within a given tolerance $\delta>0.$ We leverage bounds on statistical sampling over the input space and the relationship between adversarial robustness and individual fairness to derive a framework for the systematic estimation of $\epsilon$-$\delta$-IF, designing Fair-FGSM and Fair-PGD as global, fairness-aware extensions to gradient-based attacks for BNNs. We empirically study IF of a variety of approximately inferred BNNs with different architectures on fairness benchmarks, and compare against deterministic models learnt using frequentist techniques. Interestingly, we find that BNNs trained by means of approximate Bayesian inference consistently tend to be markedly more individually fair than their deterministic counterparts.
Alice Doherty, Matthew Wicker, Luca Laurenti, Andrea Patane
2023-04-21T09:12:14Z
http://arxiv.org/abs/2304.10828v1
# Individual Fairness in Bayesian Neural Networks

###### Abstract

We study Individual Fairness (IF) for Bayesian neural networks (BNNs). Specifically, we consider the \(\epsilon\)-\(\delta\)-individual fairness notion, which requires that, for any pair of input points that are \(\epsilon\)-similar according to a given similarity metric, the output of the BNN is within a given tolerance \(\delta>0\). We leverage bounds on statistical sampling over the input space and the relationship between adversarial robustness and individual fairness to derive a framework for the systematic estimation of \(\epsilon\)-\(\delta\)-IF, designing _Fair-FGSM_ and _Fair-PGD_ as global, fairness-aware extensions to gradient-based attacks for BNNs. We empirically study IF of a variety of approximately inferred BNNs with different architectures on fairness benchmarks, and compare against deterministic models learnt using frequentist techniques. Interestingly, we find that BNNs trained by means of approximate Bayesian inference consistently tend to be markedly more individually fair than their deterministic counterparts.

## 1 Introduction

Deep learning models have achieved state-of-the-art performance in a wide variety of tasks (Goodfellow et al., 2016). Several calls for caution have, however, recently been raised about their deployment in tasks where fairness is of concern (Barocas and Selbst, 2016). In fact, Neural Networks (NNs) have been found to reinforce negative biases from sensitive datasets (Bolukbasi et al., 2016), discriminating against individuals on the basis of attributes such as gender or race. To address this, research efforts have been directed at measuring the fairness of NNs, at their de-biased training, and at defining precise notions of fairness. Given a BNN \(f\) and a similarity metric between individuals \(d_{fair}\), which encodes a task-dependent notion of similarity (Ilvento, 2020), _Individual Fairness_ (IF) enforces that all pairs of _similar_ individuals in the input space get treated similarly by \(f\) (Dwork et al., 2012). As opposed to the statistical nature of _group fairness_ (Mehrabi et al., 2021), IF aims at computing worst-case bias measures over a model input space. Consequently, albeit being defined on the full input space and over a fairness similarity metric, because of its worst-case nature, IF has been linked to adversarial robustness (Yeom and Fredrikson, 2021). As it has recently been shown that Bayesian Neural Networks (BNNs) have a tendency to be less fragile to adversarial attacks than their frequentist counterparts (Carbone et al., 2020), it is natural to wonder whether approximate Bayesian inference may also have a positive impact on the IF of a neural network. However, to the best of our knowledge, no work has been conducted along these lines of inquiry. In this paper, we investigate the IF of BNNs and empirically evaluate it on various benchmarks. While exact computation of IF in BNNs is infeasible due to their non-convexity, we exploit the relationship between IF and adversarial robustness (Yeom and Fredrikson, 2021; Benussi et al., 2022) to develop a framework for the adaptation of adversarial attack methods for IF. In particular, we explicitly instantiate _Fair-FGSM_ and _Fair-PGD_ as extensions of their corresponding adversarial attacks (Goodfellow et al., 2015; Athalye et al., 2018) by employing gradient-step modifications and projections specific to \(d_{fair}\) metrics commonly used in the fairness literature.
Furthermore, while attack methods estimate IF locally around a given individual, we use concentration inequalities (Boucheron et al., 2004) to statistically bound the worst-case expected IF over all pairs of individuals. We perform an empirical evaluation of IF in BNNs on a selection of benchmarks, including the Adult dataset (Dua and Graff, 2017) and Folktables (Ding et al., 2021), and on a variety of different architectural, approximate Bayesian inference, and similarity metric parameters. We compare the results obtained by BNNs with those obtained by deterministic NNs and deep ensembles. We find that BNNs consistently outperform their deterministic counterparts in terms of individual fairness. That is, albeit still learning certain biases, all things being equal, BNNs show a tendency to be _fairer_ than their deterministic counterparts. We finish the paper with an empirical analysis inquiring into this interesting property and a discussion on why approximate Bayesian inference may lead to fairer prediction models.

## 2 Individual Fairness for BNNs

We consider a NN \(f^{w}:X\rightarrow\mathbb{R}^{m}\) trained on a dataset \(\mathcal{D}=\{(x_{i},y_{i})\}_{i=1}^{N}\) composed of \(N\) points, where \(w\) is the aggregated vector of weights and biases, \(X\subset\mathbb{R}^{n}\) is the input space, and \(n\) and \(m\) are respectively the input and output dimensions. For simplicity, we focus on the case of binary classification; however, our results can be trivially extended to the regression and multi-class cases. In a Bayesian setting, \(w\) is sampled from a random variable \(\mathbf{w}\) whose posterior distribution is obtained by application of Bayes' rule: \(p(w|\mathcal{D})\propto p(\mathcal{D}|w)p(w)\), where \(p(w)\) is the prior and \(p(\mathcal{D}|w)\) is the likelihood. The posterior predictive distribution \(\pi\) over an input \(x\) is then obtained by averaging a sigmoid function \(\sigma\) of the network output over the posterior distribution, i.e., \(\pi(x)=\int\sigma(f^{w}(x))p(w|\mathcal{D})dw.\) To define individual fairness for BNNs, we adopt \(\epsilon\)-\(\delta\)-IF (John et al., 2020), which depends on a similarity metric \(d_{fair}:X\times X\rightarrow\mathbb{R}_{\geq 0}\) encapsulating task-dependent information about sensitive attributes.

**Definition 1**: _Given \(\epsilon\geq 0\) and \(\delta\geq 0\), we say that a BNN \(f^{\mathbf{w}}\) is \(\epsilon\)-\(\delta\)-individually fair iff \(\forall x^{\prime},x^{\prime\prime}\in X\) s.t. \(d_{fair}(x^{\prime},x^{\prime\prime})\leq\epsilon\implies|\pi(x^{\prime})-\pi(x^{\prime\prime})|\leq\delta,\) that is, if the predictive distributions of \(\epsilon\)-similar individuals are \(\delta\)-close to each other._

Notice that while related to the notion of adversarial robustness for BNNs (Cardelli et al., 2019; Wicker et al., 2020), Definition 1 has some key differences. First, rather than being a local definition specific to a test point and its neighbourhood, individual fairness looks at the worst-case behaviour over the whole input space, i.e., simultaneously looking at neighbourhoods around all the input points in \(X\). Furthermore, while adversarial attacks are generally developed for an \(\ell_{p}\) metric, individual fairness is built around a task-specific similarity metric \(d_{fair}\). Intuitively, \(d_{fair}\) needs to encode the desired notion of similarity between individuals (Dwork et al., 2012).
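In practice, the predictive \(\pi(x)\) in Definition 1 is estimated by Monte Carlo averaging over posterior samples; a minimal sketch follows (our illustration: `net` is an arbitrary torch binary classifier and `posterior_samples` is a list of flattened weight vectors obtained from, e.g., HMC or a fitted variational family):

```python
import torch

def predictive(x, net, posterior_samples):
    """Monte Carlo estimate of pi(x) = E_{w ~ p(w|D)}[sigmoid(f^w(x))]."""
    probs = []
    for w in posterior_samples:
        # load one posterior weight sample into the network
        torch.nn.utils.vector_to_parameters(w, net.parameters())
        probs.append(torch.sigmoid(net(x)))
    return torch.stack(probs).mean(dim=0)
```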
While a number of metrics have been discussed in the literature (Ilvento, 2020), we focus on the following ones, which are widely used and can be automatically learnt from data (Yurochkin et al., 2020):1

Footnote 1: While not directly investigated in this paper, our fairness attacks can be generalised to similarity metrics built over embeddings (Ruoss et al., 2020), by attacking the embeddings as well.

* **Weighted \(\ell_{p}\).** In this case \(d_{\text{fair}}(x^{\prime},x^{\prime\prime})\) is defined as a weighted version of an \(\ell_{p}\) metric, i.e. \(d_{\text{fair}}(x^{\prime},x^{\prime\prime})=\sqrt[p]{\sum_{i=1}^{n}\theta_{i}|x^{\prime}_{i}-x^{\prime\prime}_{i}|^{p}}\), where the weights \(\theta_{i}\) can be set according to their correlation with the sensitive attribute.
* **Mahalanobis distance.** In this case we have \(d_{\text{fair}}(x^{\prime},x^{\prime\prime})=\sqrt{(x^{\prime}-x^{\prime\prime})^{T}S^{-1}(x^{\prime}-x^{\prime\prime})}\), for a given positive semi-definite (SPD) matrix \(S\). Intuitively, \(S\) accounts for the intra-correlation of features to capture latent dependencies w.r.t. the sensitive features.

## 3 Fairness attacks

Definition 1 can be reformulated in terms of the following optimization problem:

\[\delta^{*}=\max_{x^{\prime}\in X}\max_{\begin{subarray}{c}x^{\prime\prime}\in X\\ d_{fair}(x^{\prime},x^{\prime\prime})\leq\epsilon\end{subarray}}|\pi(x^{\prime})-\pi(x^{\prime\prime})|. \tag{1}\]

Checking IF is then equivalent to checking whether \(\delta^{*}\leq\delta\) or not. In Eqn (1), the inner maximization problem finds the point in a specific \(d_{fair}\)-neighbourhood that maximizes the variation in the predictive posterior, while the outer one considers the global part of the definition. We proceed by solving the inner optimisation problem with gradient-based techniques, and rely on statistical methods to solve the outer problem with high confidence.

Inner Problem. Because of the non-convexity of NN architectures, exact computation of the inner maximization problem in Eqn (1) is generally infeasible. We proceed by adapting gradient-based optimization techniques commonly used to compute adversarial attacks to our setting. In particular, in the case of the Fast Gradient Sign Method (FGSM) (Goodfellow et al., 2015), given the log-loss \(L\) associated to the likelihood function (e.g., the binary cross-entropy for the sigmoid), and a point \(x\) with associated label \(y\), we obtain that:

\[x_{x^{\prime},\max}\approx x^{\prime}+\eta\cdot\text{sgn}(\mathbb{E}_{w\sim p(w|\mathcal{D})}[\nabla_{x}L(f^{w}(x),y)]).\]

\(\eta\in\mathbb{R}^{n}\) scales the attack strength according to \(\epsilon\) and \(d_{fair}\). It is the first ingredient that needs to be adapted for \(\epsilon\)-\(\delta\)-IF. In the case of the weighted \(\ell_{p}\) metric, \(\eta\) can be simply defined as the vector with entries \(\eta_{i}=\epsilon/\sqrt{\theta_{i}}\). For the Mahalanobis distance, given \(x^{\prime}\) and \(\epsilon\), \(d_{fair}(x^{\prime},x^{\prime\prime})\leq\epsilon\) describes a hyper-ellipsoid centred in \(x^{\prime}\). We can hence set \(\eta\) to be the vector whose generic component is scaled according to the corresponding ellipsoid axis, i.e., \(\eta_{i}=\epsilon\sqrt{S_{ii}}\). Furthermore, while one is guaranteed to remain inside the \(d_{fair}\)-neighbourhood in the \(\ell_{p}\) case, for the Mahalanobis metric one needs to check whether the attack is inside the \(\epsilon\)-\(d_{fair}\) ball or not.
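The sketch below assembles these ingredients into a single _Fair-FGSM_ step (our illustration, not the authors' code); it assumes a single-logit torch classifier, and for the Mahalanobis metric it uses a simple radial rescaling onto the \(\epsilon\)-ellipsoid as a stand-in for the projection discussed next:

```python
import torch

def fair_fgsm(x, y, net, posterior_samples, eps, theta=None, S=None):
    """One Fair-FGSM step; pass `theta` (weights) for the weighted l2 metric,
    or `S` (SPD matrix) for the Mahalanobis metric -- exactly one of the two."""
    x = x.clone().requires_grad_(True)
    loss = 0.0
    for w in posterior_samples:             # expected gradient over the posterior
        torch.nn.utils.vector_to_parameters(w, net.parameters())
        loss = loss + torch.nn.functional.binary_cross_entropy_with_logits(
            net(x).squeeze(), y) / len(posterior_samples)
    grad, = torch.autograd.grad(loss, x)
    if theta is not None:
        eta = eps / theta.sqrt()            # eta_i = eps / sqrt(theta_i)
    else:
        eta = eps * torch.diagonal(S).sqrt()  # eta_i = eps * sqrt(S_ii)
    x_adv = x.detach() + eta * grad.sign()
    if S is not None:                       # keep the attack in the eps-ellipsoid
        diff = x_adv - x.detach()
        dist = torch.sqrt(diff @ torch.linalg.solve(S, diff))
        if dist > eps:                      # radial rescaling (sketch; the paper
            x_adv = x.detach() + diff * (eps / dist)  # uses a local line search)
    return x_adv
```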
In the latter case, we set up a local line search problem to project \(x_{x^{\prime},\max}\) back to a point inside the ball. Other gradient-based attacks such as PGD (Madry et al., 2018) or CW (Carlini and Wagner, 2017) can be similarly adapted by taking consecutive gradient steps. Similarly, other more general attacks could be employed (Yuan et al., 2020; Daubener et al., 2020), as well as training techniques (Liu et al., 2019; Wicker et al., 2021). In Section 4, we will consider both FGSM and PGD, which we denote with _Fair-FGSM_ and _Fair-PGD_ to distinguish them from their adversarial attack counterparts.

Outer Problem. The final step needed for the computation of \(\epsilon\)-\(\delta\)-IF is to check whether:

\[\max_{x^{\prime}\in X}\left(\pi(x_{x^{\prime},\max})-\pi(x^{\prime})\right)\leq\delta. \tag{2}\]

Again, exact computation of Eqn (2) is infeasible as it would require solving a non-convex optimisation problem over a possibly large input space. We relax the problem and instead compute the probability that a point sampled from \(X\) is fair. In order to do that, we sample \(n_{\text{samples}}\) points iid from \(X\) and build the following empirical estimator: \(\hat{p}=\sum_{i=1}^{n_{\text{samples}}}\frac{\phi(x_{i})}{n_{\text{samples}}},\quad\text{where }\phi(x_{i})=\begin{cases}1&\text{if }\pi(x_{x_{i},\max})-\pi(x_{i})\leq\delta\\ 0&\text{otherwise}\end{cases}.\) Note that \(\phi(x_{i})\) is a binary variable. Consequently, \(\hat{p}\) approximates the probability that a point sampled from \(X\) satisfies the inner condition in Eqn (1). Lemma 2 below is a straightforward application of the Chernoff bound (Hellman and Raviv, 1970) and guarantees that if \(n_{\text{samples}}\) is large enough, then the approximation error introduced by the empirical estimator can be made arbitrarily small with high confidence.

**Lemma 2**.: _Assume that \(n_{\text{samples}}\) points are sampled iid from \(X\) according to a (possibly unknown) probability measure \(P.\) Call \(p=\mathbb{E}_{x^{\prime}\sim P}[\phi(x^{\prime})].\) Then for any \(\gamma,\theta>0\) such that \(n_{samples}>\frac{1}{2\theta^{2}}\log\left(\frac{2}{\gamma}\right),\) it holds that \(P(|\hat{p}-p|>\theta)\leq\gamma.\)_

Therefore, in order to approximate the solution of the optimisation problem in Equation 2 with statistical confidences \(\theta\) and \(\gamma\), we compute the smallest \(\delta\) such that \(\hat{p}=1\) with \(n_{samples}\) points randomly sampled from the input space.

## 4 Results

We employ _Fair-FGSM_ and _Fair-PGD_ to evaluate and compare the IF of deterministic Neural Networks (dNNs) and Bayesian Neural Networks (BNNs). We approximate the BNNs' posterior either with VI (in particular using variational online Gauss-Newton (Khan et al., 2018)) or with Hamiltonian Monte Carlo (HMC) (Neal, 2012). The NNs are trained to solve the tasks associated to the Adult (Dua and Graff, 2017) and the Folktables (Ding et al., 2021) datasets, where we take gender as the sensitive attribute.2

Footnote 2: Code to run the experiments can be found at [https://github.com/alicedoherty/bayesian-fairness](https://github.com/alicedoherty/bayesian-fairness).

Parametric Analysis. We evaluate how IF is affected by architectural parameters. Namely, across the two similarity metrics, we begin by fixing \(\epsilon=0.1\) and analysing how \(\delta\) is affected by the numbers of hidden layers and of neurons per layer, in ranges typically used for these datasets (Yurochkin et al., 2020; John et al., 2020; Yeom and Fredrikson, 2021; Benussi et al., 2022).
The results for these analyses are plotted in the heatmaps in Figure 1. The three columns of heatmaps respectively show the IF estimations for models learned with dNN, VI, and HMC. The first two rows give the results for the Adult dataset, respectively for _Fair-FGSM_ with the weighted \(\ell_{p}\) metric and _Fair-PGD_ with the Mahalanobis similarity metric. The last two rows show the analogous results on the Folks dataset. We immediately observe how, in the overwhelming majority of the cases, the models learnt by HMC are markedly fairer (i.e., they have lower values of \(\delta\)) than their deterministic counterparts. The same generally applies also to VI models, except for the IF obtained on the Adult dataset by means of _Fair-PGD_ for the Mahalanobis metric (second row, second column). These combined observations almost perfectly mirror the behaviour of approximate Bayesian inference in adversarial robustness settings (Carbone et al., 2020), albeit here over the whole input space, and around \(d_{fair}\)-neighbourhoods. Interestingly, both adversarial robustness and overfitting generally worsen as the model becomes deeper. Instead, we observe that more often than not this pattern is reversed in IF. We posit that, while the NNs are indeed becoming more fragile to attacks as they get deeper, they also become less reliant on each specific input feature; rather, they build on a non-trivial representation of the features, therefore having a tendency to perform better on IF tasks. On the other hand, as expected, when the number of neurons per layer increases, we get less fair models.

Figure 1: Heatmaps for the estimations of individual fairness (\(\delta\)) across various architectures, inference methods (dNN, VI or HMC), datasets, and similarity metrics. The first two rows give results for the Adult dataset (respectively _Fair-FGSM_ with \(\ell_{p}\), and _Fair-PGD_ with Mahalanobis). The last two for the Folktables dataset.

In the left plot of Figure 2 we analyse how individual fairness is affected by \(\epsilon\), i.e., the maximum dissimilarity allowed between two individuals. We give a selection of the results for an architecture with 3 layers and 16 neurons (thick line) or 32 neurons (dashed line) trained on the Adult dataset. Of course, we observe that as \(\epsilon\) increases \(\delta\) increases as well, as a greater \(\epsilon\) implies that we are looking at the models' behaviour on increasingly more dissimilar pairs of individuals. Interestingly, we find that BNNs' tendency to be fairer is consistent across the different values of \(\epsilon\).

Analysis of Posterior Predictive. We experimentally inquire into the reason behind BNNs' improved individual fairness. To this end, for a given BNN posterior predictive distribution, we solve the inner problem of Section 3 over 100 randomly sampled test points. Rather than doing this over the full posterior, we randomly sample 15 times a number \(k\) of weight realisations from the posterior, and compute the predictive posterior over them. We investigate values of \(k\) ranging from 1 (i.e., a single deterministic-like network extracted from the BNN) to 50. We also compare BNNs with an averaging of deterministic neural networks in the form of a Deep Ensemble (DE) (Lakshminarayanan et al., 2017). The results for this analysis are given in the right plot of Figure 2.
Surprisingly, even when using only one posterior sample from the BNN distribution, the models learnt with Bayesian approximation methods are already significantly fairer than those obtained with the frequentist DE technique. Furthermore, especially in the case of HMC, the more samples are drawn from the posterior distribution at prediction time, the fairer the model becomes, with a 50% bias reduction when going from 1 to 50 samples. On the other hand, notice that averaging does not help with DE, whose fairness estimation remains almost constant throughout the plot.

Figure 2: **Left:** Fairness for different maximum distances \(\epsilon\) for deterministic and Bayesian NNs with 3 layers and either 16 (thick lines) or 32 (dashed lines) neurons per layer. **Right:** Changes in empirical distribution of fairness across deterministic and Bayesian predictors computed over an increasing number of posterior samples.

## 5 Conclusion

In this paper we extended adversarial attack techniques to study individual fairness (IF) for BNNs. In a set of experiments we empirically showed that BNNs tend to be intrinsically fairer than their deterministic counterparts. Furthermore, we showed how the particular approximate inference method has an impact on IF, with more approximate methods being less fair compared to approaches such as HMC. Finally, we have empirically shown how the increased fairness is likely due to a combination of Bayesian training and Bayesian averaging, which may have a beneficial effect in reducing the magnitude of the gradients.
2307.09829
What do neural networks learn in image classification? A frequency shortcut perspective
Frequency analysis is useful for understanding the mechanisms of representation learning in neural networks (NNs). Most research in this area focuses on the learning dynamics of NNs for regression tasks, while little for classification. This study empirically investigates the latter and expands the understanding of frequency shortcuts. First, we perform experiments on synthetic datasets, designed to have a bias in different frequency bands. Our results demonstrate that NNs tend to find simple solutions for classification, and what they learn first during training depends on the most distinctive frequency characteristics, which can be either low- or high-frequencies. Second, we confirm this phenomenon on natural images. We propose a metric to measure class-wise frequency characteristics and a method to identify frequency shortcuts. The results show that frequency shortcuts can be texture-based or shape-based, depending on what best simplifies the objective. Third, we validate the transferability of frequency shortcuts on out-of-distribution (OOD) test sets. Our results suggest that frequency shortcuts can be transferred across datasets and cannot be fully avoided by larger model capacity and data augmentation. We recommend that future research should focus on effective training schemes mitigating frequency shortcut learning.
Shunxin Wang, Raymond Veldhuis, Christoph Brune, Nicola Strisciuglio
2023-07-19T08:34:25Z
http://arxiv.org/abs/2307.09829v2
# What do neural networks learn in image classification? A frequency shortcut perspective

###### Abstract

Frequency analysis is useful for understanding the mechanisms of representation learning in neural networks (NNs). Most research in this area focuses on the learning dynamics of NNs for regression tasks, while little for classification. This study empirically investigates the latter and expands the understanding of frequency shortcuts. First, we perform experiments on synthetic datasets, designed to have a bias in different frequency bands. Our results demonstrate that NNs tend to find simple solutions for classification, and what they learn first during training depends on the most distinctive frequency characteristics, which can be either low- or high-frequencies. Second, we confirm this phenomenon on natural images. We propose a metric to measure class-wise frequency characteristics and a method to identify frequency shortcuts. The results show that frequency shortcuts can be texture-based or shape-based, depending on what best simplifies the objective. Third, we validate the transferability of frequency shortcuts on out-of-distribution (OOD) test sets. Our results suggest that frequency shortcuts can be transferred across datasets and cannot be fully avoided by larger model capacity and data augmentation. We recommend that future research should focus on effective training schemes mitigating frequency shortcut learning. Codes and data are available at [https://github.com/nis-research/nn-frequency-shortcuts](https://github.com/nis-research/nn-frequency-shortcuts).

## 1 Introduction

Deep neural networks (DNNs) have been widely used to tackle problems in many fields, e.g. medical data analysis, self-driving vehicles, robotics, and surveillance. However, the underlying predictive processes of DNNs are not completely understood due to the black-box nature of their nonlinear multilayer structure [3]. While a DNN can approximate any function [25], its (hundreds of) millions of parameters limit the understanding of the function approximation process. Analyzing the learned features is a viable way to understand what triggers the predictions, although explaining how DNNs process data needs further exploration [31]. Researchers worked on explaining the predictions of NNs in terms of their input, using Saliency [29], Gradient-weighted Class Activation Mapping [27] and Layer-wise Relevance Propagation [2]. These techniques highlight the area of an image that contributes to the prediction but do not explain why the performance of NNs degrades on OOD data. Recently, an interest in understanding the learning dynamics of NNs from a frequency perspective has grown. NNs are found to learn lower frequencies first in regression tasks [25], as they carry most of the needed information to reconstruct signals [38]. Thus NNs tend to fit low-frequency functions first to data [18]. This biased learning behavior is known as simplicity bias [28], which induces the NNs to learn simple but effective patterns, i.e. shortcut solutions that disregard semantics related to the problem at hand but are simpler for solving the optimization task. For instance, the frequency shortcuts proposed in [34] are sets of frequencies used specifically to classify certain classes. In this work, we empirically analyze the learning dynamics of NNs for image classification and relate it to simplicity-bias and shortcut learning from a frequency perspective.

Figure 1: Images of ‘container ship’ and ‘siamese cat’ and their DFM-filtered versions with only the top-\(5\%\) dominant frequencies (the white dots in the central figures) retained can both be recognized correctly by NNs.
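For reference, the DFM-style filtering shown in Figure 1 boils down to masking the centred Fourier spectrum; a minimal sketch follows (ours; how the per-class frequency ranking is obtained is the identification method of Section 3, and is here simply assumed as a given input):

```python
import numpy as np

def retain_frequencies(img, mask):
    """Keep only the frequencies selected by a binary `mask` over the centred
    2-D spectrum of a grayscale image `img`; zero out all other frequencies."""
    spec = np.fft.fftshift(np.fft.fft2(img))
    return np.real(np.fft.ifft2(np.fft.ifftshift(spec * mask)))

def top_percent_mask(ranking, percent=5.0):
    """Binary mask keeping the top-`percent` entries of a frequency-relevance
    map `ranking` (same shape as the spectrum)."""
    k = max(1, int(ranking.size * percent / 100.0))
    thresh = np.partition(ranking.ravel(), -k)[-k]
    return (ranking >= thresh).astype(float)
```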
Our results indicate that simplicity-biased learn Figure 1: Images of ‘container ship’ and ‘siamese cat’ and their DFM-filtered versions with only top-\(5\%\) dominant frequencies (the white dots in the central figures) retained can both be recognized correctly by NNs. ing in NNs leads to frequency-biased learning, where the NNs exploit specific frequency sets, namely _frequency shortcuts_, to facilitate predictions. These frequency shortcuts are data-dependent and can be either texture-based or shape-based, depending on what best simplifies the objective function (e.g. a unique color, texture, or shape associated with a particular class in a dataset, without necessarily other meaningful semantics). This may impact generalization. We demonstrate this phenomenon through texture-based and shape-based frequency shortcuts in Fig. 1. When we retain only specific subsets of frequencies (identified using a method proposed in this paper) from images of 'container ship' and'siamese cat', the classifier can recognize them correctly. Interestingly, when the same sets of frequencies are retained from images of other classes, the predictions are biased towards these two classes, indicating that the frequency sets are specific for their classification. Different from previous work on regression tasks [25], we investigate the learning dynamics and frequency shortcuts in NNs for image classification. Compared to the work uncovering frequency shortcuts [34], we expand the understanding of them and demonstrate that they can be texture, shape, or color, depending on data characteristics. We propose a metric to compare the frequency characteristics of data and investigate systematically the impact of present/absent shortcut features on OOD generalization. In summary, our **contributions** are: 1. We complement existing studies that showed NNs for regression tasks are biased towards low-frequency [25]. For classification, we find that NNs can exhibit different frequency biases, tending to adopt frequency shortcuts based on data characteristics because of simplicity-bias learning. Our analysis provides valuable insights into the learning dynamics of NNs and the factors influencing their behavior. 2. We propose a method to identify frequency shortcuts, based on culling frequencies that contribute less to classification. These shortcuts are composed of specific frequency subsets that correspond to textures, shapes, or colors, providing further insight into the texture-bias identified by Geirhos _et al_. [12] and background-dependency found in [36]. 3. We systematically examine the influence of frequency shortcuts on the generalization of NNs and find that the presence of frequency shortcut features in an OOD test set may give an illusion of improved generalization. Furthermore, we find that larger model capacity and common data augmentation techniques like AutoAugment [5], AugMix [15], and SIN [11] cannot fully avoid shortcut learning. We recommend further research targeting frequency information to avoid frequency shortcut learning. ## 2 Related works Frequency analysis.Recently, Fourier interpretations of NNs were published. For regression tasks, NNs tend to learn low-frequency components first [25, 37], while initial layers bias towards high-frequency components [7]. In classification, NNs exhibit a bias towards middle-high frequency during testing [1]. The authors in [1] argued that the importance of frequency is data-driven. 
Sensitivity to different frequency perturbations was measured in [39], showing that most NNs are more sensitive to middle-to-high frequency noise. The impact of high-frequency dependence on the robustness of NNs was investigated in [32]. These analyses show that NNs for regression and classification tasks exhibit different frequency dependencies, while there is a lack of analysis on the learning dynamics of NNs for classification. We study what and how NNs learn in classification, highlighting their data-driven behavior and complementing existing work on regression tasks. We uncover that NNs can learn to use specific frequency sets, encompassing both low and high frequencies, to achieve accurate classification.

Shortcut learning. In classification, decision rules based on spurious correlations between data and ground truth, rather than semantic cues, are known as shortcuts [10]. For example, a network may classify images based on the presence of text embedded in the images, rather than the actual image content [20], negatively impacting generalization [35]. Identifying shortcuts learned by NNs might help avoid unwanted learning behavior and thus improve generalization. It is easy to identify shortcuts that are artificially added and visible (e.g. color patches [22], line artefacts [6], or added text [20]). However, for those implicitly existing in data (e.g. particular textures or shapes), identification is difficult. Most methods focus on mitigating the learning of shortcut information in data [9, 21, 23, 26], rather than explicitly identifying it. Wang _et al_. [34] investigated shortcut learning from a frequency perspective and proposed the definition of frequency shortcuts. However, their algorithm for shortcut identification is heavily influenced by the order of frequency removal, and their observations are limited to texture-based shortcuts. In this paper, our frequency shortcut identification method does not have such limitations. We broaden the understanding of frequency shortcuts, study the data-dependency of shortcut features, and provide a more systematic analysis of the impact of shortcuts on OOD generalization.

## 3 Frequency shortcuts in image classification

For regression tasks, it is known that NNs are biased towards learning low-frequency components (LFCs) first during training [25]. This has not been verified for classification tasks. Here we study the learning behavior of NNs in image classification and its relation to shortcut learning and simplicity bias, using both synthetic (Section 3.1) and natural images (Section 3.2). We use synthetic data to study the learning behavior of NNs and show their tendency to discover shortcuts in the frequency domain. Inspired by the insights gained on the synthetic data, we propose a method based on frequency culling to examine the frequency dependency of NNs trained on natural images, which contain intricate frequency information. This allows us to uncover the frequency shortcuts learned by NNs for classification.

### Experiments on synthetic data

Design of synthetic datasets. To study the impact of data characteristics on the spectral bias of NNs and frequency shortcut learning, we generate four synthetic datasets, each with a frequency bias in a different band, from low to high. This allows us to examine the effect of different frequency biases on the learning behavior of NNs. We evenly separate the Fourier spectrum into four frequency bands (see Fig. 2).
The bands are denoted by \(B_{1}\) (the lowest frequency band), \(B_{2}\) and \(B_{3}\) (the mid-frequency bands), and \(B_{4}\) (the highest frequency band). Each dataset contains four classes and images of \(32\times 32\) pixels. An image is generated by sampling at least eight frequencies from the frequency bands associated with the target class (see Table 1), according to a probability density function: \[Pr(r)=S\cdot\frac{1}{r+1},\quad\text{with }S=\frac{1}{\sum_{r=1}^{R}\frac{1}{r+1}}.\] \(R\) is the largest radius and \(r=\sqrt{u^{2}+v^{2}}\) is the radius of frequency \([u,v]\). This prioritizes the sampling of LFCs, mimicking the frequency distribution of natural images. We use \(b\in B=\{B_{1},B_{2},B_{3},B_{4}\}\) to control the frequency bias in the generated data. For instance, in the dataset Syn\({}_{b}\) with \(b=B_{1}\), the frequency bands for classes \(C_{0}\) and \(C_{1}\) are \(\{B_{2},B_{3},B_{4}\}\) while class \(C_{3}\) has frequency band \(B_{1}\). To distinguish between \(C_{0}\) and \(C_{1}\), we embed _special patterns_ consisting of a set of frequencies \([u,v]\) (\(u=v\in\{1,3,5,7,9,11,13,15\}\)) into the images of class \(C_{0}\), which are removed from the images of other classes. The design imposes various levels of classification difficulty by incorporating different levels of data complexity for each class (\(C_{3}<C_{0}<C_{1}\approx C_{2}\)), as observed visually. This aids in comprehending the connection between simplicity-bias learning and the spectral bias of NNs in classification.

Hypothesis. As noted in the theory of simplicity-bias [28], NNs tend to achieve their objective in the simplest way. As a result, NNs for regression tasks approximate LFCs before high-frequency components (HFCs) [25, 37, 17, 1]. Based on this, we hypothesize that NNs might prioritize learning to distinguish classes with the most discriminative frequency characteristics in classification. Thus, what the NNs first learn could depend on data bias rather than being limited to low frequencies. This learning behavior could result in frequency shortcut learning, where the NNs focus on specific frequencies to achieve their objective in a simpler way.

Data characteristics influence what NNs learn first. We conduct experiments on the synthetic data to test this hypothesis. We train ResNet18 models on the synthetic datasets and expect them to distinguish classes like \(C_{0}\) and \(C_{3}\) easily and from the early stages of training, as they carry more distinctive characteristics than others. To evaluate this, we measure their classification performance in the first \(500\) iterations of training by computing the \(F_{1}\)-score per class. This provides insight into whether each class is correctly classified and how many false positives each class attracts. We report the obtained \(F_{1}\)-scores (see Fig. 3) and observe that for class \(C_{3}\) (with a clear frequency bias), the \(F_{1}\)-score is generally higher than that of other classes in the first few iterations, indicating that it is immediately distinguished from others across the four synthetic datasets, followed by class \(C_{0}\). This finding suggests that the more distinguishable characteristics of class \(C_{3}\) play an important role in driving the learning behavior of NNs. Note that, despite the bias in different bands across the four synthetic datasets, class \(C_{3}\) is always learned first, indicating that NNs can learn either low or high frequencies early in training if these are more discriminative than other frequencies.
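To make the construction concrete, the following is a minimal NumPy sketch of how such a frequency-biased sample could be generated. The paper only specifies the sampling density \(Pr(r)\propto 1/(r+1)\) and the band design; the amplitude and phase distributions, the restriction of candidates to one quadrant of the spectrum, and the band-membership test are illustrative assumptions.

```
import numpy as np

H = W = 32          # image size used in the paper
N_BANDS = 4

def band_of(u, v):
    """Map frequency (u, v) to one of four evenly split radial bands B1..B4."""
    r_max = np.hypot(H // 2, W // 2)
    edges = np.linspace(0.0, r_max, N_BANDS + 1)
    idx = np.searchsorted(edges, np.hypot(u, v), side="right") - 1
    return int(np.clip(idx, 0, N_BANDS - 1))

def synth_image(allowed_bands, n_freqs=8, rng=None):
    """Draw n_freqs frequencies from the allowed bands with Pr(r) prop. 1/(r+1)
    and synthesize a real-valued image via the inverse FFT."""
    rng = np.random.default_rng() if rng is None else rng
    cand = [(u, v) for u in range(H // 2) for v in range(W // 2)
            if (u, v) != (0, 0) and band_of(u, v) in allowed_bands]
    w = np.array([1.0 / (np.hypot(u, v) + 1.0) for u, v in cand])
    pick = rng.choice(len(cand), size=n_freqs, replace=False, p=w / w.sum())
    spec = np.zeros((H, W), dtype=complex)
    for i in pick:
        u, v = cand[i]
        coeff = rng.uniform(0.5, 1.0) * np.exp(1j * rng.uniform(0, 2 * np.pi))
        spec[u, v] = coeff
        spec[-u % H, -v % W] = np.conj(coeff)  # Hermitian symmetry -> real image
    return np.fft.ifft2(spec).real

# e.g. a class-C3 sample of Syn_{B1}: frequencies only from the lowest band
img = synth_image(allowed_bands={0})
```

Enforcing Hermitian symmetry on the chosen coefficients keeps the synthesized image real-valued; the grayscale, single-channel output is itself an assumption of this sketch.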
\begin{table} \begin{tabular}{c c c} \hline \hline **class** & **frequency bands** & **special patterns** \\ \hline \(C_{0}\) & \(B-b\) & ✓ \\ \(C_{1}\) & \(B-b\) & - \\ \(C_{2}\) & \(B\) & - \\ \(C_{3}\) & \(b\) & - \\ \hline \hline \end{tabular} \end{table} Table 1: Design details of a synthetic dataset Syn\({}_{b}\) with \(b\in B=\{B_{1},B_{2},B_{3},B_{4}\}\). The special pattern consists of frequencies \([u,v]\) with \(u=v\in\{1,3,5,7,9,11,13,15\}\), which are removed from all classes other than \(C_{0}\).

Figure 2: Evenly separated frequency bands. \(B_{1}\) denotes the lowest band and \(B_{4}\) denotes the highest one.

Thus, _what frequencies are learned first by NNs in classification is driven by simplicity bias and data characteristics._

Data bias and simplicity bias can lead to frequency shortcuts. Based on the frequency characteristics of the synthetic datasets, we examine how NNs find shortcuts in the Fourier domain by comparing the classification results of the NNs tested on the original synthetic datasets and their band-stop versions, where two frequency bands in \(B\) are removed. We report the results using relative confusion matrices (see Fig. 4), computed as: \[\Delta^{C_{i},C_{j}}=(Pred_{bs}^{C_{i},C_{j}}-Pred_{org}^{C_{i},C_{j}})/N_{C_{i}}\times 100,\] where \(Pred_{bs}^{C_{i},C_{j}}\) is the number of samples from class \(C_{i}\) in the band-stopped test set predicted as class \(C_{j}\), \(Pred_{org}^{C_{i},C_{j}}\) is the equivalent on the original test set, and \(N_{C_{i}}\) is the number of samples in class \(C_{i}\). When \(\Delta^{C_{i},C_{i}}\) (\(i=0,1,2,3\)) is larger than or equal to zero, the performance of the model improves or remains the same on the band-stop test sets, indicating that the limited bands provide enough discriminative information for classification; negative values indicate lower performance. Class \(C_{2}\) in the four synthetic datasets is designed to contain frequencies from all bands. If a model can predict class \(C_{2}\) using only frequencies from partial bands instead of considering frequencies across the whole spectrum, then it is likely using frequency shortcuts to classify \(C_{2}\). As observed in Fig. 4, \(\Delta^{C_{2},C_{2}}\) is -1 and 1 for models trained on \(Syn_{B_{1}}\) and \(Syn_{B_{4}}\), respectively. The good performance indicates that NNs apply frequency shortcuts in the limited bands for classifying samples of \(C_{2}\). Moreover, \(\Delta^{C_{0},C_{0}}\) values of models trained on the four synthetic datasets are close to 0, demonstrating that the NNs can recognize samples of \(C_{0}\) when only part of the frequencies (shortcuts) associated with the _special patterns_ are present in the test data. Similar behaviors are observed for other architectures (see results of AlexNet and VGG in the supplementary material). To summarize, the NNs trained on the four synthetic datasets use frequency differently, but they all adopt frequency shortcuts depending on the data characteristics.

### Experiments on natural images

The synthetic experiments show that the frequency characteristics of data affect what NNs learn. To analyze the more intricate frequency distributions of natural images, we introduce a metric to compare the average frequency distributions of individual classes within a dataset. This facilitates the identification of discriminative and simple class-specific frequency characteristics to learn early in training.
While this metric provides valuable insights into the potential learning behavior, a deeper examination of frequency usage by NNs is also needed. To this end, we propose a technique based on frequency culling, which can help uncover frequency shortcuts explicitly. Additionally, we investigate how model capacity and data augmentation impact shortcut learning. As NNs are found to exhibit texture-bias [12] on natural images, we specifically augment data using SIN to create a dataset with more shape-bias. This better demonstrates how texture-/shape-biased data characteristics affect frequency shortcut learning.

Figure 4: Relative confusion matrices of models tested on different band-stop synthetic datasets (e.g. \(B_{14}\) indicates the bands \(B_{1}\) and \(B_{4}\) are used). The top-left figure shows the comparison of the results on the original test set and its band-stopped version for the model trained on \(Syn_{B_{1}}\). Other matrices show the results of other models. Most \(\Delta^{C_{i},C_{i}}\) (\(i=0,1,2,3\)) values are close to or larger than 0, indicating good performance on band-stopped datasets due to learned frequency shortcuts.

Figure 3: \(F_{1}\)-scores of each class in the first \(500\) training iterations. \(C_{3}\) has higher \(F_{1}\)-scores than others at the early training stage, meaning that it is learned first even if it only has frequencies sampled from the highest frequency band.

A frequency distribution comparison metric. From the insights gained on the synthetic experiments, we recognize the importance of examining the frequency characteristics of individual classes within a dataset to comprehensively understand what NNs learn. Thus, we devise a metric called Accumulative Difference of Class-wise average Spectrum (ADCS), which considers that NNs are amplitude-dependent for classification [4]. We compute the average amplitude spectrum difference per channel for each class within a set \(C=\{c_{0},c_{1},\dots,c_{n}\}\) and average it into a one-channel ADCS. The ADCS for class \(c_{i}\) at a frequency \((u,v)\) is calculated as: \[ADCS^{c_{i}}(u,v)=\sum_{\begin{subarray}{c}\forall c_{j}\in C\\ c_{j}\neq c_{i}\end{subarray}}sign(E_{c_{i}}(u,v)-E_{c_{j}}(u,v)),\] where \[E_{c_{i}}(u,v)=\frac{1}{|X^{i}|}\sum_{x\in X^{i}}|\mathcal{F}_{x}(u,v)|\] is the average Fourier spectrum for class \(c_{i}\), \(x\) is an image from the set \(X^{i}\) of images contained in that class, and \(\mathcal{F}_{x}(u,v)\) is its Fourier transform. \(ADCS^{c_{i}}(u,v)\) ranges from \(1-|C|\) to \(|C|-1\). A higher value indicates that a certain class has more energy at a specific frequency than other classes.

Impact of class-wise frequency distribution on the learning process of NNs. We choose ImageNet-10 [16], a reduced version of ImageNet [8], for the following analysis. It has lower computational requirements and greater manageability, compared to the full ImageNet dataset. For larger datasets with more classes, one may expect severe shortcut learning behaviors, as the NNs will tend to find quick solutions to simplify a more difficult classification problem. Using ADCS, we find that the classes 'humming bird' and 'zebra' possess certain distinctive frequency characteristics that can be readily exploited by models to distinguish them from other classes at early training stages. The resulting ADCS of _'humming bird'_ (see Fig. 5(a)) indicates that samples from this class have on average much less energy than other classes across almost the whole spectrum.
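Before turning to the 'zebra' class, here is a concrete reading of the ADCS definition above as a minimal NumPy sketch. For brevity it averages the per-channel spectra into one channel before taking differences, a simplification of the per-channel averaging described in the text.

```
import numpy as np

def avg_spectrum(images):
    """E_ci(u, v): average Fourier amplitude spectrum of a class, with the
    per-channel spectra averaged into one channel (inputs are H x W x C)."""
    return np.mean([np.abs(np.fft.fft2(x, axes=(0, 1))) for x in images],
                   axis=0).mean(axis=-1)

def adcs(images_by_class):
    """ADCS^ci(u, v) = sum over cj != ci of sign(E_ci(u, v) - E_cj(u, v)),
    ranging from 1 - |C| to |C| - 1."""
    E = {c: avg_spectrum(X) for c, X in images_by_class.items()}
    return {ci: sum(np.sign(E[ci] - E[cj]) for cj in E if cj != ci)
            for ci in E}
```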
Conversely, the ADCS of '_zebra_' (see Fig. 5(b)) reveals that images from this class have a marked energy preponderance in the middle and high frequencies, as indicated by the prominence of red color in these frequency ranges. To verify the impact of such frequency characteristics on the learning behavior, we train NNs on ImageNet-10. We inspect the frequency bias in the early training phase by testing models on low- and high-pass versions of the dataset for the first \(1200\) training iterations, rather than on the original test set. We compute the recall and precision of each class and observe that the precision of class '_zebra_' (see Fig. 6(a)) and the recall of class '_humming bird_' (see Fig. 6(b)) are generally higher than those of other classes. This shows that these two classes are learned faster than others. In summary, our findings indicate that NNs for classification can learn and exploit substantial spectrum differences among classes, which serve as highly discriminative features at the early learning stage. This further supports our previous observation on the synthetic datasets that _what is learned first by NNs is influenced by the frequency characteristics of data_.

Figure 5: ADCS of classes 'humming bird' and 'zebra'.

Figure 6: Precision and recall rates of ResNet18 trained on ImageNet-10 for the first \(1200\) iterations.

A frequency shortcut identification method. To identify frequency shortcuts, we propose a method based on culling irrelevant frequencies, similar to the analysis strategy in [1]. We measure the relevance of each frequency to classification by recording the change in loss value when testing a model on images of a certain class with the concerned frequency removed from all channels. The increment in loss value is used as a score to rank the importance of frequencies for classification. Frequencies with higher scores are considered more relevant for classification, as their absence causes a large increase in loss. We compute a one-channel **dominant frequency map (DFM)** for a class by selecting the top-\(X\%\) frequencies according to the given ranking. Using the DFMs, we study the effect of dominant frequencies on image classification and the extent to which they indicate frequency shortcuts (specific sets of frequencies leading to biased predictions for certain classes). To quantify these, we classify all images in the test set retaining only the top-\(X\%\) frequencies of a certain class (i.e. the top-\(X\%\) DFM-filtered test set). We calculate the true positive rate (TPR) and false positive rate (FPR) to evaluate their discrimination power and specificity for a certain class, respectively. We consider classes with high TPR and FPR as instances where the classifier is induced to learn and apply frequency shortcuts. [...] shape or other morphological features of the animal. _The learned frequency shortcuts are impacted significantly by the frequency characteristics of data. They can be texture-based or shape-based and might hinder NNs from learning more meaningful semantics._ There might be cases where frequency shortcuts are not present in the data and thus not learned.

Model capacity vs. frequency shortcuts. The high TPR and FPR for ResNet50 in Table 2 indicate that it is subject to frequency shortcuts for the classification of classes 'airliner' and 'container ship'. Compared to ResNet18's frequency shortcut for class 'zebra', ResNet50 has lower TPR and FPR, indicating less specific dominant frequencies for classifying 'zebra'.
This demonstrates mitigation of learning a frequency shortcut, although another shortcut is learned for class 'airliner'. Additionally, VGG16 learns a frequency shortcut for class 'container ship' (TPR=0.7 and FPR=0.42). We show in the following paragraph that frequency shortcuts affect transformers as well, indicating that shortcuts impact networks across different model capacities and architectures. Thus, larger models cannot necessarily avoid them. This commonality shows that frequency shortcut learning is data-driven, which needs to be considered more explicitly to learn generalizable models.

Transferability of frequency shortcuts. We trained ViT-B on ImageNet-10 and tested it on images processed with the DFMs we had computed for ResNet18+SIN. This tests the dependency of ViT predictions on small sets of frequencies, and the transferability of shortcuts between models or architectures. We present the results in Table 3 and observe shortcuts for the classes 'siamese cat' (TPR=0.82, FPR=0.22) and 'container ship' (TPR=0.92, FPR=0.25). Though having a large model capacity, ViT-B is also subject to frequency shortcuts (shape or texture) to classify the samples of certain classes, in line with the observation in [24]. Moreover, the frequency shortcuts learned by ResNet18+SIN can be exploited by ViT-B, further indicating that frequency shortcuts are data-driven and can be transferred between models.

Table 2: ID test: TPRs and FPRs on ImageNet-10 and the top-\(5\%\) DFM-filtered versions (w/ df), for ResNet18 (plain and trained with AutoAugment, AugMix, and SIN), ResNet50, and VGG16.

Figure 8: The model classifies zebra-pattern clothes with high confidence but misclassifies a horse as an ox. Mixing images of 'zebra cloth' and 'horse' increases the confidence of 'zebra' predictions. This indicates that the model relies on texture over shape information, impairing its ability to generalize and recognize another animal of similar shape but different texture.

Data augmentation vs. frequency shortcuts. As data augmentation techniques are commonly used to improve generalization performance, we investigate their effect in mitigating frequency shortcut learning. We train ResNet18 with these techniques and report the results in Table 2. AugMix worsens the learned frequency shortcut for 'container ship', but mitigates the frequency shortcut for 'zebra'. AutoAugment partially avoids the frequency shortcuts for both 'zebra' and 'container ship'. SIN causes a frequency shortcut for 'siamese cat'. To summarize, appropriate data augmentation may partially reduce frequency shortcut learning, but NNs still tend to find shortcut solutions based on the characteristics of the augmented data.

## 4 Frequency shortcuts and OOD tests

Design of OOD test: ImageNet-SCT. To assess how frequency shortcuts affect OOD generalization, we construct a new test set based on the previous analysis results, ImageNet-SCT (ShortCut Tests). It consists of 10 classes, each containing 70 images in seven different image styles, including _art, cartoon, deviantart, painting, sculpture, sketch, toy_. This dataset expands the coverage of ImageNet-R [14] in terms of image variations. The classes in ImageNet-SCT are related, to some extent, to those in ImageNet-10. For instance, 'zebra' in ImageNet-10 corresponds to 'horse' in ImageNet-SCT, allowing us to test the effect of an absent texture-based shortcut feature, as horse images contain animals with a very similar shape to zebras, but without the texture. Similarly, 'siamese cat' in ImageNet-10 corresponds to 'tabby cat' in ImageNet-SCT, to test the effect of a present shape-based shortcut feature. Furthermore, 'container ship' in ImageNet-10 maps to 'fishing vessel' in ImageNet-SCT, which contains images with similar textures but somewhat different shapes (fishing vessels are much smaller boats), enabling us to evaluate the effect of a present texture-based shortcut. Examples of ImageNet-SCT images are provided in the supplementary material.
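Since both the ID and OOD evaluations below rely on DFM-filtered test sets, the following sketch illustrates the frequency-culling procedure described in Section 3.2. It is deliberately naive: the double loop issues one forward pass per frequency, the conjugate-symmetric removal of each frequency is ignored, batching over a class's images is not shown, and `model` is assumed to be any PyTorch classifier returning logits.

```
import numpy as np
import torch
import torch.nn.functional as F

def dominant_frequency_map(model, images, labels, top_pct=5.0):
    """Score each frequency by the loss increase caused by removing it from all
    channels of a class's images; keep the top `top_pct` percent as the DFM."""
    model.eval()
    n, c, h, w = images.shape
    with torch.no_grad():
        base = F.cross_entropy(model(images), labels).item()
        spec = torch.fft.fft2(images)          # per-channel spectra
        scores = np.zeros((h, w))
        for u in range(h):
            for v in range(w):
                culled = spec.clone()
                culled[:, :, u, v] = 0          # cull one frequency, all channels
                x = torch.fft.ifft2(culled).real
                scores[u, v] = F.cross_entropy(model(x), labels).item() - base
    return scores >= np.percentile(scores, 100.0 - top_pct)  # boolean h x w DFM

def dfm_filter(images, dfm):
    """Retain only the DFM frequencies ('DFM-filtered' test images)."""
    mask = torch.from_numpy(dfm.astype(np.float32))
    return torch.fft.ifft2(torch.fft.fft2(images) * mask).real
```

TPR and FPR for a class are then computed by classifying the whole DFM-filtered test set and counting predictions for that class.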
Frequency shortcuts can impair generalization and create the illusion of improved performance. We test the NNs on ImageNet-SCT and its DFM-filtered versions with the top-\(5\%\) dominant frequencies. From the results on the original ImageNet-SCT, we observe a considerable average drop in TPR for all models (see Table 4). Larger model capacity and data augmentations may not always effectively address frequency shortcuts in certain classes, as observed for 'siamese cat', 'zebra', and 'container ship' in ImageNet-10 (corresponding to 'tabby cat', 'horse', and 'fishing vessel' in ImageNet-SCT). For example, models relying on texture-based shortcut features for 'zebra' in ImageNet-10 fail to capture shape characteristics and perform poorly on similar-shaped animals like 'horse' in ImageNet-SCT (see Fig. 8). While data augmentations can partially mitigate this effect in ID tests, OOD results for 'horse' still indicate the presence of learned frequency shortcuts. Conversely, 'tabby cat' and 'fishing vessel', which are designed to have similar shape or texture characteristics to their corresponding classes in ImageNet-10, exhibit above-average OOD results (higher TPR than average accuracy). Thus, the shape-based and texture-based shortcut features present in the OOD test set are used for classification, giving a false sense of generalization. 'Fire truck' in ImageNet-SCT is a good example of generalization, as no shortcuts were identified, allowing models to learn more global and semantic information.

Table 4: OOD test: TPRs and FPRs on ImageNet-SCT and the top-\(5\%\) DFM-filtered versions (w/ df).

Frequency shortcuts can impair generalization, and their impact can transfer across datasets, resulting in a misleading impression of generalization when shortcut features are included in a new test set. Larger models and data augmentation cannot fully counteract these effects; we thus highlight the need to explore novel data augmentation strategies that explicitly target shortcut mitigation, e.g. leveraging DFMs to induce models to exploit more frequencies rather than shortcut frequencies [33], and to avoid learning behaviors that may impair the generalizability of NNs.

## 5 Conclusions

We conducted an empirical study to investigate what NNs learn in image classification, by analyzing the learning dynamics of NNs from a frequency shortcut perspective. We found from a synthetic example that **NNs learn frequency shortcuts during training to simplify classification tasks, driven by frequency characteristics of data and simplicity-bias**. To address this on natural images, we proposed a metric to measure class-wise frequency characteristics and a method to identify frequency shortcuts. We evaluated the influence of shortcuts on OOD generalization and found that **frequency shortcuts can be transferred to another dataset and, in some cases, give an illusion of improved generalization**. Furthermore, we observed that larger model capacity and data augmentation techniques do not necessarily mitigate frequency shortcut learning. Our study expands previous works on the learning dynamics of NNs for regression tasks, broadens the understanding of frequency shortcuts (which can be either texture-based or shape-based), and provides a more systematic analysis of OOD generalization. We foresee that enhancing the identification of frequency shortcuts and applying proper training schemes that avoid frequency shortcut learning may hold promise in improving generalization.

## Acknowledgements

This work was supported by the SEARCH project ([https://sites.google.com/view/search-utwente](https://sites.google.com/view/search-utwente)), UT Theme Call 2020, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente.
2303.02535
Streaming Active Learning with Deep Neural Networks
Active learning is perhaps most naturally posed as an online learning problem. However, prior active learning approaches with deep neural networks assume offline access to the entire dataset ahead of time. This paper proposes VeSSAL, a new algorithm for batch active learning with deep neural networks in streaming settings, which samples groups of points to query for labels at the moment they are encountered. Our approach trades off between uncertainty and diversity of queried samples to match a desired query rate without requiring any hand-tuned hyperparameters. Altogether, we expand the applicability of deep neural networks to realistic active learning scenarios, such as applications relevant to HCI and large, fractured datasets.
Akanksha Saran, Safoora Yousefi, Akshay Krishnamurthy, John Langford, Jordan T. Ash
2023-03-05T00:57:28Z
http://arxiv.org/abs/2303.02535v2
# Streaming Active Learning with Deep Neural Networks ###### Abstract Active learning is perhaps most naturally posed as an online learning problem. However, prior active learning approaches with deep neural networks assume offline access to the entire dataset ahead of time. This paper proposes VeSSAL, a new algorithm for batch active learning with deep neural networks in streaming settings, which samples groups of points to query for labels at the moment they are encountered. Our approach trades off between uncertainty and diversity of queried samples to match a desired query rate without requiring any hand-tuned hyperparameters. Altogether, we expand the applicability of deep neural networks to realistic active learning scenarios, such as applications relevant to HCI and large, fractured datasets. ## 1 Introduction Active learning considers a supervised learning situation where unlabeled data are abundant, but acquiring labels is expensive (Settles, 2010; Dasgupta, 2011). One example of this might be classifying underlying disorders from histological images, where obtaining labels involves querying medical experts. Another might be predicting drug efficacy, where labels corresponding to candidate molecules could require clinical trials or intensive computational experiments. In these settings, we typically want to carefully consider what samples to request labels for, and to obtain labels for data that are maximally useful for improving the performance of the model. Active learning is a classic problem in machine learning, with traditional approaches typically considering the convex and well-specified regime (Settles, 2010; Dasgupta, 2011; Hanneke, 2014). Much recent interest in active learning has turned to the neural network case, which requires some special considerations. One such consideration is the expense associated with fitting these neural architectures -- when used in conjunction with a sequentially growing training set, as one has in active learning, the model cannot be initialized from the previous round of optimization without damaging generalization performance. Instead, practitioners typically re-initialize model parameters each time new data are acquired and train the model from scratch (Ash and Adams, 2020). This structure has repositioned active learning to focus on the batch domain, where we are interested in simultaneously labeling a batch of \(k\) samples to be integrated into the training set. The model is typically retrained only after the entire batch has been labeled. In the convex case, where a model can easily be updated to accommodate a single sample, active learning algorithms have tended to focus on uncertainty or sensitivity. That is, a label for a given sample should be requested if the model is highly uncertain about its corresponding label, or if incorporating this sample into the training set will greatly reduce the set of plausible model weights. In contrast, a high-performing, batch-mode active learning algorithm must also consider diversity. If two samples are relatively similar to each other, it is inefficient to include them both in the batch, regardless of the model's uncertainty about their labels; having only one such sample labeled and integrated into the current hypothesis may be enough to resolve the model's uncertainty on the other. Popular approaches for batch active learning rely on samplers that require all unlabeled data to be simultaneously available.
This reliance poses several major concerns for the deployment of these algorithms. For one, the run time of these methods is conditioned on the number of unlabeled samples in a way that makes them unusable for extremely large datasets. To exacerbate the issue, it is unclear how to deploy these algorithms on modern databases, where samples might be stored in a fractured manner, and cannot easily be made simultaneously available. It is especially unclear how to perform active learning in a streaming setting, where data are not all simultaneously available, and we do not know how many samples will be encountered. Here we might instead prefer to specify an acceptable labeling rate rather than a fixed acceptable batch size. In this streaming setup, it is further desirable to commit to a decision about whether to include an unlabeled sample in the batch as soon as it is encountered, rather than only after the stream has terminated. As a concrete example, consider an HCI application where a user interacts with the world while wearing an assistive or diagnostic device (Bohus et al., 2022; Singh et al., 2016). The software on the device might involve a classifier that detects objects being interacted with or the activity being performed by the user. Requesting a label to update the model to better classify these phenomena can only be done in the moment; it would be cumbersome to ask the user to provide a label corresponding to an event that occurred far in the past. How can we efficiently identify samples from a data stream for neural networks while respecting a maximum query rate? We propose a simple active learning algorithm, Volume Sampling for Streaming Active Learning (VeSSAL)1, that addresses the concerns mentioned above. VeSSAL is made to accommodate the streaming setting, and as such it only needs to see each unlabeled point once in order to arrive at a decision about whether it should be labeled. This makes VeSSAL attractive even for fixed datasets that might be extremely large or fractured, as these are often interacted with using streaming, distributed database frameworks like Hadoop. VeSSAL is a natural choice for "committal" situations, when labeling decisions need to be made on the fly. On non-sequential datasets, where more conventional active learning algorithms could be exercised, VeSSAL can be significantly faster, especially for large batch sizes. Footnote 1: Code for the implementation of VeSSAL can be found at [https://github.com/asaran/VeSSAL.git](https://github.com/asaran/VeSSAL.git) Despite its simplicity and flexibility, VeSSAL is surprisingly high performing. We show that VeSSAL produces models with predictive capabilities on par with state-of-the-art approaches, even though they are not restricted to this streaming, committal setting. We further demonstrate this to be the case in adversarial situations, where VeSSAL is presented data that have been sorted to induce domain drift. VeSSAL is hyperparameter free, making it a powerful candidate for a wide range of active learning scenarios. The paper proceeds as follows. We present prior related work in Section 2. In Section 3, we present the mathematical formulation for the streaming active learning setting, along with details of our proposed algorithm. We provide empirical support for our proposed approach in Section 4 via experiments on three benchmark datasets and one real-world dataset, in conjunction with two neural network architectures and three different batch sizes. We conclude with a discussion in Section 5. 
## 2 Related Work This section situates VeSSAL with respect to prior work on streaming active learning (Sec. 2.1) as well as batch active learning strategies for training neural networks (Sec. 2.2). ### Streaming Active Learning Active learning has enjoyed many successes for problems in the convex learning setting (Hanneke, 2014; Beygelzimer et al., 2009, 2010; Roth and Small, 2006; Beygelzimer et al., 2010; Huang et al., 2015; Hsu, 2010; Hanneke and Yang, 2015). Beygelzimer et al. (2009) propose a statistically consistent method using importance weighting for actively learning binary classifiers under general loss functions. Krishnamurthy et al. (2017) present a version-space based active learning method with performance guarantees for cost-sensitive multiclass classification. Similar to these methods, there is a long line of theoretical work on active learning for linear models (Hanneke, 2014). While these approaches are indeed designed for the streaming setting, they rely on updating the linear hypothesis after each sample is labeled, precluding them from being used in conjunction with deep neural networks, where updating the model is known to be extremely expensive. Successful streaming-based techniques with theoretical guarantees have been developed for the related setting of adaptive sampling and low-rank matrix approximation (Frieze et al., 2004; Deshpande and Vempala, 2006; Deshpande et al., 2006; Ghashami and Phillips, 2014; Bhaskara et al., 2019). These methods find utility in several problem domains such as online PCA (Boutsidis et al., 2014; Bhaskara et al., 2019), online column subset selection (Bhaskara et al., 2019), online k-means clustering (Braverman et al., 2011) etc. We take inspiration from this line of prior theoretical work to design a streaming algorithm for neural batch active learning. ### Batch Active Learning for Deep Neural Networks In recent years, several advancements have been made in the area of pool-based active learning for deep neural networks (Ren et al., 2021). Prior approaches have either employed diversity-based sampling (Sener and Savarese, 2018; Geifman and El-Yaniv, 2017; Gissin and Shalev-Shwartz, 2019), uncertainty-based sampling (Gal et al., 2017; Ducoffe and Precioso, 2018; Beluch et al., 2018) or both (Ash et al., 2020, 2021). Ash et al. (2020) propose a pool-based deep active learning method which leverages gradient embeddings to capture both diversity and uncertainty by pairing a gradient-based representation with an approximate k-DPP sampling technique. Gudovskiy et al. (2020) give an active learning approach to tackle the distribution shift between the train and test data via a feature density matching method using Fisher kernels. Still, these approaches are not designed to handle the streaming setting. Instead, these samplers require access to the entire candidate pool in order to identify a valuable batch of unlabeled points. Several works have designed active learning approaches specifically for image classification (Kovashka et al., 2016) and object detection (Choi et al., 2021; Brust et al., 2018; Senzaki and Hamelain, 2021). Sun and Gong (2019) leverage deep reinforcement learning to train a data selection policy for training deep neural networks to perform image classification. Roy et al. (2018) employ an uncertainty based sampling approach for active learning via the paradigm of query by committee and use the disagreement between convolutional layers for Single Shot Multibox Detector architectures (Liu et al., 2016). 
However, these approaches are pool-based, i.e. they assume access to the entire dataset ahead of time. Brust et al. (2018) use uncertainty-based margin sampling for streaming active learning with object detectors. They present various methods to aggregate uncertainty estimates for all objects in an image to determine their selection strategy. While their approach is designed for continual object learning settings, it is limited to the problem of object detection and cannot be applied to classification tasks.

Figure 1: A comparison in terms of sampling rate for our proposed tuning approach and choosing fixed \(z_{t}\) values (used to scale the probability mass on a candidate point as \(p_{t}=z_{t}\cdot g(x_{t})^{\top}\Sigma^{-1}g(x_{t})\)). We plot the fraction of the batch size that has been selected as a function of the amount of data in the stream that has been encountered. Fixed scaling values can drastically undersample or oversample, and distribute the labeling budget inequitably across the stream. The active learning round is denoted by line opacity, with darker colors corresponding to higher round numbers — here we show the first ten rounds for each strategy.

```
0: Neural network \(f(x;\theta)\), unlabeled stream of samples \(U\), ideal sampling rate \(q\)
1: Initialize \(t=1\)
2: Initialize \(\widehat{\Sigma}_{0}^{-1}=\lambda^{-1}I_{d}\) {regularized by \(\lambda\) for stability}
3: Initialize \(A_{0}=0_{d,d}\) {covariance over all data}
4: Initialize \(B=\emptyset\) {set of chosen samples}
5: for \(x_{t}\in U\) do
6:   \(A_{t}\leftarrow\frac{t-1}{t}A_{t-1}+\frac{1}{t}g(x_{t})g(x_{t})^{\top}\)
7:   \(p_{t}=q\cdot g(x_{t})^{\top}\widehat{\Sigma}_{t}^{-1}g(x_{t})/\operatorname{tr}(\widehat{\Sigma}_{t}^{-1}A_{t})\)
8:   with probability \(\min(p_{t},1)\):
9:     Query label \(y_{t}\) for sample \(x_{t}\)
10:    \(B\leftarrow B\cup\{(x_{t},y_{t})\}\)
11:    \(\widehat{\Sigma}_{t+1}^{-1}\leftarrow\widehat{\Sigma}_{t}^{-1}-\frac{\widehat{\Sigma}_{t}^{-1}g(x_{t})g(x_{t})^{\top}\widehat{\Sigma}_{t}^{-1}}{1+g(x_{t})^{\top}\widehat{\Sigma}_{t}^{-1}g(x_{t})}\) {rank-1 Woodbury update}
12:  else:
13:    \(\widehat{\Sigma}_{t+1}^{-1}\leftarrow\widehat{\Sigma}_{t}^{-1}\)
14:  \(t\leftarrow t+1\)
15: endfor
16: return labeled batch \(B\) for retraining \(f\)
```
**Algorithm 1** Volume sampling for streaming active learning (VeSSAL)

## 3 Volume Sampling for Streaming Active Learning (VeSSAL)

Neural active learning algorithms that incorporate diversity can largely be thought of as making two design decisions. The first decision is how unlabeled candidate samples should be represented. Common choices include using the penultimate layer representation of the current state of the network (Sener and Savarese, 2018) or using a hypothetical gradient that might be induced by a given sample (Ash et al., 2020). In either case, once data are in this space, the second decision is regarding how unlabeled points should be selected in order to encourage batch diversity. In VeSSAL, these decisions are made to produce a high-performing active learner that is amenable to the streaming, committal setting. Specifically, we assume that each candidate \(x_{t}\) is seen only once, and that we must make a decision about whether or not to include it in the batch \(B\) as soon as it is encountered. Once the labeling budget \(k\) is allocated, we retrain the model and repeat the process. VeSSAL performs approximate volume sampling over unlabeled candidate points in a gradient space with respect to the last layer of the neural network.
For a neural network \(f\) with parameters \(\theta\), last-layer parameters \(\theta_{L}\in\theta\) and cross-entropy loss function \(\ell\), the gradient representation for a sample \(x_{t}\) is \[g(x_{t})=\frac{\partial}{\partial\theta_{L}}\ell(f(x_{t};\theta),\widehat{y}_{t}), \tag{1}\] where \(\widehat{y}_{t}\) denotes the most likely label according to the current state of the model, i.e. \(\widehat{y}_{t}=\operatorname*{argmax}f(x_{t};\theta)\). A typical way of doing volume sampling is to sample a batch of points with probability proportional to the determinant of their gram or covariance matrix (Kulesza et al., 2012). In the latter case, this would mean the probability mass on a batch of samples \(B\) is proportional to \[\det\Big{(}\sum_{x\in B}g(x)g(x)^{\top}\Big{)}, \tag{2}\] where \(|B|=k\), the pre-specified labeling budget. There are several reasons to favor the covariance matrix version over the gram matrix. From a theoretical point of view, when used in an outer product, \(g(x)\) could be thought of as a rank-1 approximation of the Fisher information matrix, \(I(x;\theta):=\mathbb{E}_{y\sim p_{\theta}(\cdot|x)}\nabla^{2}\ell(x,y;\theta)\) (Ash et al., 2021). As such, this construction is reminiscent of a classic goal in active learning, which is to maximize the determinant of the Fisher information (MacKay, 1992). This objective is attractive because, in the realizable setting, it selects samples that maximize the information gained by model parameters after labeling. From a more practical point of view, the covariance matrix will stay fixed in dimensionality even as the batch size \(k\) is changed. In the gram matrix alternative, which is suggested in Ash et al. (2020), the size of the matrix grows with \(k\), potentially becoming intractable for larger batch sizes. A process that samples from a distribution characterized by a determinant like this is referred to as a determinantal point process (DPP). Sampling from a DPP is usually done via Markov Chain Monte Carlo, and making this procedure efficient is an active area of research with wide-ranging statistical applications (Bardenet et al., 2017). Still, the mixing times associated with these algorithms generally make them too inefficient to be used in conjunction with modern active learning algorithms. For example, BADGE (Ash et al., 2020) suggests using kmeans++ as a surrogate for DPP sampling, and Coreset (Sener and Savarese, 2018) uses a furthest-first traversal approach (though not in a gradient space). These sampling approaches are workable surrogates for true volume sampling, but they require all data to be simultaneously accessible. Our algorithm demonstrates that this is not necessary and that near state-of-the-art performance can be obtained by a sampler that (1) sees samples in a streaming fashion, such that each point is only observed once, and that (2) commits to a labeling decision as soon as a sample is encountered. VeSSAL selects a sample \(x_{t}\) for labeling with probability \(p_{t}\) proportional to the determinantal contribution of its gradient when considering other items that have already been selected, \[p_{t} \propto\det\left(\widehat{\Sigma}_{t}+g(x_{t})g(x_{t})^{\top}\right)\] \[=\det(\widehat{\Sigma}_{t})(1+g(x_{t})^{\top}\widehat{\Sigma}_{t}^{-1}g(x_{t}))\] \[\propto g(x_{t})^{\top}\widehat{\Sigma}_{t}^{-1}g(x_{t}).\] Here \(\widehat{\Sigma}_{t}=\sum_{x\in B}g(x)g(x)^{\top}\), the covariance over samples that have already been selected for inclusion in the batch.
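A minimal sketch of this selection rule is given below, pairing the hallucinated last-layer gradient of Eq. (1) with the streaming updates of Algorithm 1. The `penultimate` and `head` accessors are assumed conveniences rather than part of any particular API, and the trace normalization anticipates the auto-tuned scaling derived in the remainder of this section; the authors' reference implementation is available at the repository linked in the introduction.

```
import numpy as np
import torch

def grad_embedding(model, x):
    """g(x) of Eq. (1): for cross-entropy on softmax outputs, the gradient of
    the loss at the hallucinated label w.r.t. the last-layer weights is
    (p - onehot(argmax p)) h^T, with h the penultimate-layer features."""
    with torch.no_grad():
        h = model.penultimate(x)                   # assumed accessor: N x d
        p = torch.softmax(model.head(h), dim=-1)   # assumed accessor: N x K
    e = torch.zeros_like(p)
    e[torch.arange(p.shape[0], device=p.device), p.argmax(dim=-1)] = 1.0
    return ((p - e).unsqueeze(-1) * h.unsqueeze(1)).flatten(1)  # N x (K*d)

class StreamingSampler:
    """Algorithm 1 in miniature: a Woodbury-updated inverse covariance of the
    selected embeddings, plus the running covariance A_t of the whole stream."""
    def __init__(self, dim, q, lam=1e-2):
        self.q, self.t = q, 0
        self.Sinv = np.eye(dim) / lam      # (lam * I)^-1, regularized inverse
        self.A = np.zeros((dim, dim))      # covariance estimate over all data

    def observe(self, g):
        """Return True if the label for this sample should be queried."""
        self.t += 1
        self.A += (np.outer(g, g) - self.A) / self.t       # running mean of g g^T
        p = self.q * (g @ self.Sinv @ g) / max(np.trace(self.Sinv @ self.A), 1e-12)
        if np.random.rand() < min(p, 1.0):
            Sg = self.Sinv @ g
            self.Sinv -= np.outer(Sg, Sg) / (1.0 + g @ Sg)  # rank-1 Woodbury update
            return True
        return False
```

Note that the embedding dimension is the number of classes times the penultimate width, so for large label spaces a projection or per-class chunking of \(g(x)\) would be needed in practice.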
To compute a \(p_{t}\) in practice, \(g(x_{t})^{\top}\widehat{\Sigma}_{t}^{-1}g(x_{t})\) must be scaled by some value \(z_{t}\), which reflects the labeling budget available to the algorithm. Because the amount of data in the stream might not be known, we consider tuning \(z_{t}\) to reflect a desired labeling frequency \(q\). In HCI applications (Bohus et al., 2022; Singh et al., 2016; Wang et al., 2021), for example, we might not know how long a user will interact with a device (the total number of candidate samples), but we might instead have some sense of the maximum acceptable frequency with which a label can be queried. If we instead do know the total number of samples, then we could consider \(q\) to be the ratio between the labeling budget and the size of the candidate set. Specifically, we desire to find some scalar \(z_{t}\) such that \[\mathbb{E}_{x}[p_{t}]=\mathbb{E}_{x}\Big{[}z_{t}\cdot g(x_{t})^{\top}\widehat {\Sigma}_{t}^{-1}g(x_{t})\Big{]}=q. \tag{3}\] Figure 2: Learning curves for different neural active learning methods tested with i.i.d data streams. Two network architectures, three batch sizes, and three datasets are used for evaluation. These plots have been zoomed to highlight discriminative regions, but exhaustive results are shown in the Appendix, and are aggregated in Figure 3 (a). How should this \(z_{t}\) be chosen? Because the statistics of gradient representations \(g(x_{t})\) vary with the state of \(f\), one fixed value is unlikely to work well across model architectures, datasets, batch sizes, and rounds of data selection. Instead, we aim to find an adaptive strategy, such that we both obtain the desired sampling frequency \(q\) and that we do so in a way that does not allocate probability mass disproportionately across temporal regions of the stream. One option for adaptively adjusting \(z_{t}\) as selection progresses might be a multiplicative weights approach, where we select a scaling parameter from a distribution over a number of \(z_{t}\) values, and constantly update this distribution to reflect whatever choice is giving us the best rate. Another option is to use gradient descent, making \(z_{t}\) larger or smaller at each step in the service of minimizing error between the current and target sampling rate \(q\). Unfortunately, these approaches are unlikely to work well, because the underlying distribution given by the determinantal contribution changes with each selected point. Specifically, every time a sample is added to the batch, and \(\widehat{\Sigma}_{t}\) is updated, the distribution of responses \(g(x_{t})^{\top}\widehat{\Sigma}_{t}^{-1}g(x_{t})\) can change drastically. This domain shift precludes adaptive solutions from efficiently finding a suitable \(z_{t}\) like those mentioned above, because they assume the underlying distribution is stationary. 
To circumvent this, we simply rewrite the expectation to disentangle \(\widehat{\Sigma}_{t}^{-1}\) from statistics relating to \(g(x)\): \[\mathbb{E}_{x}\Big{[}z_{t}\cdot g(x_{t})^{\top}\widehat{\Sigma}_{t}^{-1}g(x_{t})\Big{]} =z_{t}\cdot\mathbb{E}_{x}\Big{[}\operatorname{tr}\Big{(}g(x_{t})^{\top}\widehat{\Sigma}_{t}^{-1}g(x_{t})\Big{)}\Big{]}\] \[=z_{t}\cdot\mathbb{E}_{x}\Big{[}\operatorname{tr}\Big{(}\widehat{\Sigma}_{t}^{-1}g(x_{t})g(x_{t})^{\top}\Big{)}\Big{]}\] \[=z_{t}\cdot\operatorname{tr}\Big{(}\widehat{\Sigma}_{t}^{-1}\mathbb{E}_{x}\Big{[}g(x_{t})g(x_{t})^{\top}\Big{]}\Big{)}.\] Here, our ability to find a suitable \(z_{t}\) relies only on our ability to estimate the covariance of \(g(x)\), and is not affected by the frequently changing \(\widehat{\Sigma}_{t}^{-1}\). If we approximate \(\mathbb{E}_{x}\Big{[}g(x_{t})g(x_{t})^{\top}\Big{]}\) as \(\nicefrac{{1}}{{t}}\sum_{i=1}^{t}g(x_{i})g(x_{i})^{\top}\), this immediately suggests a way to compute \(p_{t}\) that is amenable to the streaming setting: \[p_{t}=\frac{q\cdot g(x_{t})^{\top}\widehat{\Sigma}_{t}^{-1}g(x_{t})}{\operatorname{tr}\Big{(}\frac{1}{t}\widehat{\Sigma}_{t}^{-1}\sum_{i=1}^{t}g(x_{i})g(x_{i})^{\top}\Big{)}}. \tag{4}\] Empirically, we find this auto-tuning of the probability mass on each sample to be far more effective than using a fixed value for \(z_{t}\). In Figure 1, we demonstrate that this approach not only consistently matches the desired labeling frequency \(q\), but also distributes our labeling budget equitably across the data stream. This is evident from the identity line between the proportion of data seen and the budget consumed. In contrast, fixed values of \(z_{t}\) can drastically oversample or undersample, by a degree that varies with each round of active learning. Further, because of the nature of the determinant, these fixed-\(z_{t}\) versions sometimes sample far more aggressively at the beginning of the stream than at the end. The complete VeSSAL algorithm is presented as Algorithm 1. In it, the estimated covariance over all samples is denoted as \(A\), which is initialized to all zeros. We increment \(\widehat{\Sigma}\) efficiently using a Woodbury update on each chosen sample [Woodbury, 1950]. One interesting note is that our estimate for \(z_{t}\) is only as good as our estimate of \(\mathbb{E}_{x}\big{[}g(x_{t})g(x_{t})^{\top}\big{]}\), which we obtain as a simple average of outer products of the \(g(x_{t})\) vectors observed in the stream. One consequence of this is that the estimate will be biased if data are ordered in the stream in a non-I.I.D. fashion. In the following section, we empirically demonstrate that this appears not to be an issue -- VeSSAL performs on par with state-of-the-art, non-streaming algorithms regardless of how data are ordered.

## 4 Experiments

**Baselines.** We compare our method against the following set of baselines. Most of these are non-streaming, meaning they have access to all unlabeled data when selecting samples to query. We also introduce two streaming baselines. * **BADGE:** A recent approach that incorporates both uncertainty and diversity in sampling using k-means++ in the hallucinated gradient space (Eq. 1) (Ash et al., 2020) (non-streaming). * **rand:** A naive random sampling baseline (non-streaming). * **conf:** An uncertainty-based method that selects samples with the smallest probability \(p_{\widehat{y}}\) of the top predicted class: \(p_{\widehat{y}}=\max f(x,\theta)\) (Wang and Shang, 2014) (non-streaming).
* **coreset:** A diversity-based method that uses a greedy approximation to the k-center problem on representations from the model's penultimate layer (Sener and Savarese, 2018) (non-streaming). * **VeSSAL-pen:** This baseline is similar to VeSSAL but uses the penultimate layer embeddings instead of hallucinated gradients of the last layer, making it a purely diversity-based approach (streaming). * **stream-uniform:** A naive baseline for the streaming setting where data points are sampled at a fixed frequency as they arrive (streaming). At each round of active learning, streaming algorithms are only permitted to see each unlabeled example once, in whatever order it is presented. Further, they must commit to a labeling decision as soon as a sample is encountered, and are unable to refine their decisions as more data arrive. This puts the streaming approaches at a marked disadvantage in comparison to their non-streaming peers.

Figure 3: Pairwise penalty matrix for all experiments with (a) I.I.D. data streams and (b) non-I.I.D. data streams. Each cell corresponds roughly to the number of times the column algorithm outperforms the row algorithm by a statistically significant amount. Averages are shown at the bottom, where lower values correspond to better-performing algorithms. VeSSAL is the highest-performing streaming approach, and is only bested by BADGE, a non-streaming baseline.

**Datasets.** We evaluate all algorithms on three image benchmarks, namely SVHN [Netzer et al., 2011], MNIST [LeCun et al., 1998], and CIFAR10 [Krizhevsky, 2009], and one real-world dataset from Bohus et al. [2022]. We refer to this dataset as CLOW and use it with permission from the authors. CLOW is collected through an augmented reality (AR) human-computer interaction device, where users provide object labels through a headset as they interact with objects in their home. This dataset includes 43 object classes and \(\sim 11\)K training samples. More details about the dataset and its preprocessing are described in Appendix A.2.

**Setup.** We perform multiple rounds of active learning in all experiments, with a fixed budget \(k\) in each round. On these datasets, where the number of candidates is known, we choose to let the target rate \(q_{t}\) evolve throughout the streaming process as \(q_{t}=\frac{k-|B_{t}|}{n-t}\), where \(n\) is the total number of samples in the unlabeled pool and \(B_{t}\) is the set of samples that have been added to the batch so far (sketched in code below). This allows the algorithm to adjust its sampling behavior in case of a flawed approximation of \(\mathbb{E}_{x}[g(x)g(x)^{\top}]\). Still, even with this precaution, the sampling rate seems to be somewhat constant at \(\frac{k}{n}\) (Figure 1). We emphasize that in the case of real-world streaming settings, where the total number of unlabeled samples is unknown, \(q_{t}\) can be set to any desired frequency. It is worth mentioning that this setup technically makes our approach committal only for a fixed round of active learning. That is, a sample that is not selected at one round will be made available again on the next. Our approach is made to work with truly committal environments, but we adopt this setup so that we can readily compare with non-committal, non-streaming, batch-mode active learning benchmarks. After each round of data acquisition, we train the models from scratch and measure accuracy on a held-out test set. We primarily experiment with two architectures, a two-layer MLP and an 18-layer ResNet [He et al., 2016].
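Picking up the evolving target rate \(q_{t}\) from the Setup above, a minimal sketch (the function and argument names are illustrative):

```
def evolving_rate(k, n, n_selected, t):
    """q_t = (k - |B_t|) / (n - t): spend the remaining label budget uniformly
    over the samples still to arrive in the stream."""
    remaining = n - t
    return 0.0 if remaining <= 0 else max(k - n_selected, 0) / remaining
```

In a truly streaming deployment, where \(n\) is unknown, the rate would simply be held at the desired query frequency \(q\).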
All datasets are considered with both architectures except for MNIST, for which we only use the MLP. Models are trained with the Adam optimizer [Kingma and Ba, 2014] with a fixed learning rate of \(0.001\) until they reach \(>99\%\) training accuracy. We experiment with different budgets per round \(k\in\{100,1\text{K},10\text{K}\}\) for the benchmark datasets and \(k\in\{10,100,1\text{K}\}\) for the CLOW dataset (since it has a total of \(\sim 11\)K training samples). Each experiment was repeated three times, and we report both the mean and the standard error in our learning curve plots. In all experiments, we start with 100 labeled samples and acquire the rest of the labeled samples via active learning. All methods are implemented in PyTorch [Paszke et al., 2017]. We set the regularization parameter \(\lambda\) in Algorithm 1 to \(0.01\) in all VeSSAL experiments. This parameter is needed for numerical stability, ensuring that the matrix inversion remains well-conditioned. In the subsections that follow, we show experimental results under two different assumptions about the data stream: I.I.D. data streams and adversarially ordered data streams.

### I.I.D. Data Stream

In this section we discuss experimental results when the data stream is randomized, meaning there is no induced correlation between consecutive data points or in the order of samples in the stream. We show that VeSSAL is superior or equal to non-streaming algorithms in this setting.

Figure 4: Learning curves for different neural active learning methods tested with non-I.I.D. data streams. Two network architectures, two batch sizes, and three datasets are used for evaluation. These plots have been zoomed to highlight discriminative regions, but exhaustive results are shown in the Appendix, and are aggregated in Figure 3(b).

Representative learning curves averaged over three replicates for various choices of batch size, dataset, and model architecture in this setting are shown in Figure 2. In each, VeSSAL performs about as well as the highest-performing non-streaming baseline, despite being restricted to the demands of the streaming setting. Detailed results for each combination of dataset, batch size, and network architecture are shown in Appendix B. We aggregate results following the protocol of Ash et al. (2020). For each active learning experiment, we only consider a subset of labeling budgets where learning is still progressing. This is due to the fact that with labeling budgets approaching the dataset size, all algorithms achieve similar accuracy. At exponentially spaced intervals, we determine whether a given row algorithm outperforms a given column algorithm by a statistically significant margin according to a two-sided t-test. When this happens we increment the corresponding cell of Figure 3(a) by 1 divided by the total number of evaluations in the experiment. See Appendix A.1 for details. Here, higher-performing algorithms are associated with a lower column-wise average, displayed at the bottom of the figure. We see that, overall, VeSSAL is the highest-performing streaming method. When considering all baselines, VeSSAL is only outperformed by BADGE, which is not encumbered by streaming requirements. In a small number of experiments, VeSSAL surprisingly even manages to outperform BADGE.
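As a concrete illustration of this aggregation protocol, here is a small sketch that builds the penalty matrix from per-replicate accuracies; the data layout `runs[alg][budget]` is an assumption for the example, not the authors' code.

```python
import numpy as np
from scipy.stats import ttest_ind

def penalty_matrix(runs, algs, budgets, alpha=0.05):
    """runs[alg][budget]: list of test accuracies over replicates.
    Cell (i, j) accumulates 1/len(budgets) whenever row algorithm i
    outperforms column algorithm j by a statistically significant margin."""
    P = np.zeros((len(algs), len(algs)))
    for b in budgets:
        for i, row in enumerate(algs):
            for j, col in enumerate(algs):
                if i == j:
                    continue
                t_stat, p_val = ttest_ind(runs[row][b], runs[col][b])
                if p_val < alpha and t_stat > 0:   # row significantly better
                    P[i, j] += 1.0 / len(budgets)
    return P  # lower column averages indicate better-performing algorithms
```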
### Non-I.I.D. Data Stream

To investigate the robustness of our algorithm to non-I.I.D. circumstances, we compare all methods under naturally occurring or artificially induced domain drift, where the observed data distribution is non-stationary. Note that non-streaming baselines are not at all affected by this change, and it is only a burden for streaming approaches that use sequential estimates of data statistics for decision making. Despite the disadvantage, we show that VeSSAL still performs roughly as well as state-of-the-art, non-streaming baselines.

**Artificial Drift** We adversarially sort the unlabeled data to introduce domain drift. To do so, we sort two academic datasets (CIFAR-10, SVHN) by their first principal component.

**Natural Drift** The CLOW data stream is sorted by the timestamp at which objects were encountered and labeled by a user, and hence naturally contains feature drift (Figure 5). For this dataset, the MLP and ResNet-18 architectures have pretrained components. In the MLP case, we use data representations taken from the visual encoder of the multimodal CLIP model (Radford et al., 2021), and although the MLP is trained from scratch at each round, the CLIP feature extractor is fixed. In the ResNet case, we use a model that has been pretrained on ImageNet (Russakovsky et al., 2015), and refine the entire model with actively selected data. Each time new data are acquired, the ResNet is reset to the ImageNet pretrained weights before being updated.

Figure 5: The first round of queries on the data stream from the CLOW dataset for different streaming active learning methods with an MLP and budget \(k=10\) (top: VeSSAL, bottom: uniform rate sampling). Red boxes denote repeated classes and green boxes denote unique classes. VeSSAL only repeats a single class, corresponding to an object view that is quite different than the other selection of the same class. Each queried image is described by its label name and time stamp (in parentheses) depicting the index at which it arrives in the stream. Here, there are about 11.8K candidates, and the indices suggest that sampling mass is well-distributed across the stream.

Figure 4 highlights the effectiveness of VeSSAL under the challenging setting of feature drift in the data stream. VeSSAL performs on par with non-streaming skyline approaches and better than the other streaming baseline methods. Detailed learning curves for all datasets, architectures, and hyperparameters are shown in Appendix C. In Figure 3(b), a pairwise comparison matrix analogous to Figure 3(a) shows that, with the exception of BADGE, VeSSAL outperforms all streaming and non-streaming baselines in the presence of distribution shift. Again, VeSSAL sometimes outperforms BADGE even though BADGE sees all unlabeled data before sampling. Figure 5 contains qualitative evidence that VeSSAL samples diverse images even when data points are correlated and the streaming uniform sampling baseline repeatedly queries duplicate objects.

### Label Amplification

The promise of active learning is that it can deliver significantly more predictive power for a fixed labeling budget than naive sampling. This is demonstrated by Figure 6, where algorithm performance is cast in terms of "label amplification" instead of accuracy. As learning progresses, we plot the ratio between the number of samples used by streaming uniform sampling and the number of samples required by active sampling to achieve the same performance. For a good active learning algorithm, label amplification will be much larger than one, reflecting the increase in labeling efficiency over passive sampling. Our plots average over all experiments conducted, and despite VeSSAL being constrained to the streaming, committal setting, they show that it is roughly as efficient as BADGE in the I.I.D. streaming case and only slightly worse in the non-I.I.D. case.
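The amplification curves can be computed by inverting the passive learning curve; a minimal sketch, assuming accuracies that increase monotonically with the number of labels:

```python
import numpy as np

def label_amplification(labels_active, acc_active, labels_uniform, acc_uniform):
    """Ratio of labels uniform sampling needs to match each accuracy reached
    by the active learner, relative to the labels the active learner used."""
    # invert the (monotone) uniform learning curve: accuracy -> labels needed
    labels_needed = np.interp(acc_active, acc_uniform, labels_uniform)
    return labels_needed / np.asarray(labels_active, dtype=float)
```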
## 5 Discussion

We presented VeSSAL, a new approach for batch active learning with deep neural networks in a streaming setting. Unlike prior pool-based active learning approaches for deep neural networks, our method can commit to queries as soon as samples are made available to the model from a data stream. Even in fixed-data settings, VeSSAL performs roughly as well as state-of-the-art methods, despite the fact that they are not hindered by streaming constraints. We envision several potential benefits of the proposed approach, expanding the applicability of neural networks for real-world, interaction-centric applications.

Figure 6: Label amplification for BADGE and VeSSAL with respect to stream-uniform, averaged over all experiments. VeSSAL achieves roughly the same label efficiency as BADGE in the I.I.D. case, even though VeSSAL is limited by streaming requirements. In the non-I.I.D. setting, which is adversarial for streaming algorithms like VeSSAL but not for pool-based algorithms like BADGE, VeSSAL only does slightly worse than BADGE.

Our algorithm can be run in settings that are inherently streaming or committal, or on datasets that are too large to be entirely stored in one place. Our work also opens up exciting directions for future research. Currently, VeSSAL assumes that the number of label classes is known a priori. In real-world applications, where an interaction environment continues to evolve, the number of label classes may grow over time. Moreover, some queries can be costlier than others. For example, asking a user for an object label in their peripheral vision could distract them from performing the current task. Addressing these challenges is an exciting avenue for further work.

## Acknowledgments

The authors would like to thank Sean Andrist, Dan Bohus, and the Platform for Situated Intelligence (PSI) team at Microsoft Research Redmond for providing inspiration on the problem statement, feedback on our approach, and access to the CLOW dataset.
2309.01067
MQENet: A Mesh Quality Evaluation Neural Network Based on Dynamic Graph Attention
With the development of computational fluid dynamics, the requirements for the fluid simulation accuracy in industrial applications have also increased. The quality of the generated mesh directly affects the simulation accuracy. However, previous mesh quality metrics and models cannot evaluate meshes comprehensively and objectively. To this end, we propose MQENet, a structured mesh quality evaluation neural network based on dynamic graph attention. MQENet treats the mesh evaluation task as a graph classification task for classifying the quality of the input structured mesh. To make graphs generated from structured meshes more informative, MQENet introduces two novel structured mesh preprocessing algorithms. These two algorithms can also improve the conversion efficiency of structured mesh data. Experimental results on the benchmark structured mesh dataset NACA-Market show the effectiveness of MQENet in the mesh quality evaluation task.
Haoxuan Zhang, Haisheng Li, Nan Li, Xiaochuan Wang
2023-09-03T03:38:23Z
http://arxiv.org/abs/2309.01067v1
# MQENet: A Mesh Quality Evaluation Neural Network Based on Dynamic Graph Attention

###### Abstract

With the development of computational fluid dynamics, the requirements for the fluid simulation accuracy in industrial applications have also increased. The quality of the generated mesh directly affects the simulation accuracy. However, previous mesh quality metrics and models cannot evaluate meshes comprehensively and objectively. To this end, we propose MQENet, a structured mesh quality evaluation neural network based on dynamic graph attention. MQENet treats the mesh evaluation task as a graph classification task for classifying the quality of the input structured mesh. To make graphs generated from structured meshes more informative, MQENet introduces two novel structured mesh preprocessing algorithms. These two algorithms can also improve the conversion efficiency of structured mesh data. Experimental results on the benchmark structured mesh dataset NACA-Market show the effectiveness of MQENet in the mesh quality evaluation task. Mesh quality; Graph attention; Mesh preprocessing; Structured mesh; Computational fluid dynamics

## 1 Introduction

Mesh generation is the first step in numerical calculations in Computational Fluid Dynamics (CFD) [1]. In NASA's research report titled "CFD vision 2030 study: a path to revolutionary computational aerosciences" [2], mesh generation is listed as one of the six important research areas for the future. In modern CFD applications, mesh quality evaluation is vital in the mesh generation process [3]. More importantly, mesh quality greatly affects the accuracy of CFD simulation results [4], so researchers have begun to pay close attention to mesh quality evaluation. Traditional mesh quality evaluation usually relies on element-based mesh quality metrics. For 2D or surface meshes, there are quality metrics such as area ratio, element aspect ratio, interior angle size, and smoothness. These metrics cannot comprehensively evaluate the quality of meshes, and there is no objective threshold to measure the quality of a mesh. Instead, the threshold is often determined based on the knowledge and experience of professionals. With the development of deep learning, more and more models have been proposed and widely used in computer graphics [5, 6], computer vision [7, 8] and natural language processing [9, 10]. Deep learning technology aims to improve automation and intelligence in various fields, replacing many manual operations. Some researchers have proposed using deep learning models to evaluate structured mesh quality [11, 12, 13, 14]. These methods use convolutional neural networks (CNNs), graph neural networks (GNNs) and other models to achieve good results on structured mesh quality evaluation tasks. However, due to the dynamic topology of different structured data, the features hidden in the meshes are difficult to extract dynamically. These deep learning-based evaluation techniques pay roughly the same attention to the majority of the topology in a structured mesh, resulting in a partial loss of accuracy. Thus, structured mesh quality evaluation based on deep learning is still a challenging task. To better adapt deep learning to graph data, GNNs were proposed [15]. Structured meshes can naturally form a graph, so it is very appropriate to adopt GNNs for the structured mesh evaluation task.
But existing algorithms for converting structured meshes into graphs cannot express the rich information in meshes well. In this paper, we employ dynamic graph attention to accomplish intelligent mesh quality evaluation. We propose MQENet, a graph attention neural network, for evaluating structured mesh quality without human interaction. We also design two efficient structured mesh preprocessing algorithms for converting structured meshes into graphs, which can be adapted to our proposed graph neural network. The main contributions of this study include:

* We propose MQENet, a structured mesh quality evaluation neural network based on dynamic graph attention. The network represents meshes as graphs and realizes the graph classification task with mesh quality labels by extracting features from structured meshes.
* We propose two novel structured mesh preprocessing algorithms, the point-based graph with proximity distance and the element-based graph with sparse operation. We improve the conversion efficiency compared to existing algorithms and allow the generated graph to carry more features.
* We evaluate MQENet on the mesh benchmark dataset NACA-Market. Experimental results illustrate the effectiveness of MQENet. Meanwhile, MQENet can process mesh data faster than other models.

## 2 Related work

### Mesh quality evaluation

Mesh quality evaluation has been widely studied in various ways. Since it is difficult to define an evaluation function that takes the entire mesh as input, mesh quality evaluation typically utilizes element-based mesh quality metrics. Several mesh quality evaluation metrics have been proposed. Li et al. [16] summarized the mesh quality metrics for commonly used 2D and 3D meshes. For 2D mesh cells, their focus is mainly on two metrics: shape and size. 2D meshes mainly include unstructured meshes and structured meshes. The metrics for 2D unstructured meshes primarily include element length, aspect ratio and skewness. The aspect ratio of a 2D unstructured (triangular) mesh [17] is measured as: \[\text{aspect ratio}=\frac{L_{\max}\,(L_{0}+L_{1}+L_{2})}{4\sqrt{3}\,S} \tag{1}\] where \(L_{\max}=\max(L_{0},L_{1},L_{2})\) is the length of the longest side of the element, \(L_{0},L_{1},L_{2}\) are the lengths of the sides of the element and \(S\) is its area; the ratio equals 1 for an equilateral triangle. Skewness [18] is a mesh angle metric that indicates how close a mesh cell is to an ideal cell: \[\text{skewness}=\max\left[\frac{Q_{\max}-Q_{\text{ideal}}}{180-Q_{\text{ideal}}},\frac{Q_{\text{ideal}}-Q_{\min}}{Q_{\text{ideal}}}\right] \tag{2}\] where \(Q_{\max}\) and \(Q_{\min}\) are the maximum and minimum angles (in degrees) in the mesh element, respectively, and \(Q_{\text{ideal}}\) is the angle of an ideally shaped mesh element. Although these mesh quality metrics are widely used in practice, their thresholds are often highly subjective, and it is difficult to evaluate the quality of a mesh cell from a comprehensive perspective.
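For concreteness, a minimal sketch of these two element-level metrics for a single triangle, following Eqs. (1) and (2); the function names are illustrative:

```python
import numpy as np

def aspect_ratio(p0, p1, p2):
    """Eq. (1): equals 1 for an equilateral triangle, grows for stretched cells."""
    L = [np.linalg.norm(p1 - p0), np.linalg.norm(p2 - p1), np.linalg.norm(p0 - p2)]
    # triangle area from the 2D cross product of two edge vectors
    S = 0.5 * abs((p1[0] - p0[0]) * (p2[1] - p0[1]) - (p1[1] - p0[1]) * (p2[0] - p0[0]))
    return max(L) * sum(L) / (4.0 * np.sqrt(3.0) * S)

def skewness(angles_deg, q_ideal=60.0):
    """Eq. (2): 0 for an ideal element, approaching 1 as the cell degenerates."""
    q_max, q_min = max(angles_deg), min(angles_deg)
    return max((q_max - q_ideal) / (180.0 - q_ideal), (q_ideal - q_min) / q_ideal)
```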
In addition to the above mesh quality metrics, traditional machine learning methods have also been used to evaluate mesh quality. Chetouani [19] proposed a 3D mesh quality metric based on feature fusion, which evaluates the 3D mesh quality by using a Support Vector Regression (SVR) model. Combined with specified mesh quality metrics and geometric attributes, the SVR is used to predict quality scores. Sprave et al. [20] extract low-level attributes through the neighborhood graph of the mesh. They compute the quality index of a mesh element by evaluating and summarizing the neighborhood quality index values. However, traditional machine learning methods generally have weak generalization, making it difficult to deal with data of different distributions after training. In recent years, deep learning has been applied in various fields. To overcome the above shortcomings, Chen et al. [11] proposed GridNet, a structured mesh quality evaluation method based on CNNs. They also propose the structured mesh dataset NACA-Market. GridNet takes the mesh file as input and automatically evaluates the mesh quality. Chen et al. [12] also proposed an automatic hexahedral mesh evaluation framework, MVENet. It takes hexahedral mesh data as input and studies the effect of the mesh point distribution on numerical accuracy based on region segmentation and a deep neural network. They employ a supervised learning process to fit the relationship between hexahedral meshes and their quality. Wang et al. [13] proposed GMeshNet, a graph neural network to evaluate the quality of structured meshes, which converts the mesh quality evaluation task into a graph classification task. To sum up, traditional mesh quality metrics have significant limitations. These metrics can only evaluate a single aspect of mesh quality and rely on the expertise of professionals. Traditional machine learning methods primarily rely on manual techniques to construct mesh features when evaluating mesh quality. Deep learning-based evaluation techniques struggle to handle the dynamic topology in structured meshes, resulting in difficulties achieving satisfactory accuracy.

### Graph neural network

Recent research on analyzing graphs with machine learning has gained significant attention due to the expressive power of graphs [21]. GNNs are deep learning based methods that operate on the graph domain. Graph Convolutional Networks (GCNs) are the core part of GNNs. GCNs exploit the graph structure and aggregate node information from neighborhoods in a convolutional manner [22]. The Graph Attention Network (GAT) is one of the most popular GCN architectures [23] and is considered a very advanced architecture in graph representation learning. Zheng et al. [24] used GAT to obtain different levels of representations to efficiently solve natural language problems. Leeson et al. [25] proposed GRAVES, an algorithm selection technique for software based on GAT, which enables neural networks to make more accurate predictions by using several attention mechanisms. Cirstea et al. [26] proposed Graph-attention Recurrent neural Networks (GRN) to achieve accurate forecasting of time series. Graph pooling is an essential component of GNNs. In order to obtain effective and reasonable graph representations, many scholars have proposed various types of graph pooling designs. Ying et al. [27] proposed a differentiable graph pooling module called DiffPool, which generates hierarchical representations of graphs. It can be seamlessly combined with different GNN architectures in an end-to-end manner. Ranjan et al. [28] proposed adaptive structure aware pooling, a sparse and differentiable pooling method that addresses the limitations of previous graph pooling architectures. Ma et al. [29] proposed a path integral-based graph neural network (PAN) for graph classification and regression tasks. Taken together, research on GNNs has been increasing, and GNNs have been applied in a growing number of scenarios.
Since meshes can be easily represented as graph data, it is most suitable to use GNNs to process meshes. However, to the best of our knowledge, there has been very little research on GNNs for mesh quality evaluation tasks. The potential application value of GNNs needs to be further explored. Therefore, this paper proposes a structured mesh quality evaluation network based on graph attention. The proposed method improves the performance on mesh datasets compared with other approaches.

## 3 Methods

In this section, we elaborate on the technical details of two efficient mesh preprocessing algorithms and our proposed mesh quality evaluation neural network, MQENet.

### Overview

To extract more useful high-level features from mesh elements in structured meshes, we design a neural network based on dynamic graph attention. The architecture of MQENet is shown in Figure 1. Our proposed MQENet employs a hierarchical structure, including graph convolutional layers, graph pooling layers, graph readout operations and a multi-layer perceptron. MQENet utilizes two efficient mesh preprocessing algorithms, the point-based graph with proximity distance and the element-based graph with sparse operation, to convert meshes into graphs. GATv2 [30] and SAGPool [31] are selected as the graph convolution layer and the graph pooling layer, respectively. The graph convolution layer learns the weights of neighboring nodes through a dynamic attention mechanism, which realizes a weighted aggregation of neighbor information. The graph pooling layer computes a projection score via a learnable vector. These scores are used to select the top-ranked nodes, thereby retaining the most valuable mesh elements. Finally, a Multi-Layer Perceptron (MLP) is used to classify the mesh quality.

### Efficient mesh preprocessing algorithms based on node and element representations

GNNs are deep learning models for processing graph data. This paper takes structured mesh data as input, so converting mesh data into graph data is a key challenge. Two algorithms have previously been proposed [13], but both have some problems. The point-based graph representation uses only mesh nodes as graph nodes and adjacent edges as graph edges. This method does not consider the relationship between non-adjacent meshes. On the other hand, the element-based graph representation sets the diagonal elements to 1, which increases the complexity and time of the mesh data conversion. This paper proposes two efficient mesh preprocessing algorithms based on node and element representations. Inspired by Pfaff et al. [32], the point-based graph with proximity distance for structured meshes is designed. It introduces the concept of proximity distance on the basis of the previous point-based graph. A mesh element not only has an adjacency relationship with the elements it is directly connected to, but can also be related to other elements that are spatially close yet not connected. Given a fixed distance, proximity distance edges join mesh nodes in the graph that are spatially close to the current mesh node, which enriches the features in the mesh data. By modifying the diagonal elements, the element-based graph with sparse operation is designed, which improves the efficiency of the mesh data conversion.
#### 3.2.1 The point-based graph with proximity distance

A graph is a pair \(G=(V,E)\), where \(V=\{v_{i}\mid i=1,\dots,4N\}\) is the set of vertices obtained from the mesh nodes, \(N\) is the number of mesh elements, and \(E=\{e_{ij}\mid e_{ij}=(v_{i},v_{j}),\,(v_{i},v_{j})\in V^{2}\}\) is the set of edges obtained from the connections between meshes. For an undirected graph, \(e_{ij}\) is identical to \(e_{ji}\). A graph corresponding to a mesh consists of its own nodes and elements. We obtain the feature matrix \(X_{S}\in\mathbb{R}^{4N\times 3}\) and the adjacency matrix \(A_{S}\in\mathbb{R}^{4N\times 4N}\) by analyzing the coordinates in the mesh file. Proximity distance edges are created by spatial proximity: that is, given a fixed value \(r_{P}\) on the order of the smallest mesh edge lengths, we add a proximity distance edge between nodes \(i\) and \(j\) if \(|dist_{ij}|<r_{P}\). So we can obtain the proximity distance matrix \(A_{P}\). The input feature matrix \(X\) and adjacency matrix \(A\) of the mesh are generated by \[X=X_{S},\quad A=A_{S}\cup A_{P} \tag{3}\] This encourages using proximity distance edges to pass information between nodes that are spatially close, but distant in the mesh domain, as shown in Figure 2. We use a coordinate-based sparse matrix to store the obtained feature matrix \(X\) and adjacency matrix \(A\), which improves the mesh processing speed in MQENet.

Figure 1: The architecture of MQENet. MQENet takes structured meshes as input and uses two mesh preprocessing schemes, the point-based graph and the element-based graph. After preprocessing, GATv2 is used to extract mesh quality features in graph nodes. SAGPool is used to filter the most important graph nodes for mesh quality. It is worth noting that graph readout operations are performed after each graph pooling. Finally, all readout graphs are concatenated and classified to obtain the mesh quality labels.

Figure 2: The figure shows the structured mesh of a partial wing. It can be seen that the two points A, B are completely unconnected if the proximity distance is not introduced.
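As an illustration, proximity distance edges can be generated with a spatial query; a minimal sketch using SciPy's k-d tree, where `coords` holds the mesh node coordinates and `r_p` is the fixed radius (both names are assumptions for the example):

```python
import numpy as np
from scipy.spatial import cKDTree

def proximity_edges(coords, r_p):
    """Return index pairs (i, j), i < j, with |dist_ij| < r_p (Section 3.2.1)."""
    tree = cKDTree(coords)
    return tree.query_pairs(r=r_p, output_type="ndarray")  # shape (M, 2)

# A_P can then be assembled as a coordinate-format sparse matrix:
# from scipy.sparse import coo_matrix
# i, j = proximity_edges(coords, r_p).T
# A_P = coo_matrix((np.ones(2 * len(i)), (np.r_[i, j], np.r_[j, i])),
#                  shape=(len(coords), len(coords)))
```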
#### 3.2.2 The element-based graph with sparse operation

In addition to using mesh nodes as graph nodes, using mesh elements (mesh cells) as graph nodes is also a viable representation. Wang et al. [13] proposed a graph representation technique based on mesh elements. However, they set the diagonal elements of the adjacency matrix to 1 when processing meshes, which undoubtedly increases the subsequent calculation cost. This paper improves the method of Wang et al. by setting the diagonal elements to 0 and proposes a graph representation method based on mesh elements with sparse operations. First, the element feature matrix \(X\in\mathbb{R}^{N\times f}\) and the point-based adjacency matrix of mesh nodes \(A_{N}\in\mathbb{R}^{4N\times 4N}\) are obtained from the raw mesh file, where \(f\) is the number of features (we found that \(f=6\) for a structured mesh). Then, we define an element management matrix \(E=[e_{ij}]\in\mathbb{R}^{4N\times N}\), where \(e_{ij}=1\) if node \(i\) belongs to element \(j\), and \(e_{ij}=0\) otherwise. Finally, we can obtain the strength matrix between mesh elements: \[S=E^{T}A_{N}E \tag{4}\] If two structured mesh elements share one edge, their strength is 6. So the adjacency matrix of mesh elements \(A\) is \[A_{ij}=\left\{\begin{array}{ll}1,&\mathrm{if}\;S_{ij}=6\\ 0,&\mathrm{otherwise}\end{array}\right. \tag{5}\]

As shown in Figure 3, this paper takes a \(2\times 2\) structured mesh (four elements) as an example to describe the calculation of the adjacency matrix of mesh elements \(A\) in detail. Since the calculation does not involve mesh node coordinates, no coordinates are assigned to each node here. The adjacency matrix of mesh nodes \(A_{N}\) and the element management matrix \(E\) are \[A_{N}=\begin{bmatrix}0&1&0&1&0&0&0&0&0\\ 1&0&1&0&1&0&0&0&0\\ 0&1&0&0&0&1&0&0&0\\ 1&0&0&0&1&0&1&0&0\\ 0&1&0&1&0&1&0&1&0\\ 0&0&1&0&1&0&0&0&1\\ 0&0&0&1&0&0&0&1&0\\ 0&0&0&0&1&0&1&0&1\\ 0&0&0&0&0&1&0&1&0\end{bmatrix},\quad E=\begin{bmatrix}1&0&0&0\\ 1&1&0&0\\ 0&1&0&0\\ 1&0&1&0\\ 1&1&1&1\\ 0&1&0&1\\ 0&0&1&0\\ 0&0&1&1\\ 0&0&0&1\end{bmatrix} \tag{6}\] Through equation (4), the strength matrix can be obtained as \[S=E^{T}A_{N}E=\begin{bmatrix}8&6&6&4\\ 6&8&4&6\\ 6&4&8&6\\ 4&6&6&8\end{bmatrix} \tag{7}\] Through equation (5), the adjacency matrix of the mesh elements in Figure 3 can be obtained as \[A=\begin{bmatrix}0&1&1&0\\ 1&0&0&1\\ 1&0&0&1\\ 0&1&1&0\end{bmatrix} \tag{8}\]

Figure 3: The process of converting mesh elements to graph data. Each mesh cell (mesh element) is treated as a node in the graph. An adjacency relationship only occurs when two meshes share an edge.

The mesh preprocessing algorithm proposed in this paper can convert a mesh with 30,400 elements into graph data in 0.73 seconds (the algorithm designed by Wang et al. takes 0.79 seconds).
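The element-based conversion can be written compactly in NumPy; a minimal sketch of Eqs. (4)-(5), assuming `A_N` and `E` are given as dense 0/1 arrays (a sparse implementation would follow the same algebra):

```python
import numpy as np

def element_adjacency(A_N, E):
    """A_N: node adjacency (4N x 4N, zero diagonal); E: element membership (4N x N).
    Two elements are adjacent iff they share an edge, i.e. their strength is 6."""
    S = E.T @ A_N @ E                 # strength matrix, Eq. (4)
    A = (S == 6).astype(np.int8)      # element adjacency, Eq. (5)
    np.fill_diagonal(A, 0)            # keep the diagonal zero (sparse operation)
    return A
```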
### Dynamic graph attention layers

Different from node-level tasks, the graph classification task requires global information about the graph data. This information includes both the structural information of the graph and the attribute information of each node. In this paper, GATv2, based on dynamic graph attention, is selected as the graph convolution layer in MQENet. The difference from GAT is that dynamic graph attention can assign a different ranking of attention scores for different query nodes, whereas the static attention mechanism of GAT produces the same ranking for all of them. This shortcoming makes it difficult for GAT to distinguish the quality of different structured meshes and can even hinder GAT from fitting the training data. Some GNN architectures treat the weights of all neighbors as the same. To solve this problem, GAT uses the score function \(\alpha:\mathbb{R}^{d}\times\mathbb{R}^{d}\rightarrow\mathbb{R}\) to score node pairs \((h_{i},h_{j})\): \[\alpha(h_{i},h_{j})=\text{LeakyReLU}(b^{T}\cdot[Wh_{i}\|Wh_{j}]) \tag{9}\] where \(b\in\mathbb{R}^{2d^{\prime}},W\in\mathbb{R}^{d^{\prime}\times d}\) are learnable parameters, \(\|\) denotes vector concatenation and LeakyReLU is an activation function based on the ReLU. We can obtain a more expressive dynamic attention by modifying the order of operations in GAT. The problem with GAT is that \(W\) and \(b\) in its scoring function are applied consecutively, so they can collapse into a single linear layer. To solve this problem, we apply \(b^{T}\) only after the non-linearity: \[\alpha(h_{i},h_{j})=b^{T}\text{LeakyReLU}(W\cdot[h_{i}\|h_{j}]) \tag{10}\] In this way, GATv2 has powerful feature extraction capabilities for the dynamic topology in structured meshes. For any two structured mesh elements or nodes, we can obtain the corresponding dynamic attention score \(\alpha\), as shown in Figure 1. Additionally, LayerNorm [33] and LeakyReLU are applied. The feature matrix \(X_{h}^{\prime}\in\mathbb{R}^{N\times n}\) passed to the next pooling layer is: \[X_{h}^{\prime}=\text{GATv2}(\text{LeakyReLU}(\text{LayerNorm}(X_{h})),A_{h}) \tag{11}\] where \(X_{h}\in\mathbb{R}^{N\times m}\) and \(A_{h}\in\mathbb{R}^{N\times N}\) are the feature matrix and the adjacency matrix from the structured meshes, respectively. \(N\) is the number of graph nodes, \(m\) is the number of input features and \(n\) is the number of output features. Since GNNs with more than two layers are very prone to over-smoothing problems, we add a residual connection mechanism [34] to the graph convolutional layers to prevent this problem: \[X_{h}^{\prime}=X_{h}^{\prime}+X_{h} \tag{12}\]

### Self attention pooling layers

Most graph pooling techniques need to calculate an assignment matrix of nodes in the process of clustering. However, the space-time complexity of computing the assignment matrix is proportional to the number of nodes and edges of the graph. Mesh data may contain a large number of nodes and elements, which makes it difficult for most graph pooling techniques to be applied to mesh quality evaluation tasks. Thus, we adopt SAGPool to pool the graph data. As shown in Figure 1, SAGPool adaptively learns the importance of nodes from the graph through graph convolution, and then uses TopK [35] to discard nodes. Specifically, aggregation operations are used to assign an importance score to each node: \[Z=\tanh(D_{h}^{-\frac{1}{2}}A_{h}D_{h}^{-\frac{1}{2}}X_{h}^{\prime}W_{att}) \tag{13}\] where \(\tanh\) is the activation function, \(D_{h}\in\mathbb{R}^{N\times N}\) is the degree matrix of \(A_{h}\) and \(W_{att}\in\mathbb{R}^{n\times 1}\) is a weight parameter. The pooling operation can then be performed according to the importance scores and the topology of the graph. Based on the scores calculated by Equation (13), only \(\lceil kN\rceil\) nodes are retained: \[idx=\text{top-rank}(Z,\lceil kN\rceil),\quad Z_{\text{mask}}=Z_{\text{idx}} \tag{14}\] where \(k\in(0,1]\) is the pooling ratio. Stacking SAGPool layers repeatedly performs hierarchical graph pooling and finally reduces every graph to the same dimension. The reconstructed feature matrix and adjacency matrix are obtained by \[X_{h+1}=X_{h[idx,:]}^{\prime}\odot Z_{\text{mask}},\quad A_{h+1}=A_{h[idx,idx]} \tag{15}\]

### Graph readout operations

To read out graph data from MQENet, we obtain a more complete graph representation by concatenating a global average readout operation and a global maximum readout operation. Due to the different node positions in the graph, it is difficult to obtain an accurate representation of each node. We choose JK-net [36] to obtain accurate representations at different levels and to flexibly use different neighborhood ranges for each node, achieving a better structure-aware representation. The final representation is the input to an MLP for classification. Additionally, BatchNorm [37] and LeakyReLU are adopted to stabilize the training process.
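A condensed sketch of one such stage in PyTorch Geometric, whose GATv2Conv and SAGPooling layers implement the operators above; the assembly and hyperparameters here are illustrative, not the authors' exact configuration:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GATv2Conv, SAGPooling, global_mean_pool, global_max_pool

class MQEBlock(torch.nn.Module):
    """One convolution + pooling + readout stage (Sections 3.3-3.5)."""
    def __init__(self, dim=12, ratio=0.3):
        super().__init__()
        self.norm = torch.nn.LayerNorm(dim)
        self.conv = GATv2Conv(dim, dim)            # dynamic attention, Eq. (10)
        self.pool = SAGPooling(dim, ratio=ratio)   # self-attention pooling, Eq. (13)

    def forward(self, x, edge_index, batch):
        h = self.conv(F.leaky_relu(self.norm(x)), edge_index)   # Eq. (11)
        h = h + x                                                # residual, Eq. (12)
        h, edge_index, _, batch, _, _ = self.pool(h, edge_index, batch=batch)
        # readout after each pooling stage: concat of mean and max, Section 3.5
        readout = torch.cat([global_mean_pool(h, batch),
                             global_max_pool(h, batch)], dim=-1)
        return h, edge_index, batch, readout
```

Stacking several such blocks and concatenating their readouts before the final MLP mirrors the hierarchical architecture of Figure 1.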
## 4 Experiments

In this section, we evaluate MQENet on the benchmark structured mesh dataset NACA-Market. The datasets, implementation details and experimental results are introduced.

### Datasets

Currently, there are very few benchmark structured mesh datasets available for mesh evaluation tasks. Only two datasets, NACA-Market and AirfoilSet, exist. Among them, only NACA-Market is a public dataset. So we choose the NACA-Market dataset to evaluate MQENet. NACA-Market is a mesh quality evaluation dataset of NACA0012 airfoils with eight labels, totaling 10,240 meshes. Each subset of 1,024 meshes corresponds to airfoils of one size, with a total of ten sizes. All the meshes belong to eight different quality labels, with an average of 1,280 meshes per label. The eight labels are derived from three structured mesh quality evaluation metrics, namely orthogonality, smoothness and distribution, as shown in Figure 4. In the experiments, we use the ten NACA-Market subsets, each of which includes 1,024 meshes and 8 labels, to train MQENet separately and evaluate its performance.

### Implementation details

After converting the structured meshes into graphs, we split the data into training, validation and test sets with a ratio of 60%, 20% and 20%, and shuffle the data to ensure the distribution. We set the number of input features, output features, network layers and hidden layer units to 6, 8, 4 and 12, respectively. We select AMSGrad [38] with an initial learning rate of 1e-2 as the optimizer, and dynamically decrease the learning rate according to the training situation. Furthermore, we use the negative log likelihood loss as the loss function and add L2 regularization with a weight decay of 1e-4. The batch size is set to 32. The pooling ratio for all pooling layers is set to 0.3. The experiments are carried out on an NVIDIA Tesla A100-40G. We use the gradient clipping technique [39] to address the gradient explosion problem that often occurs in neural networks. At the same time, the early stopping method is adopted to detect problems in the training process early.

Figure 4: Examples of meshes in NACA-Market. Red represents high quality mesh cells. Blue represents low quality mesh cells. The green part in (c) indicates that the mesh smoothness is poor.

### Network evaluation results

First, we evaluate the capabilities of MQENet on the test set of the NACA-Market dataset. The quality identification results of structured meshes with different characteristics are shown in Table 1. Here, W represents a mesh without defects, N-O a mesh with poor orthogonality, N-S a mesh with poor smoothness, N-D a mesh with poor distribution, N-OS a mesh with poor orthogonality and smoothness, and N-OD, N-SD and N-OSD are defined analogously for the remaining defect combinations. It can be seen that MQENet achieves very good results. The accuracy is above 81% for all eight labels of structured meshes, reaching up to 83.31%. This illustrates the effectiveness of MQENet on structured mesh datasets. Then, we can also see that when a mesh has multiple defects, MQENet sometimes identifies it as having a single defect. For example, in the N-OS results, MQENet identified 10.74% of the meshes as N-O. There are similar problems for other labels. We believe that this is a labeling problem in the dataset. Previous studies have directly divided the NACA-Market dataset into eight categories. In doing so, the eight categories are set as completely independent labels, and the relationships between labels are not considered, which leads to the above problems. Finally, to prove the superiority of our method, we perform comparative experiments against GMeshNet. We use two metrics, recall and accuracy, to measure the performance of the neural network. As shown in Table 2, we can see that the accuracy of GMeshNet trained on the NACA-Market dataset is not as good as that of MQENet.
In terms of orthogonality and smoothness, the recall reaches 82.96% and 81.64%, respectively, which is higher than GMeshNet. In terms of test accuracy, MQENet reaches 82.67%, which is also better than GMeshNet. More importantly, MQENet can process mesh data faster than GMeshNet. In summary, compared to other methods, our proposed MQENet has better accuracy and recall. This demonstrates that MQENet performs well on structured mesh quality evaluation tasks.

\begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline Labels & W (\%) & N-O (\%) & N-S (\%) & N-D (\%) & N-OS (\%) & N-OD (\%) & N-SD (\%) & N-OSD (\%) \\ \hline W & **83.31** & 0.00 & 2.59 & 12.91 & 0.00 & 0.00 & 1.19 & 0.00 \\ N-O & 0.00 & **82.96** & 0.00 & 0.00 & 7.30 & 9.74 & 0.00 & 0.00 \\ N-S & 9.52 & 0.00 & **81.64** & 0.00 & 7.76 & 0.00 & 1.08 & 0.00 \\ N-D & 0.00 & 0.00 & 0.00 & **82.48** & 0.00 & 10.01 & 7.51 & 0.00 \\ N-OS & 0.00 & 10.74 & 7.05 & 0.00 & **82.21** & 0.00 & 0.00 & 0.00 \\ N-OD & 0.00 & 9.67 & 0.00 & 0.00 & 0.00 & **82.79** & 0.00 & 7.54 \\ N-SD & 0.00 & 0.00 & 10.42 & 3.03 & 0.00 & 0.00 & **81.38** & 5.17 \\ N-OSD & 0.00 & 2.93 & 0.00 & 0.00 & 0.00 & 9.05 & 5.11 & **82.91** \\ \hline \hline \end{tabular} \end{table} Table 1: Confusion matrix of MQENet on NACA-Market. The diagonal elements represent the percentages for which the predicted label is equal to the true label for different structured meshes.

\begin{table} \begin{tabular}{c c c} \hline \hline Mesh property & MQENet (\%) & GMeshNet (\%) \\ \hline Orthogonality & 82.96 & 80.73 \\ Smoothness & 81.64 & 78.57 \\ Distribution & 82.48 & 82.69 \\ Accuracy & 82.67 & 80.93 \\ \hline \hline \end{tabular} \end{table} Table 2: Recall and accuracy of MQENet and GMeshNet. For the other network in the comparison, the results are obtained by re-running the code on the NACA-Market dataset using the same training-test partition as our method.

The network structure we designed adapts well to the dynamic topology in different structured meshes. By filtering important nodes, the hidden features in structured meshes can be captured more accurately, so that MQENet can better separate structured meshes with different quality labels in the feature space.

### Ablation studies

In this section, to illustrate the effectiveness of MQENet, this paper performs ablation studies on three aspects: mesh preprocessing algorithms, hyper-parameters and network structure.

#### 4.4.1 Analysis of mesh preprocessing algorithms

Section 3.2 proposes two efficient structured mesh preprocessing algorithms, and we conduct experiments on each preprocessing algorithm separately; only the element-based graph with sparse operation is adopted for the main MQENet evaluation. Compared with the work of Wang et al., we introduce the proximity distance in the point-based graph, which slightly improves accuracy, but the experimental results are still unsatisfactory. We find that the point-based graph does not even achieve 70% accuracy in the mesh quality evaluation task. The point-based graph with proximity distance is a point-level representation scheme, which cannot represent the complex topology in structured meshes well. Moreover, the mesh density near the boundary is usually high due to subsequent simulation requirements. This directly leads to numerous nodes being generated in the corresponding regions of point-based graphs. Training GNNs on a large number of graph nodes suffers from the neighbor explosion problem, where the dependencies of nodes grow exponentially with the number of message passing layers.
Yet these boundary mesh points are often an important factor in determining the quality of structured meshes.

#### 4.4.2 Analysis of hyper-parameters

In this part, we analyze the impact of different hyper-parameters on MQENet. Choosing appropriate hyper-parameters can improve the efficiency and accuracy of neural networks. We discuss two factors, the activation function and the pooling ratio. The experimental results are shown in Table 3. The pooling ratio controls how many nodes are kept at each selection step in the graph pooling layer. For each activation function, the best results occur at a pooling ratio of 0.4, which means that a larger pooling ratio retains more features. The results are relatively poor at a pooling ratio of 0.2, which shows that deleting too many nodes reduces the size of the neural network, but the accuracy decreases as well.

\begin{table} \begin{tabular}{c c c c c} \hline \hline Pooling ratio & ELU (\%) & ReLU (\%) & GeLU (\%) & LeakyReLU (\%) \\ \hline 0.2 & 80.72 & 80.87 & 80.55 & 81.22 \\ 0.3 & 81.70 & 81.82 & 81.49 & 82.67 \\ 0.4 & 81.95 & 82.36 & 82.15 & 82.78 \\ \hline \hline \end{tabular} \end{table} Table 3: Accuracy for different hyper-parameters (rows: pooling ratio; columns: activation function).

The other important hyper-parameter is the activation function. It introduces non-linearity, allowing the network to learn smooth decision boundaries instead of approximating them with complex linear combinations. We select four activation functions, ELU, ReLU, GeLU and LeakyReLU, for the experiments. For a given pooling ratio, LeakyReLU yields the best accuracy. Compared to the other activation functions, LeakyReLU allows negative values to pass, which widens the dynamic range and allows neurons to activate more easily. For MQENet, the LeakyReLU function better activates the ability of neurons to capture the quality features of structured meshes, and it is better suited to the dynamic graph attention in the model.

#### 4.4.3 Analysis of network structure

In addition to the influence of the mesh preprocessing algorithms and hyper-parameters on MQENet, the model selection for the network structure is also important. As the core part of the GNN, the graph convolutional layer is the factor most likely to have an impact on the accuracy. Here we choose several classic graph convolutional layers to compare with GATv2, namely GCN [22], GraphConv [40] and GAT [23]. Experimental results for the different networks are shown in Table 4.

\begin{table} \begin{tabular}{c c} \hline \hline Method & Accuracy (\%) \\ \hline GCN & 77.87 \\ GraphConv & 78.14 \\ GAT & 80.93 \\ GATv2 & **82.67** \\ \hline \hline \end{tabular} \end{table} Table 4: Accuracy for different networks.

From the results, we can see that GATv2 performs better than the other graph convolutional layers on the NACA-Market dataset. GATv2 enhances the representation of hidden features in structured meshes by computing a dynamic attention score for each node. Features related to mesh quality are then mined, enabling the classification of structured meshes with different labels. The other graph convolutional layers struggle with the dynamic topology in structured meshes because they have no attention mechanism or only a static one.

## 5 Conclusions

The quality of the mesh is a critical factor in the accuracy of CFD simulations. However, traditional mesh quality metrics and learning-based mesh quality evaluation techniques have been unable to meet the industry's increasing demand for higher CFD simulation accuracy. Nodes and cells in structured meshes can naturally form a graph, so it is very appropriate to adopt graph neural networks for the mesh evaluation task.
Therefore, a structured mesh quality evaluation neural network based on dynamic graph attention, MQENet, is proposed. We also design two novel structured mesh preprocessing algorithms, the point-based graph with proximity distance and the element-based graph with sparse operation, to convert meshes to graphs, which further enhances the conversion efficiency. We evaluate MQENet on the structured mesh dataset NACA-Market and demonstrate that MQENet outperforms other methods in the mesh quality evaluation task. In future work, we will further explore neural networks suitable for point-based graphs. To address the problem of labels not being independent in the NACA-Market dataset, we plan to change the eight-category problem into a multi-label problem (three labels), which may avoid the problem of mutual influence between defects.

## Availability of data and materials

The datasets during the current study are available in [https://github.com/cheminhal1234/NACA-Market](https://github.com/cheminhal1234/NACA-Market).

## Ethics approval and consent to participate

Not applicable.

## Competing interests

The authors declare that they have no competing interests.

## Consent for publication

Not applicable.

## Authors' contributions

Haoxuan Zhang: Conceptualization, Methodology, Writing. Haisheng Li: Funding acquisition, Project administration, Data curation. Nan Li: Formal analysis, Writing, Supervision. Xiaochuan Wang: Investigation, Writing, Supervision.

## Funding

This work is supported by the National Natural Science Foundation of China (No. 62277001, No. 62272014, No. 62201017), and the Scientific Research Program of Beijing Municipal Education Commission KZ202110011017.

## Acknowledgements

This work was carried out at the National Supercomputer Center in Tianjin, and the calculations were performed on the Tianhe new generation supercomputer. We also thank the Beijing Technology and Business University 2023 Postgraduate Research Ability Improvement Program Project for funding.

## Authors' information

Haoxuan Zhang, male, received his bachelor's degree from Beijing Technology and Business University, China in 2022. He is currently pursuing a Master of Engineering degree at Beijing Technology and Business University, China. His research interests include deep learning and mesh generation. Haisheng Li (corresponding author), male, received his Ph.D. degree in computer graphics from Beihang University, China in 2002. He is a professor in the School of Computer Science and Engineering, Beijing Technology and Business University, China. His current research interests include mesh generation, computer graphics and intelligent information processing. Nan Li, male, received his bachelor's and Ph.D. degrees from Beijing Jiaotong University in 2004 and 2010, respectively. He is a professor at Beijing Technology and Business University. His main research interests are design methodology, computer graphics and intelligent engineering. Xiaochuan Wang, male, received his master's and Ph.D. degrees from Beihang University in 2012 and 2019, respectively. He is an associate professor at Beijing Technology and Business University. His main research interests are virtual reality and computer graphics.
## Author details \({}^{1}\)School of Computer Science and Engineering, Beijing Technology and Business University, Beijing 100048, China. \({}^{2}\)Beijing Key Laboratory of Big Data Technology for Food Safety, Beijing 100048, China. \({}^{3}\)National Engineering Laboratory For Agri-product Quality Traceability, Beijing 100048, China.
2303.11207
Investigating Topological Order using Recurrent Neural Networks
Recurrent neural networks (RNNs), originally developed for natural language processing, hold great promise for accurately describing strongly correlated quantum many-body systems. Here, we employ 2D RNNs to investigate two prototypical quantum many-body Hamiltonians exhibiting topological order. Specifically, we demonstrate that RNN wave functions can effectively capture the topological order of the toric code and a Bose-Hubbard spin liquid on the kagome lattice by estimating their topological entanglement entropies. We also find that RNNs favor coherent superpositions of minimally-entangled states over minimally-entangled states themselves. Overall, our findings demonstrate that RNN wave functions constitute a powerful tool to study phases of matter beyond Landau's symmetry-breaking paradigm.
Mohamed Hibat-Allah, Roger G. Melko, Juan Carrasquilla
2023-03-20T15:40:28Z
http://arxiv.org/abs/2303.11207v3
# Investigating Topological Order using Recurrent Neural Networks

###### Abstract

Recurrent neural networks (RNNs), originally developed for natural language processing, hold great promise for accurately describing strongly correlated quantum many-body systems. Here, we employ 2D RNNs to investigate two prototypical quantum many-body Hamiltonians exhibiting topological order. Specifically, we demonstrate that RNN wave functions can effectively capture the topological order of the toric code and a Bose-Hubbard spin liquid on the kagome lattice by estimating their topological entanglement entropies. We also find that RNNs favor coherent superpositions of minimally-entangled states over minimally-entangled states themselves. Overall, our findings demonstrate that RNN wave functions constitute a powerful tool to study phases of matter beyond Landau's symmetry-breaking paradigm.

## I Introduction

Landau symmetry breaking theory provides a fundamental description of a wide range of phases of matter and their phase transitions through the use of local order parameters [1]. Despite the fact that a great deal of our theoretical and experimental investigations of interacting quantum many-body systems have been developed with the aim of studying local order parameters, it is well known that the most intriguing strongly correlated phases of matter may not be easily characterized through these observables. Instead, several states of matter seen in modern theoretical and experimental studies are characterized using non-local order parameters that rely on the phases' topological properties [2; 3; 4]. Topological order, in particular, refers to a type of order characterized by the emergence of quasi-particle anyonic excitations, topological invariants, and long-range entanglement, which typically do not appear in traditional forms of order. As a result of these properties, topologically ordered phases have been suggested as an important building block for the development of a protected qubit resistant to perturbations and errors [5; 6; 7]. Interestingly, such qubits have been devised recently at the experimental level [8]. While most manifestations of topological order are dynamical in nature (e.g., anyon statistics, ground state degeneracy, and edge excitations [9]), topological order can also be characterized directly in terms of the ground state wave function and its entanglement. In particular, a probe for topological order is the topological entanglement entropy (TEE) [9; 10], which offers a characterization of the global entanglement pattern of topological ground states not present in conventionally ordered systems. Notably, the TEE is readily accessible for large classes of topological orders [9; 11], in numerical simulations based on quantum Monte Carlo (QMC) [12; 13; 14] and the density matrix renormalization group (DMRG) [15; 16], as well as in experimental realizations of topological order based on gate-based quantum computers [4]. Machine learning (ML) techniques offer an alternative approach to studying quantum many-body systems and have proved useful for a wide array of tasks including the classification of phases of matter [17; 18; 19; 20], quantum state tomography [21; 22], finding ground states of quantum systems [23; 24; 25; 26; 27; 28; 29; 30; 31; 32], studying open quantum systems [33; 34], and simulating quantum circuits [35; 36; 37], among many others [38; 39; 40; 41].
In particular, neural-network representations of quantum many-body states have been shown to be capable of expressing topological order using, e.g., restricted Boltzmann machines [42; 43; 44; 45], convolutional neural networks [17] and autoregressive neural networks [46]. Here we use recurrent neural networks (RNNs) [47; 48; 49] as an ansatz wave function [31; 32] to investigate topological order in 2D through the estimation of the TEE. We focus on two model Hamiltonians exhibiting topological order, namely Kitaev's toric code [6; 7] and a Bose-Hubbard model on the kagome lattice previously shown to host a gapped quantum spin liquid with non-trivial emergent \(\mathbb{Z}_{2}\) gauge symmetry [12; 14; 50]. In our study, we use the Kitaev-Preskill construction [10] and a finite-size scaling analysis of the entanglement entropy to extract the TEE. We find convincing evidence that RNNs are capable of expressing ground states of Hamiltonians displaying topological order. We also find evidence that the RNN wave function is naturally biased toward finding superpositions of minimally entangled states, as reflected in the calculations of the entanglement entropy and Wilson loop operators for the toric code. Overall, our results indicate that RNNs can represent phases of matter beyond the conventional Landau symmetry-breaking paradigm.

## II 2D RNNs

Our main aim is to study topological properties of Hamiltonians using an RNN wave function ansatz. Since the quantum systems we study are stoquastic [51], we consider an ansatz with positive amplitudes to model the ground state wave function [31]. Complex extensions of RNN wave functions for non-stoquastic Hamiltonians have been explored in Refs. [31; 32]. To model a positive RNN wave function, we write our ansatz in the computational basis as: \[\Psi_{\mathbf{\theta}}(\mathbf{\sigma})=\sqrt{p_{\mathbf{\theta}}(\mathbf{\sigma})},\] where \(\mathbf{\theta}\) denotes the variational parameters of the ansatz \(|\Psi_{\mathbf{\theta}}\rangle\), and \(\mathbf{\sigma}=(\sigma_{1},\sigma_{2},\ldots,\sigma_{N})\) is a basis state configuration. A key characteristic of the RNN wave function is its ability to estimate observables with uncorrelated samples through autoregressive sampling [31; 52]. This is achieved by parameterizing the joint probability \(p_{\mathbf{\theta}}(\mathbf{\sigma})\) in terms of its conditionals \(p_{\mathbf{\theta}}(\sigma_{i}|\sigma_{<i})\), through the probability chain rule \[p_{\mathbf{\theta}}(\mathbf{\sigma})=p_{\mathbf{\theta}}(\sigma_{1})p_{\mathbf{\theta}}(\sigma_{2}|\sigma_{1})\cdots p_{\mathbf{\theta}}(\sigma_{N}|\sigma_{N-1},\ldots,\sigma_{2},\sigma_{1}).\] The conditionals are given by \[p_{\mathbf{\theta}}(\sigma_{i}|\sigma_{<i})=\mathbf{y}_{i}\cdot\mathbf{\sigma}_{i},\] where \(\mathbf{y}_{i}=\mathrm{Softmax}(U\mathbf{h}_{i}+\mathbf{c})\) and '\(\cdot\)' denotes the dot product. The hidden state \(\mathbf{h}_{i}\) is calculated recursively as [49] \[\mathbf{h}_{i}=f(W[\mathbf{\sigma}_{i-1};\mathbf{h}_{i-1}]+\mathbf{b}), \tag{1}\] where the input \(\mathbf{\sigma}_{i-1}\) is a one-hot encoding of \(\sigma_{i-1}\) and the symbol \([\cdot;\cdot]\) corresponds to the concatenation of two vectors. Furthermore, \(U\), \(W\), \(\mathbf{b}\), and \(\mathbf{c}\) are learnable weights and biases and \(f\) is an activation function. The sequential operation of the RNN is shown in Fig. 1(a), where the RNN cell, i.e., the recurrent relation in Eq. (1), is depicted as a blue square.
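As an illustration of this autoregressive structure, here is a minimal NumPy sketch of sampling a configuration and its amplitude from a 1D positive RNN wave function. The cell is a plain tanh RNN following Eq. (1) (the gated variants used in the paper would replace it), and the parameter shapes are assumptions for the example: `W` is (d_h, 2 + d_h), `U` is (2, d_h).

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    return np.exp(z) / np.exp(z).sum()

def sample_rnn_wavefunction(W, b, U, c, n_sites, rng):
    """Draw one spin configuration and its amplitude sqrt(p(sigma))."""
    h = np.zeros(b.size)              # initial hidden state h_0 (null vector)
    sigma = np.zeros(2)               # null input sigma_0 (one-hot, local dim 2)
    config, log_p = [], 0.0
    for _ in range(n_sites):
        h = np.tanh(W @ np.concatenate([sigma, h]) + b)   # Eq. (1)
        y = softmax(U @ h + c)                            # conditional p(s_i | s_<i)
        s = rng.choice(2, p=y)
        log_p += np.log(y[s])
        sigma = np.eye(2)[s]          # one-hot encoding of the sampled spin
        config.append(s)
    return np.array(config), np.exp(0.5 * log_p)          # amplitude = sqrt(p)
```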
As each of the conditionals \(p_{\mathbf{\theta}}(\sigma_{i}|\sigma_{<i})\) [31] is normalized, the distribution \(p_{\mathbf{\theta}}\), and thus the quantum state \(|\Psi_{\mathbf{\theta}}\rangle\), are normalized. Notably, by virtue of the sequential structure built into the RNN ansatz, it is possible to obtain exact samples from \(p_{\mathbf{\theta}}\) by sampling the conditionals \(p_{\mathbf{\theta}}(\sigma_{i}|\sigma_{<i})\) sequentially, as illustrated in Fig. 1(b). The sampling scheme is parallelizable and can produce fully uncorrelated samples distributed according to \(p_{\mathbf{\theta}}\) without the use of potentially slow Markov chains [31; 32]. As our aim is to study 2D quantum systems with periodic boundary conditions, we use 2D RNNs [31; 53], through the modification of the 1D relation in Eq. (1) to a recursion that encodes the 2D geometry of the lattice, i.e., \[\mathbf{h}_{i,j}=f\Big(W[\mathbf{\sigma}_{i-1,j};\mathbf{\sigma}_{i,j-1};\mathbf{\sigma}_{\mathrm{mod}(i+1,L_{x}),j};\mathbf{\sigma}_{i,\mathrm{mod}(j+1,L_{y})};\mathbf{h}_{i-1,j};\mathbf{h}_{i,j-1};\mathbf{h}_{\mathrm{mod}(i+1,L_{x}),j};\mathbf{h}_{i,\mathrm{mod}(j+1,L_{y})}]+\mathbf{b}\Big). \tag{2}\] Here \(L_{x}\) and \(L_{y}\) are the width and length of the 2D lattice, respectively. In our study, we choose \(L_{x}=L_{y}=L\). Additionally, \(\mathbf{h}_{i,j}\) is a hidden state with two indices for each site in the 2D lattice, which is computed from the inputs and the hidden states of the nearest neighboring sites. Since \(\mathbf{h}_{i,j}\) contains information about the history of generated variables \(\sigma_{i,j}\), it can be used to compute the conditionals \[p_{\mathbf{\theta}}(\sigma_{i,j}|\sigma_{<i,j})=\mathrm{Softmax}(U\mathbf{h}_{i,j}+\mathbf{c})\cdot\mathbf{\sigma}_{i,j}. \tag{3}\] The additional variables \(\mathbf{\sigma}_{\mathrm{mod}(i+1,L_{x}),j}\), \(\mathbf{\sigma}_{i,\mathrm{mod}(j+1,L_{y})}\) and hidden states \(\mathbf{h}_{\mathrm{mod}(i+1,L_{x}),j}\), \(\mathbf{h}_{i,\mathrm{mod}(j+1,L_{y})}\) allow us to model systems with periodic boundary conditions, so that the ansatz accounts for the correlations between physical degrees of freedom across the boundaries. This approach has also been suggested and implemented in Ref. [46]. We note that during autoregressive sampling, if any of the input vectors in Eq. (2) has not been generated yet, we initialize it to a null vector, so as to preserve the autoregressive nature of the RNN wave function, as illustrated in Fig. 1(b). Furthermore, Fig. 1(c) illustrates the autoregressive sampling path in 2D as well as how information is transferred among the RNN cells. Importantly, we use an advanced version of 2D RNNs which incorporates a gating mechanism, as previously done in Refs. [31; 46; 54; 55]. Additional details can be found in App. A. We also note that lattice symmetries could be implemented in our RNN ansatz to improve the variational accuracy, as shown in Refs. [31; 55]; however, we do not pursue this direction in our study.
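A schematic NumPy sketch of the periodic 2D recursion in Eq. (2) for a single site; neighbor states that have not been generated yet are passed as null vectors, and all names are illustrative:

```python
import numpy as np

def hidden_2d(W, b, sigmas, hiddens, i, j, Lx, Ly):
    """Compute h_{i,j} from the four neighbors of site (i, j), Eq. (2).

    sigmas, hiddens: dicts mapping already-generated sites (i, j) to their
    one-hot input / hidden state; ungenerated neighbors default to nulls."""
    d_in, d_h = 2, b.size
    null_s, null_h = np.zeros(d_in), np.zeros(d_h)
    nbrs = [(i - 1, j), (i, j - 1), ((i + 1) % Lx, j), (i, (j + 1) % Ly)]
    s = np.concatenate([sigmas.get(n, null_s) for n in nbrs])
    h = np.concatenate([hiddens.get(n, null_h) for n in nbrs])
    return np.tanh(W @ np.concatenate([s, h]) + b)
```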
To ameliorate this limitation, we supplement the VMC scheme with a pseudo-entropy whose objective is to help the optimization escape local minima [55; 57; 59]. The new objective function is defined as \[F_{\mathbf{\theta}}(n)=E_{\mathbf{\theta}}-T(n)S_{\mathrm{classical}}(p_{\mathbf{\theta}}), \tag{4}\] where \(F_{\mathbf{\theta}}\) is a variational pseudo-free energy. The Shannon entropy \(S_{\mathrm{classical}}\) of \(p_{\mathbf{\theta}}\left(\mathbf{\sigma}\right)\) is given by \[S_{\mathrm{classical}}(p_{\mathbf{\theta}})=-\sum_{\mathbf{\sigma}}p_{\mathbf{\theta}}(\mathbf{\sigma})\ln\left(p_{\mathbf{\theta}}(\mathbf{\sigma})\right), \tag{5}\] where the sum goes over all possible configurations \(\{\mathbf{\sigma}\}\) in the computational basis. The pseudo-entropy \(S_{\mathrm{classical}}\) and its gradients are evaluated by sampling the RNN wave function. Furthermore, \(T(n)\) is a pseudo-temperature that is annealed from some initial value \(T_{0}\) to zero as follows: \(T(n)=T_{0}(1-n/N_{\text{annealing}})\), where \(n\in[0,N_{\text{annealing}}]\) and \(N_{\text{annealing}}\) is the total number of annealing steps. This scheme is inspired by the regularized variational quantum annealing scheme in Refs. [32; 55; 57]. More details about our training scheme are given in App. B. We also provide the hyperparameters in App. C. ### Topological entanglement entropy A powerful tool to probe topologically ordered states of matter is the so-called topological entanglement entropy (TEE) [61; 62; 63; 10; 11; 12; 13; 64]. The TEE can be extracted by computing the entanglement entropy of a spatial bipartition of the system into \(A\) and \(B\), which together comprise the full system. For many phases of 2D matter, the Renyi-\(n\) entropy \(S_{n}(A)\equiv\frac{1}{1-n}\ln(\text{Tr}(\rho_{A}^{n}))\) satisfies the area law \(S_{n}(A)=aL-\gamma+\mathcal{O}(L^{-1})\). Here \(L\) is the size of the boundary between \(A\) and \(B\), \(\rho_{A}=\text{Tr}_{B}|\Psi\rangle\langle\Psi|\) is the reduced density matrix of subsystem \(A\), \(|\Psi\rangle\) is the state of the system, and \(\gamma\) is the TEE. The latter detects non-local correlations in the ground state wave function and plays the role of an order parameter for topological phases, similar to the notion of a local order parameter in phases displaying long-range order. Interestingly, measuring a specific non-zero value of \(\gamma\) can be a clear sign of the existence of topological order in a system of interest. Additionally, since the TEE is shown to be independent of the choice of Renyi index \(n\) for a contractible region \(A\) [62], we can use the swap trick [65] with our RNN wave function ansatz [66; 31] to calculate the second Renyi entropy \(S_{2}\) and extract the TEE \(\gamma\). To access the TEE \(\gamma\), we can approximate the ground state of the system using an RNN wave function ansatz, i.e., \(|\Psi_{\mathbf{\theta}}\rangle\approx|\Psi\rangle\), for different system sizes, followed by a finite-size scaling analysis of the second Renyi entropy. We can also make use of a TEE construction, e.g., the Kitaev-Preskill construction [10]. The Kitaev-Preskill construction prescribes dividing the system into four subregions \(A\), \(B\), \(C\), and \(D\) as illustrated in Fig. 2. Figure 1: (a) An illustration of the positive RNN wave function. Each RNN cell (blue squares) receives an input \(\mathbf{\sigma}_{n-1}\) and a hidden state \(\mathbf{h}_{n-1}\) and outputs a new hidden state \(\mathbf{h}_{n}\). The latter is fed to a Softmax layer (denoted by S, red circle) that outputs a conditional probability \(P_{i}\) (orange circle). The conditional probabilities are multiplied after taking a square root to obtain the wave function \(\Psi_{\mathbf{\theta}}(\mathbf{\sigma})\) (pink circle). (b) Illustration of the sampling scheme for an RNN wave function. After obtaining the probability vector \(\mathbf{y}_{i}\) from the Softmax layer (S) at step \(i\), we sample it to produce \(\mathbf{\sigma}_{i}\), which is fed again to the RNN with the hidden state \(\mathbf{h}_{i}\) to produce the next configuration \(\mathbf{\sigma}_{i+1}\). (c) A 2D RNN with periodic boundary conditions. A bulk RNN cell receives two hidden states \(\mathbf{h}_{i,j-1}\) and \(\mathbf{h}_{i-1,j}\), as well as two input vectors \(\mathbf{\sigma}_{i,j-1}\) and \(\mathbf{\sigma}_{i-1,j}\) (not shown), illustrated by the black solid arrows. To handle periodic boundary conditions, RNN cells at the boundary receive an additional \(\mathbf{h}_{i,\text{mod}(j+1,L_{y})}\) and \(\mathbf{h}_{\text{mod}(i+1,L_{x}),j}\), as well as two input vectors \(\mathbf{\sigma}_{i,\text{mod}(j+1,L_{y})}\) and \(\mathbf{\sigma}_{\text{mod}(i+1,L_{x}),j}\) (not shown), illustrated by green solid arrows. The sampling path is illustrated with red dashed arrows. The initial memory state \(\mathbf{h}_{0}\) of the 2D RNN and the initial inputs \(\mathbf{\sigma}_{0}\) (not shown) are taken as null vectors. Figure 2: A sketch of the parts \(A\), \(B\) and \(C\) that we use for the Kitaev-Preskill construction to compute the TEE in a system of interest. The TEE is then obtained by computing \[\gamma =-S_{2}(A)-S_{2}(B)-S_{2}(C)+S_{2}(AB)\] \[\quad+S_{2}(AC)+S_{2}(BC)-S_{2}(ABC),\] where \(S_{2}(A)\) is the second Renyi entropy of the subsystem \(A\), and \(AB\) is the union of \(A\) and \(B\), and similarly for the other terms. Finite-size effects on \(\gamma\) can be alleviated by increasing the size of the subregions \(A,B\) and \(C\) [10; 67]. Finally, we highlight the ability of the RNN wave function to study systems with fully periodic boundary conditions as a strategy to mitigate boundary effects, as opposed to cylinders used in DMRG [68; 69], which may potentially introduce edge effects that can affect the values of the TEE [70]. ## III Results ### The toric code We now focus our attention on the toric code Hamiltonian, which is the simplest model that hosts a \(\mathbb{Z}_{2}\) topological order [6; 61] and has a non-zero TEE equal to \(\gamma=\ln(2)\). The Hamiltonian is defined in terms of spin-1/2 degrees of freedom located on the edges of a square lattice (see Fig. 3(a)) and is given by \[\hat{H}=-\sum_{p}\prod_{i\in p}\hat{\sigma}_{i}^{z}-\sum_{v}\prod_{i\in v}\hat{\sigma}_{i}^{x},\] where \(\hat{\sigma}_{i}^{x,z}\) are Pauli matrices. Additionally, the first summation runs over the plaquettes and the second summation runs over the vertices of the lattice [61]. Note that the lattice in Fig. 3(a) can be seen as a square lattice with a unit cell containing two spins. In our simulations, we use an \(L\times L\times 2\) array of spins, where \(L\) is the number of plaquettes on each side of the underlying square lattice. It is possible to study the toric code with a 2D RNN defined on a primitive square lattice by merging the two spin degrees of freedom of the unit cell of the toric code into a single "patch", followed by an enlargement of the local Hilbert space dimension in the RNN from 2 to 4. This idea is illustrated in Fig.
3(a) and is similar in spirit to how the local Hilbert space is enlarged in DMRG to study quasi-1D systems [71]. We provide additional details about the mapping in App. A. To extract the TEE from our ansatz, we variationally optimize the 2D RNN wave function targeting the ground state of this model for multiple system sizes on a square lattice with periodic boundary conditions. After the optimization, we compute the TEE using system-size extrapolation and using the Kitaev-Preskill scheme provided in Sec. II.2. More details about the regions chosen for this construction are provided in App. D. To avoid local minima during the variational optimization, we perform an initial annealing phase as described in Sec. II.1 (see additional details in App. C). The results shown in Fig. 4(a) suggest that our 2D RNN wave function can describe states with an area-law scaling in 2D. Linearized versions of the RNN wave function have recently been shown to display an entanglement area law [72]. For \(L=10\) (not included in the extrapolations in Fig. 4(a)), it is challenging to evaluate \(S_{2}\) accurately, as the expectation value of the swap operator is proportional to \(\exp\left(-S_{2}\right)\), which becomes very small and is hard to resolve accurately via sampling the RNN wave function. The improved ratio trick is an interesting alternative for enhancing the accuracy of our estimates [73; 65]. The use of conditional sampling is another possibility for enhancing the accuracy of our measurements [66]. The extrapolation confirms the existence of a non-zero TEE whose value is close to \(\gamma^{\prime}=\ln(2)\) within error bars. Note that the sub-region we use to compute the TEE is half of the torus, namely a cylinder with two disconnected boundaries [74]. As shown in Ref. [63], the use of this geometry means that the expected TEE becomes state-dependent and is given by \[\gamma^{\prime}=2\gamma+\ln\left(\sum_{i}\frac{p_{i}^{2}}{d_{i}^{2}}\right) \tag{6}\] for the second Renyi entropy. Here \(d_{i}\geq 1\) is the quantum dimension of the \(i\)-th quasi-particle. For the toric code, we have abelian anyons with \(d_{i}=1\) for \(i=1,2,3,4\). Additionally, \(p_{i}=|\alpha_{i}|^{2}\) is the squared overlap of the computed ground state \(|\Psi\rangle\) with the \(i\)-th minimally entangled state (MES) \(|\Xi_{i}\rangle\), where \[|\Psi\rangle=\sum_{i}\alpha_{i}\left|\Xi_{i}\right\rangle.\] The observations above and the numerical result \(\gamma_{\text{RNN}}\approx\ln(2)\) suggest that the RNN wave functions optimized via gradient descent and annealing find a superposition of MES, as opposed to DMRG, which preferentially collapses to a single MES for relatively low bond dimensions. Figure 3: Mapping of the 2D toric code lattice and the kagome lattice to a square lattice that can be handled by a 2D RNN wave function. For the toric code lattice in panel (a), every two sites inside the dashed green ellipses are merged. For the kagome lattice in panel (b), every three sites in a unit cell enclosed by the dashed green circles are combined.
We note that the exact autoregressive sampling procedure plays a key role in the ability of our RNN ansatz to sample a superposition of different topological sectors when this superposition is encoded in our ansatz. For wave functions representing the ground state of the toric code used in combination with Markov-chain Monte Carlo methods, the probability of sampling different topological sectors of the state is exponentially suppressed even if the exact wave function ansatz encodes different topological sectors. This observation can be illustrated using an exact convolutional neural network construction of the toric code ground state which contains an equal superposition of different topological sectors [17]. Although in principle such representation contains all topological sectors, its form is not amenable to exact sampling and uses Markov chains so that upon sampling with local moves the system chooses a fixed topological sector. To further verify that our 2D RNN wave function can extract the correct TEE of the 2D toric code, we compute the TEE using the Preskill-Kitaev construction, that has contractible surfaces, and for which the TEE does not depend on the topological sector superposition [63; 13] (see App. D for details about the construction). The results reported in Fig. 4(b) demonstrate an excellent agreement between the TEE extracted by our RNN and the expected theoretical value for the toric code. ### Bose-Hubbard model on kagome lattice We now turn our attention to a hard-core Bose-Hubbard model on the Kagome lattice, which has been shown to host topological order [50; 12; 14]. The Hamiltonian of this model is given by \[\hat{H}=-t\sum_{\langle i,j\rangle}\left(b_{i}^{\dagger}b_{j}+b_{i}b_{j}^{ \dagger}\right)+V\sum_{\hexagon{O}}n_{\hexagon{O}}^{2}, \tag{7}\] where \(b_{i}\) (\(b_{i}^{\dagger}\)) is the annihilation (creation) operator. Furthermore, \(t\) is the kinetic strength, \(V\) is a tunable interaction strength and \(n_{\hexagon{O}}=\sum_{i\in\hexagon{O}}(n_{i}-1/2)\). The first term corresponds to a kinetic term that favors hopping between nearest neighbors, whereas the second term promotes an occupation of three hard-core bosons in each hexagon of the kagome lattice. In our setup, we choose \(V\) in units of the kinetic term strength \(t\). The atom configurations of this model correspond to an \(L\times L\times 3\) array of binary degrees of freedom where \(L\) is the size of each side of the kagome lattice. Following an analogous approach to the toric code, we combine three sites of the unit cell of the kagome lattice as input to the 2D RNN cell, as illustrated in Fig. 3(b). This allows us to map our kagome lattice with a local Hilbert space of 2 to a square lattice with an enlarged Hilbert space of size \(2^{3}=8\). The model is known to host a \(Z_{2}\) spin-liquid phase for \(V\gtrsim 7\)[12; 14; 75]. To confirm this finding, we estimate \(\gamma\) for the system sizes \(6\times 6\times 3\) and \(8\times 8\times 3\). We use the Kitaev-Preskill construction [10]. The details of the construction of the regions \(A,B\) and \(C\) are provided in App. D. As the Hamiltonian in Eq. 7 has a \(U(1)\) symmetry associated with the conservation of bosons in the system, we impose this symmetry on our RNN wave function [31]. We also supplement the VMC optimization with annealing to overcome local minima as previously done for the 2D toric code (see App. B). 
For the system size \(8\times 8\times 3\), the RNN ansatz parameters were initialized using the optimized parameters of the system of size \(6\times 6\times 3\) (see details about the hyperparameters in App. C). This pre-training technique has been motivated in Refs. [32; 46; 55]. The results are provided in Fig. 5. The computed TEEs for \(L=6,8\) show a saturation of \(\gamma_{\text{RNN}}\) for large values of the interaction strength \(V\). We observe that the saturation values of \(\gamma_{\text{RNN}}\) are in good agreement with the expected TEE \(\gamma=\ln(2)\) of a \(\mathbb{Z}_{2}\) spin-liquid [12]. Additionally, the negative values of \(\gamma_{\text{RNN}}\) observed for \(V\leq 6\) in the superfluid phase [12] may be related to the presence of Goldstone modes that manifest themselves as corrections to the area law in the entanglement entropy and can be seen as a negative contribution to the TEE [76]. We note that QMC methods are capable of obtaining a value consistent with the exact TEE for this model at \(V=8\) for very large system sizes [14] using finite-size extrapolation. This observation suggests that our RNN ansatz is still limited by finite-size effects at \(V=8\) (see Fig. 5), for which the TEE is not yet saturated to \(\ln 2\). Other sources of error in our calculation may be due to inaccuracies in the variational calculations and statistical errors due to the sampling. However, we note that our variational calculation is performed at zero temperature, which makes our calculations insensitive to temperature effects, as opposed to QMC [12]. Figure 4: Entanglement properties of the 2D toric code. (a) Second Renyi entropy scaling computed using our RNN wave function on the 2D toric code for different lengths \(L\), where the total system size is given as \(L\times L\times 2\). (b) TEE computed with the Kitaev-Preskill construction (see App. D). The values found by the RNN are very close to \(\ln(2)\). Error bars correspond to one standard deviation and are smaller than the symbol size. ## IV Conclusions and Outlooks We have demonstrated a successful application of neural network wave functions to the task of detecting topological order in quantum systems. In particular, we use RNNs borrowed from natural language processing as ansatz wave functions. RNNs enjoy the autoregressive property, which allows us to sample uncorrelated configurations. They are also capable of estimating second Renyi entropies using the swap trick [31], with which we computed TEEs using finite-size scaling and Kitaev-Preskill constructions [10]. Furthermore, the structural flexibility of the RNN offers the possibility to handle a wide variety of geometries, including periodic boundary conditions in any spatial dimension, which alleviates boundary effects on the TEE. We have empirically demonstrated that 2D RNN wave functions support the 2D area law and can find a non-zero TEE for the toric code and for the hard-core Bose-Hubbard model on the kagome lattice. We also find that RNNs favor coherent superpositions of minimally-entangled states over minimally-entangled states themselves. The success of our numerical experiments hinges on the combination of the exact sampling strategy used to compute observables, the structural properties of the RNN wave function, and the use of annealing as a strategy to overcome local minima during the optimization procedure.
The accuracy of our findings can be improved through the use of more advanced versions of RNNs and autoregressive models in general [72; 55], or even a hybrid approach that combines QMC and RNNs [77; 78]. Similarly, the incorporation of lattice symmetries provides a strategy to enhance the accuracy of our calculations [31; 55; 79]. Although our results match the anticipated behaviour of the toric code and Bose-Hubbard spin liquid models, we highlight that the RNN wave function may be susceptible to spurious contributions to the TEE [64], and we have not addressed this issue in our work. Finally, our methods can be applied to study other systems displaying topological order, such as Rydberg atom arrays [3; 80; 81], either through variational methods or in combination with experimental data. To experimentally study topological order, it is possible to use quantum state tomography with RNNs [22]. This involves using experimental data to reconstruct the state seen in the experiment, followed by an estimation of the TEE using the methods outlined in our work. Overall, our findings suggest that RNN wave function ansatzes have promising potential for discovering new phases of matter with topological order. ###### Acknowledgements. We would like to thank Giacomo Torlai, Jeremy Cote, Schuyler Moss, Roeland Wiersema, Ejaaz Merali, Isaac De Vlugt and Arun Paramekanti for helpful discussions. Our RNN implementation is based on Tensorflow [82] and NumPy [83]. Computer simulations were made possible thanks to the Vector Institute computing cluster and Compute Canada. M.H. acknowledges support from Mitacs Accelerate. J.C. acknowledges support from Natural Sciences and Engineering Research Council of Canada (NSERC), the Shared Hierarchical Academic Research Computing Network (SHARCNET), Compute Canada, and the Canadian Institute for Advanced Research (CIFAR) AI chair program. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Economic Development, Job Creation and Trade. Figure 5: A plot of the topological entanglement entropy against the interaction strength \(V\) (in units of \(t\)) of the hard-core Bose-Hubbard model on the kagome lattice for system sizes \(N=L\times L\times 3\), where \(L=6,8\). The calculations were performed using the Kitaev-Preskill construction (see App. D). The continuous black horizontal line corresponds to a zero TEE, and the dashed blue horizontal line to a \(\ln(2)\) TEE. ## Appendix A 2D periodic gated RNNs In this appendix, we describe our implementation of a 2D gated RNN wave function for periodic systems, which can be used to approximate the ground states of the Hamiltonians considered in this paper.
If we define \[\mathbf{h}^{\prime}_{i,j} =[\mathbf{h}_{i-1,j};\mathbf{h}_{i,j-1};\mathbf{h}_{\mathrm{mod}(i+1,L_{x}),j};\mathbf{h}_{i,\mathrm{mod}(j+1,L_{y})}],\] \[\mathbf{\sigma}^{\prime}_{i,j} =[\mathbf{\sigma}_{i-1,j};\mathbf{\sigma}_{i,j-1};\mathbf{\sigma}_{\mathrm{mod}(i+1,L_{x}),j};\mathbf{\sigma}_{i,\mathrm{mod}(j+1,L_{y})}],\] then our gated 2D RNN wave function ansatz is based on the following recursion relations: \[\tilde{\mathbf{h}}_{i,j} =\tanh\Bigl{(}W[\mathbf{\sigma}^{\prime}_{i,j};\mathbf{h}^{\prime}_{i,j}]+\mathbf{b}\Bigr{)},\] \[\mathbf{u}_{i,j} =\mathrm{sigmoid}\Bigl{(}W_{g}[\mathbf{\sigma}^{\prime}_{i,j};\mathbf{h}^{\prime}_{i,j}]+\mathbf{b}_{g}\Bigr{)},\] \[\mathbf{h}_{i,j} =\mathbf{u}_{i,j}\odot\tilde{\mathbf{h}}_{i,j}+(1-\mathbf{u}_{i,j})\odot(U\mathbf{h}^{\prime}_{i,j}).\] Here '\(\odot\)' is the Hadamard (element-wise) product. A hidden state \(\mathbf{h}_{i,j}\) can be obtained by combining a candidate state \(\tilde{\mathbf{h}}_{i,j}\) and the neighbouring hidden states \(\mathbf{h}_{i-1,j},\mathbf{h}_{i,j-1},\mathbf{h}_{\mathrm{mod}(i+1,L_{x}),j},\mathbf{h}_{i,\mathrm{mod}(j+1,L_{y})}\). The update gate \(\mathbf{u}_{i,j}\) determines how much of the candidate hidden state \(\tilde{\mathbf{h}}_{i,j}\) will be taken into account and how much of the neighboring states will be considered. With this combination, it is possible to circumvent some limitations of the vanishing gradient problem [84; 85]. The weight matrices \(W,W_{g},U\) and the biases \(\mathbf{b},\mathbf{b}_{g}\) are variational parameters of our RNN ansatz. The size of the hidden state \(\mathbf{h}_{i,j}\) is a hyperparameter called the number of memory units or the hidden dimension, and it is denoted as \(d_{h}\). Finally, to motivate the use of the gating mechanism, we highlight that a gated 2D RNN was found to be better than a non-gated 2D RNN for the task of finding the ground state of a 2D Heisenberg model [55]. Since we use an enlarged Hilbert space in our 2D RNN, we take \(\mathbf{\sigma}_{i,j}\) to be the concatenation of the one-hot encodings of the binary physical variables in each unit cell. In this case, if \(m\) is the number of physical variables per unit cell, then \(\mathbf{\sigma}_{i,j}\) has size \(2m\). We also note that the size of the Softmax layer is taken as \(2^{m}\), so that we can autoregressively sample each unit cell's variables at once. ## Appendix B Variational Monte Carlo and variance reduction To optimize the energy expectation value of our RNN wave function \(\ket{\Psi_{\mathbf{\theta}}}\), we use the Variational Monte Carlo (VMC) scheme, which consists of using importance sampling to estimate the expectation value of the energy \(E_{\mathbf{\theta}}=\bra{\Psi_{\mathbf{\theta}}}\hat{H}\ket{\Psi_{\mathbf{\theta}}}\) as follows [56; 31]: \[E_{\mathbf{\theta}}=\frac{1}{M}\sum_{i=1}^{M}E_{\mathrm{loc}}(\mathbf{\sigma^{(i)}}),\] where the local energies \(E_{\mathrm{loc}}\) are defined as \[E_{\mathrm{loc}}(\mathbf{\sigma})=\sum_{\mathbf{\sigma}^{\prime}}H_{\mathbf{\sigma}\mathbf{\sigma}^{\prime}}\frac{\Psi_{\mathbf{\theta}}(\mathbf{\sigma}^{\prime})}{\Psi_{\mathbf{\theta}}(\mathbf{\sigma})}.\] Here the configurations \(\{\mathbf{\sigma^{(i)}}\}_{i=1}^{M}\) are sampled from our ansatz using autoregressive sampling. The choice of \(M\) is a hyperparameter that can be tuned. Furthermore, \(E_{\mathrm{loc}}(\mathbf{\sigma})\) can be efficiently computed for local Hamiltonians.
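A minimal sketch of this energy estimator is given below; `sample_configuration` and `connected_entries` (yielding the non-zero matrix elements \(H_{\mathbf{\sigma}\mathbf{\sigma}^{\prime}}\)) are hypothetical helpers standing in for the ansatz sampler and the Hamiltonian of interest.

```python
# Sketch of the VMC energy estimate from M autoregressive samples; illustrative only.
import numpy as np

def local_energy(sigma, psi, connected_entries):
    """E_loc(sigma) = sum over sigma' of H[sigma, sigma'] * Psi(sigma') / Psi(sigma)."""
    return sum(h_ss * psi(sp) / psi(sigma) for sp, h_ss in connected_entries(sigma))

def vmc_energy(M, sample_configuration, psi, connected_entries):
    e_loc = np.array([local_energy(sample_configuration(), psi, connected_entries)
                      for _ in range(M)])
    return e_loc.mean(), e_loc    # the mean estimates E_theta
```

The array of local energies is reused in the gradient estimator below, where subtracting their mean implements the baseline.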
Furthermore, the gradients can be estimated as [31] \[\partial_{\mathbf{\theta}}E_{\mathbf{\theta}}=\frac{2}{M}\mathfrak{Re}\left(\sum_{i=1}^{M}\partial_{\mathbf{\theta}}\log\left(\Psi_{\mathbf{\theta}}^{*}(\mathbf{\sigma^{(i)}})\right)\left(E_{\mathrm{loc}}(\mathbf{\sigma^{(i)}})-E_{\mathbf{\theta}}\right)\right).\] For a stoquastic Hamiltonian \(\hat{H}\) [51], we can use a positive RNN wave function, where taking the real part \(\mathfrak{Re}\) is not necessary. Importantly, subtracting the variational energy \(E_{\mathbf{\theta}}\) is helpful to achieve convergence, as it reduces the variance of the gradients near convergence without biasing their expectation value, as shown in Ref. [31]. The subtracted term is referred to as a baseline, which is typically used for the same purpose in the context of reinforcement learning [86]. To demonstrate the noise reduction more rigorously than the intuition provided in Ref. [31], let us focus on the variance of the gradient with respect to a parameter \(\theta\) in the set of the variational parameters \(\mathbf{\theta}\), after subtracting the baseline. Here we focus on the case of a positive ansatz wave function \(\Psi_{\mathbf{\theta}}(\mathbf{\sigma})=\sqrt{P_{\mathbf{\theta}}(\mathbf{\sigma})}\) that we used in our study. First of all, we define: \[O_{\theta}(\mathbf{\sigma})\equiv\partial_{\theta}\log\left(\Psi_{\mathbf{\theta}}^{*}(\mathbf{\sigma})\right)=\frac{1}{2}\partial_{\theta}\log(P_{\mathbf{\theta}}(\mathbf{\sigma})).\] Thus, the gradient with a baseline can be written as: \[\partial_{\theta}E_{\mathbf{\theta}}=2\langle O_{\theta}(\mathbf{\sigma})\overline{E}_{\mathrm{loc}}(\mathbf{\sigma})\rangle,\] where \(\overline{E}_{\mathrm{loc}}(\mathbf{\sigma})\equiv E_{\mathrm{loc}}(\mathbf{\sigma})-E_{\mathbf{\theta}}\) and \(\langle\cdot\rangle\) denotes an expectation value over the Born distribution \(|\Psi_{\mathbf{\theta}}(\mathbf{\sigma})|^{2}\). To estimate the gradients' noise, we look at the variance of the gradient estimator, which can be decomposed as follows: \[\mathrm{Var}(O_{\theta}\overline{E}_{\mathrm{loc}})=\mathrm{Var}(O_{\theta}E_{\mathrm{loc}})-2\mathrm{Cov}(O_{\theta}E_{\mathrm{loc}},O_{\theta}E_{\mathbf{\theta}})+E_{\mathbf{\theta}}^{2}\mathrm{Var}(O_{\theta}).\] Thus the variance reduction \(R\), after subtracting the baseline, is given as: \[R\equiv\mathrm{Var}(O_{\theta}\overline{E}_{\mathrm{loc}})-\mathrm{Var}(O_{\theta}E_{\mathrm{loc}})=-2E_{\mathbf{\theta}}\mathrm{Cov}(O_{\theta}E_{\mathrm{loc}},O_{\theta})+E_{\mathbf{\theta}}^{2}\mathrm{Var}(O_{\theta}).\] Since the gradients' magnitude tends to near-zero values close to convergence, statistical errors are more likely to make the VMC optimization challenging. We focus on this regime for this derivation to show the importance of the baseline in reducing noise. Thus, we assume that \(E_{\text{loc}}(\mathbf{\sigma})=E_{\mathbf{\theta}}+\xi(\mathbf{\sigma})\), where the supremum of the local energy fluctuations is much smaller than the variational energy, i.e., \((\sup_{\mathbf{\sigma}}|\xi(\mathbf{\sigma})|)\ll E_{\mathbf{\theta}}\). From this assumption, we can deduce that: \[R=-2E_{\mathbf{\theta}}^{2}\text{Cov}(O_{\theta},O_{\theta})-2E_{\mathbf{\theta}}\text{Cov}(O_{\theta}\xi,O_{\theta})+E_{\mathbf{\theta}}^{2}\text{Var}(O_{\theta})=-E_{\mathbf{\theta}}^{2}\text{Var}(O_{\theta})-2E_{\mathbf{\theta}}\text{Cov}(O_{\theta}\xi,O_{\theta}). \tag{13}\] The second term can be decomposed as follows: \[\text{Cov}(O_{\theta}\xi,O_{\theta})=\langle O_{\theta}^{2}\xi\rangle-\langle O_{\theta}\xi\rangle\langle O_{\theta}\rangle. \tag{14}\] Since \(\langle O_{\theta}\rangle=\frac{1}{2}\langle\partial_{\theta}\log(P_{\mathbf{\theta}})\rangle=0\) [86; 87; 31], we can bound the covariance term from above as: \[\text{Cov}(O_{\theta}\xi,O_{\theta})\leq\left(\sup_{\mathbf{\sigma}}|\xi(\mathbf{\sigma})|\right)\langle O_{\theta}^{2}\rangle=\left(\sup_{\mathbf{\sigma}}|\xi(\mathbf{\sigma})|\right)\text{Var}(O_{\theta})\ll E_{\mathbf{\theta}}\text{Var}(O_{\theta}).\] Thus, we can conclude that the variance reduction \(R\) in Eq. (13) is negative. This observation highlights the importance of the baseline in reducing the statistical noise of the energy gradients near convergence. For a complex ansatz wave function \(\Psi_{\mathbf{\theta}}(\mathbf{\sigma})=\sqrt{P_{\mathbf{\theta}}(\mathbf{\sigma})}\exp(\text{i}\phi_{\mathbf{\theta}}(\mathbf{\sigma}))\), the expectation value \(\langle O_{\theta}\rangle=\frac{1}{2}\langle\partial_{\theta}\log(P_{\mathbf{\theta}})\rangle-\text{i}\langle\partial_{\theta}\phi_{\mathbf{\theta}}\rangle=-\text{i}\langle\partial_{\theta}\phi_{\mathbf{\theta}}\rangle\) is no longer equal to zero in general. We leave the investigation of this case for future studies. Similarly to the stochastic estimation of the variational energy using our ansatz, we can do the same for the estimation of the variational pseudo-free energy \(F_{\mathbf{\theta}}\) in Eq. (4). More details can be found in the supplementary information of Ref. [57]. Finally, we note that the gradient steps in our numerical simulations are performed using the Adam optimizer [88]. ## Appendix C Hyperparameters For all models studied in this paper, we note that for each annealing step, we perform \(N_{\text{train}}=5\) gradient steps. Concerning the learning rate \(\eta\), we choose \(\eta=10^{-3}\) during the warmup phase and the annealing phase, and we switch to a learning rate \(\eta=10^{-4}\) in the convergence phase. We finally note that we set the number of convergence steps as \(N_{\text{convergence}}=10000\). In Tab. 1, we provide further details about the hyperparameters we choose in our study for the different models. The meaning of each hyperparameter related to annealing is discussed in detail in Refs. [55; 57]. Additionally, we use \(M_{e}=2\times 10^{6}\) samples for the estimation of the entanglement entropy along with their error bars for the toric code. For the Bose-Hubbard model we use \(M_{e}=10^{7}\) samples to reduce the error bars on the TEE in Fig. 5. To estimate the TEE uncertainty from the Kitaev-Preskill construction, we use the standard deviation expression of the sum of independent random variables [89]. Finally, we note that to avoid fine-tuning the learning rate for each value of \(V\) (between 4 and 13) in the Bose-Hubbard model, we target the normalized Hamiltonian \[\hat{H}=-\frac{1}{V}\sum_{\langle i,j\rangle}\left(b_{i}^{\dagger}b_{j}+b_{i}b_{j}^{\dagger}\right)+\sum_{\hexagon}n_{\hexagon}^{2} \tag{15}\] in our numerical experiments. ## Appendix D Kitaev-Preskill constructions In this appendix, we provide details about the subregions used to calculate the TEE using the Kitaev-Preskill construction (see Sec. II.2).
For the 2D toric code, we use three spins for each subregion, and for the Bose-Hubbard model on the kagome lattice we increase the subregion sizes to mitigate finite-size effects [10], as opposed to the 2D toric code, which does not suffer from this limitation [11]. The illustrations of these subregions are provided in Fig. 6. ## Appendix E RNNs and MES The results in Sec. III.1 indicate that the RNN wave function encodes a superposition of minimally entangled states (MES). Here we further investigate this statement by analyzing the expectation values of the average Wilson loop operators and the average 't Hooft loop operators. We define the average Wilson loop operators as \[\hat{W}_{d}^{z}=\frac{1}{L}\left(\sum_{\mathcal{C}_{d}}\prod_{\sigma_{j}\in\mathcal{C}_{d}}\hat{\sigma}_{j}^{z}\right). \tag{16}\] Here \(d=h,v\) and \(\mathcal{C}_{h},\mathcal{C}_{v}\) are closed non-contractible loops illustrated in Fig. 7. A set of degenerate ground states of the toric code are eigenstates of the operators \(\hat{W}_{h}^{z}\), \(\hat{W}_{v}^{z}\) with eigenvalues \(\pm 1\). Additionally, the two eigenvalues uniquely determine the topological sector of the ground state. In this case, the topological ground states can be labeled as \(|\xi_{ab}\rangle\) with \(a,b=0,1\) [63; 90]. We can also define the average 't Hooft loop operators on non-contractible closed loops [90], such that \[\hat{W}_{d}^{x}=\frac{1}{L}\left(\sum_{\tilde{\mathcal{C}}_{d}}\prod_{\sigma_{j}\in\tilde{\mathcal{C}}_{d}}\hat{\sigma}_{j}^{x}\right), \tag{17}\] where \(d=h,v\) and \(\tilde{\mathcal{C}}_{h}\), \(\tilde{\mathcal{C}}_{v}\) correspond to horizontal and vertical loops as illustrated in Fig. 7. These operators satisfy the anti-commutation relations \(\{\hat{W}_{h}^{z},\hat{W}_{v}^{x}\}=0\) and \(\{\hat{W}_{v}^{z},\hat{W}_{h}^{x}\}=0\). From the optimized RNN wave function (\(L=8\)), we find \(\langle\hat{W}_{h}^{z}\rangle=0.0009(2)\) and \(\langle\hat{W}_{v}^{z}\rangle=-0.0039(2)\), which are consistent with vanishing expectation values. We also obtain \(\langle\hat{W}_{h}^{x}\rangle=0.999847(5)\) and \(\langle\hat{W}_{v}^{x}\rangle=0.999785(5)\) for the 't Hooft loop operators, which are consistent with \(+1\) expectation values. These results are in part due to the use of a positive RNN wave function, which forces the expectation values \(\langle\hat{W}_{h}^{x}\rangle\) and \(\langle\hat{W}_{v}^{x}\rangle\) to be strictly positive and rules out the possibility of obtaining, e.g., \(\langle\hat{W}_{h}^{x}\rangle=-1\). By expanding the optimized RNN wave function in the \(\ket{\xi_{ab}}\) basis, where \(a,b\) are binary variables, we obtain \[\ket{\Psi_{\text{RNN}}}\approx\sum_{ab}c_{ab}\ket{\xi_{ab}}.\] Here \(\{\ket{\xi_{ab}}\}\) correspond to the four topological sectors and they are mutually orthogonal. The \(c_{ab}\) are real numbers since we use a positive RNN wave function.
Additionally, the basis states \(\ket{\xi_{ab}}\) satisfy \[\hat{W}_{h}^{z}\ket{\xi_{ab}} =(-1)^{a}\ket{\xi_{ab}},\] \[\hat{W}_{v}^{z}\ket{\xi_{ab}} =(-1)^{b}\ket{\xi_{ab}}.\] \begin{table} \begin{tabular}{|c|c|c|} \hline Figures & Parameter & Value \\ \hline \multirow{3}{*}{2D toric code} & Number of memory units & \(d_{h}=60\) \\ & Number of samples & \(M=100\) \\ & Initial pseudo-temperature & \(T_{0}=2\) \\ & Number of annealing steps & \(N_{\text{annealing}}=4000\) \\ \hline \multirow{3}{*}{Bose-Hubbard model (L = 6)} & Number of memory units & \(d_{h}=100\) \\ & Number of samples & \(M=500\) \\ \cline{1-1} & Pseudo-temperature & \(T_{0}=0\) \\ \cline{1-1} & Number of steps & \(10000\) \\ \hline \end{tabular} \end{table} Table 1: A summary of the hyperparameters used to obtain the results reported in this paper. Note that the number of samples \(M\) corresponds to the batch size used during the training phase. Additionally, the \(L=8\) RNN model is initialized with the \(L=6\) optimized RNN for the Bose-Hubbard model. Figure 6: An illustration of the sub-regions \(A,B,C\) chosen for the Kitaev-Preskill constructions. (a) For the 2D toric code, each sub-region has 3 spins. We also do the same for the lattice sizes \(L=6,8,10\) in Fig. 4(b). For the hard-core Bose-Hubbard model on the kagome lattice, we target two different system sizes. Panel (b) shows the construction for \(L=6\) and panel (c) provides the construction for \(L=8\). In panels (b) and (c), each site corresponds to a block of three bosons. Figure 7: An illustration of the vertical and the horizontal loops used to compute the Wilson loop operators (see Eq. (16)) and the 't Hooft loop operators (see Eq. (17)). From the anti-commutation relations, we can show that: \[\hat{W}_{h}^{x}\ket{\xi_{ab}} =\ket{\xi_{a\bar{b}}},\] \[\hat{W}_{v}^{x}\ket{\xi_{ab}} =\ket{\xi_{\bar{a}b}},\] where \(\bar{a}=1-a\) and \(\bar{b}=1-b\). By plugging the last two equations into the \(\hat{W}_{h}^{x}\), \(\hat{W}_{v}^{x}\) expectation values of our optimized RNN wave function, we obtain: \[2c_{00}c_{01}+2c_{10}c_{11} \approx 1,\] \[2c_{00}c_{10}+2c_{01}c_{11} \approx 1.\] From the normalization constraint \(1=\sum_{ab}c_{ab}^{2}\), we deduce that: \[(c_{00}-c_{01})^{2}+(c_{10}-c_{11})^{2} \approx 0,\] \[(c_{00}-c_{10})^{2}+(c_{01}-c_{11})^{2} \approx 0.\] As a consequence, we conclude that \(c_{00}\approx c_{01}\approx c_{10}\approx c_{11}\), which means that the optimized RNN wave function is approximately a uniform superposition of the four topological ground states \(\ket{\xi_{ab}}\). This observation is also consistent with the vanishing expectation values of the operators \(\hat{W}_{h}^{z}\), \(\hat{W}_{v}^{z}\). Furthermore, from Ref. [63] the MES of the toric code are given as follows: \[\ket{\Xi_{1}} =\frac{1}{\sqrt{2}}\left(\ket{\xi_{00}}+\ket{\xi_{01}}\right),\] \[\ket{\Xi_{2}} =\frac{1}{\sqrt{2}}\left(\ket{\xi_{00}}-\ket{\xi_{01}}\right),\] \[\ket{\Xi_{3}} =\frac{1}{\sqrt{2}}\left(\ket{\xi_{10}}+\ket{\xi_{11}}\right),\] \[\ket{\Xi_{4}} =\frac{1}{\sqrt{2}}\left(\ket{\xi_{10}}-\ket{\xi_{11}}\right).\] Thus, our RNN wave function can be written approximately as a uniform superposition of the MES \(\ket{\Xi_{1}}\) and \(\ket{\Xi_{3}}\), i.e., \[\ket{\Psi_{\text{RNN}}}\approx\frac{1}{\sqrt{2}}\left(\ket{\Xi_{1}}+\ket{\Xi_{3}}\right).\]
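The sector algebra above is simple enough to verify numerically. The following illustrative check represents the four topological sectors as basis vectors and confirms that a uniform superposition reproduces the measured loop expectation values; the matrices are a toy construction, not the RNN simulation itself.

```python
# Toy check of the sector algebra in the basis (|xi_00>, |xi_01>, |xi_10>, |xi_11>).
import numpy as np

Wz_h = np.diag([1, 1, -1, -1])          # eigenvalues (-1)^a
Wz_v = np.diag([1, -1, 1, -1])          # eigenvalues (-1)^b
Wx_h = np.array([[0, 1, 0, 0], [1, 0, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]])   # flips b
Wx_v = np.array([[0, 0, 1, 0], [0, 0, 0, 1],
                 [1, 0, 0, 0], [0, 1, 0, 0]])   # flips a

assert np.allclose(Wz_h @ Wx_v + Wx_v @ Wz_h, 0)   # anti-commutation relations
assert np.allclose(Wz_v @ Wx_h + Wx_h @ Wz_v, 0)

c = np.full(4, 0.5)                     # uniform superposition, c_ab = 1/2
for name, O in [("Wz_h", Wz_h), ("Wz_v", Wz_v), ("Wx_h", Wx_h), ("Wx_v", Wx_v)]:
    print(name, c @ O @ c)              # 0, 0, 1, 1 -- matching the RNN measurements
```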
2305.17063
Vecchia Gaussian Process Ensembles on Internal Representations of Deep Neural Networks
For regression tasks, standard Gaussian processes (GPs) provide natural uncertainty quantification, while deep neural networks (DNNs) excel at representation learning. We propose to synergistically combine these two approaches in a hybrid method consisting of an ensemble of GPs built on the output of hidden layers of a DNN. GP scalability is achieved via Vecchia approximations that exploit nearest-neighbor conditional independence. The resulting deep Vecchia ensemble not only imbues the DNN with uncertainty quantification but can also provide more accurate and robust predictions. We demonstrate the utility of our model on several datasets and carry out experiments to understand the inner workings of the proposed method.
Felix Jimenez, Matthias Katzfuss
2023-05-26T16:19:26Z
http://arxiv.org/abs/2305.17063v1
# Vecchia Gaussian Process Ensembles on Internal Representations of Deep Neural Networks ###### Abstract For regression tasks, standard Gaussian processes (GPs) provide natural uncertainty quantification, while deep neural networks (DNNs) excel at representation learning. We propose to synergistically combine these two approaches in a hybrid method consisting of an ensemble of GPs built on the output of hidden layers of a DNN. GP scalability is achieved via Vecchia approximations that exploit nearest-neighbor conditional independence. The resulting deep Vecchia ensemble not only imbues the DNN with uncertainty quantification but can also provide more accurate and robust predictions. We demonstrate the utility of our model on several datasets and carry out experiments to understand the inner workings of the proposed method. nearest neighbors; Vecchia approximation; Gaussian process ensemble ## 1 Introduction In recent years, deep neural networks (DNNs) have achieved remarkable success in various tasks such as image recognition, natural language processing, and speech recognition. However, despite their excellent performance, these models have certain limitations, such as their lack of uncertainty quantification (UQ). Much of UQ for DNNs is based on a Bayesian approach that models network weights as random variables [22] or has involved ensembles of networks [18]. Gaussian processes (GPs) provide natural UQ, but they lack the representation learning that makes DNNs successful. Standard GPs are known to scale poorly with large datasets, but GP approximations are plentiful and one such method is the Vecchia approximation [31, 16], which uses nearest-neighbor conditioning sets to exploit conditional independence among the data. In high dimensions, finding the appropriate conditioning sets is challenging, so a procedure that can discover the conditional independence can expand the applicability of the Vecchia approximation. While the literature on Bayesian neural networks and deep ensembles is rich, less has been done on using the outputs of internal layers of DNNs to quantify uncertainty for the neural network. In this paper we propose the deep Vecchia ensemble (DVE), which uses the outputs of internal layers in conjunction with the Vecchia approximation to provide UQ for DNNs. By using the outputs of internal layers of the DNN, we inject the representation learning of DNNs into GPs. As a result, we are able to use the DNN to discover better conditioning sets for the Vecchia approximation. A high-level summary of this approach is given in Figure 1, where the outputs of internal layers are given in the bottom right three panels and the different conditioning sets extracted from the neural network are shown in the panel on the left. We believe that the proposed methodology has the potential to improve the reliability and robustness of deep learning models in various applications while simultaneously improving Vecchia GPs. The remainder of the paper is organized as follows: We begin in Section 2 with a review of related concepts and define the intermediate representations upon which we later build. In Section 3, we contextualize our approach within existing work on uncertainty quantification for neural networks and GP approximation. The proposed methodology is detailed in Section 4. We then apply our model to pretrained neural networks in Section 5. This is followed by an experiment in Section 6 that explores how our model works. 
In Section 7, we discuss potential application areas for our method, the limitations of our approach, and future research directions that could improve upon our work. ## 2 Preliminaries In this section, we provide background information on GPs, including GP regression, as well as the Vecchia approximation to GPs. We also review GP ensembles, which combine multiple GPs to provide a distribution over the response variable. Finally, we define internal spaces as intermediate representations of the input data generated by the neural network. ### Modeling functions with Gaussian processes Suppose we have a dataset \(\mathcal{D}=\{\mathbf{X},\mathbf{y}\}=\{\mathbf{x}_{i},y_{i}\}_{i=1}^{n}\), where \(\mathbf{x}_{i}\in\mathcal{X}\), \(y_{i}=f(\mathbf{x}_{i})+\epsilon_{i}\), \(\epsilon_{i}\overset{iid}{\sim}\mathcal{N}(0,\sigma^{2})\), and \(f:\mathcal{X}\rightarrow\mathbb{R}\). Our goal is to make predictions at \(n_{p}\) test locations \(\mathbf{X}^{*}=\{\mathbf{x}_{j}^{*}\}_{j=1}^{n_{p}}\) in a probabilistic fashion. A common approach is to assign the latent function \(f\) a zero-mean Gaussian process prior with covariance function \(K_{\mathbf{\theta}}(\mathbf{x},\mathbf{x}^{\prime})\) parameterized by \(\mathbf{\theta}\). Letting \(\mathbf{y}^{*}=\{f(\mathbf{x}_{j}^{*})+\epsilon_{j}\}_{j=1}^{n_{p}}\), the joint distribution \(p(\mathbf{y}^{*},\mathbf{y})\) will be multivariate normal, which implies: \[p(\mathbf{y}^{*}|\mathbf{y})=\mathcal{N}(\mathbf{\mu}^{*},\mathbf{K}_{\mathbf{\theta}^{*}}+\sigma^{2}\mathbf{I}_{n_{p}}), \tag{1}\] where \(\mathbf{\mu}^{*}\) and \(\mathbf{K}_{\mathbf{\theta}^{*}}\) depend on \(\mathcal{D}\) and \(\mathbf{\theta}\). To select \(\mathbf{\theta}^{*}\), we maximize the marginal log-likelihood: \[p(\mathbf{y})=\int p(\mathbf{y}|\mathbf{f})p(\mathbf{f})d\mathbf{f}=\mathcal{N}(\mathbf{0},\mathbf{K}_{\mathbf{\theta}}(\mathbf{X},\mathbf{X})+\sigma^{2}\mathbf{I}_{n}). \tag{2}\] Both Equations (1) and (2) have time and space complexities of \(\mathcal{O}(n^{3})\) and \(\mathcal{O}(n^{2})\), respectively. ### Approximating Gaussian processes using Vecchia The joint distribution of a collection of random variables, \(\mathbf{y}=\{y_{1},y_{2},...,y_{n}\}\), can be decomposed as \(p(\mathbf{y})=\prod_{i=1}^{n}p(y_{i}|\mathbf{y}_{h(i)})\), \(h(i)=\{1,...,i-1\}\). Vecchia's approximation [31] conditions the \(i^{th}\) observation on a set indexed by \(g(i)\), with \(g(i)\subset h(i)\). The approximate joint distribution is then \[p(\mathbf{y})\approx\prod_{i=1}^{n}p(y_{i}|\mathbf{y}_{g(i)}). \tag{3}\] Figure 1: Nearest-neighbor conditioning sets for different input representations obtained via the layers of a DNN. Left: The red point has different nearest-neighbor conditioning sets based on the metric induced by the internal representations of the neural network. The brown, blue, and pink shaded areas denote the regions in input space that will be mapped to a hypersphere in the first, second, and third intermediate spaces, respectively. The conditioning sets derived from the different regions may overlap, as in the blue and brown region, or be disjoint from the others, as in the pink region. Right: The data used to train the network is mapped through each layer, and the output, or feature map, of each datum is stored. For a red test point, we assess uncertainty by asking: How many instances in the training data took similar paths through the network?
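The following minimal sketch evaluates the Vecchia log-likelihood in Equation (3) from precomputed conditioning sets; the kernel function `K` and the noise variance are assumptions of this illustration rather than a particular implementation from the paper.

```python
# Sketch of the Vecchia log-likelihood in Eq. (3); each term is a univariate
# Gaussian conditional given the (at most m) previously ordered neighbors.
import numpy as np

def vecchia_loglik(y, X, g, K, noise):
    """y: (n,) responses; X: (n, d) ordered inputs; g[i]: indices of g(i)."""
    ll = 0.0
    for i in range(len(y)):
        idx = np.asarray(g[i], dtype=int)
        prior_var = K(X[i:i+1], X[i:i+1])[0, 0] + noise
        if idx.size == 0:
            mu, var = 0.0, prior_var
        else:
            Kcc = K(X[idx], X[idx]) + noise * np.eye(idx.size)
            kic = K(X[i:i+1], X[idx])[0]
            mu = kic @ np.linalg.solve(Kcc, y[idx])
            var = prior_var - kic @ np.linalg.solve(Kcc, kic)
        ll += -0.5 * (np.log(2 * np.pi * var) + (y[i] - mu) ** 2 / var)
    return ll
```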
Letting \(\mathbf{\tilde{y}}=(\mathbf{y},\mathbf{y}^{*})\), where \(\mathbf{y}^{*}=\{y^{*}_{n+1},...,y^{*}_{n+n_{p}}\}\) are test points ordered after the observed data, the approximate conditional distribution for \(\mathbf{y}^{*}\) is: \[p(\mathbf{y}^{*}|\mathbf{y})\approx\prod_{j=n+1}^{n+n_{p}}p(y^{*}_{j}|\mathbf{\tilde{y}}_{g(j)}). \tag{4}\] For fixed \(g(i)\)'s, such that \(|g(i)|\leq m\) with \(m\ll n\), the approximation in (3) reduces the in-memory space complexity of Equation (2) to \(\mathcal{O}(n_{p}m)\). Using the approximation in Equation (4) reduces the time and space complexity of Equation (1) to \(\mathcal{O}(n_{p}m)\). An additional approximation utilizing mini-batches [14] of size \(n_{b}\) results in an in-memory space complexity of \(\mathcal{O}(n_{b}m^{2})\) for Equation (2) and a time complexity of \(\mathcal{O}(n_{b}m^{3})\). For the \(i^{th}\) data point, the \(m\) ordered nearest neighbors \(g(i)\) are typically determined based on a distance metric, such as Euclidean distance, between the input variables \(\mathbf{x}\). To compute these ordered nearest neighbors, it is assumed that the data has been sorted according to some ordering procedure, and for each \(i\), the \(m\) nearest neighbors that precede \(i\) in the ordering are selected. ### Ensembling Gaussian processes An ensemble of \(L\) GPs can be formed by partitioning the training data into disjoint subsets and fitting independent GPs to each subset of the training data. Assuming this independence structure, the hyperparameters, \(\mathbf{\theta}_{k}\), can be learned by maximizing the log-marginal likelihood, \(\sum_{k=1}^{L}\log p_{k}(\mathbf{y}^{(k)}|\mathbf{\theta}_{k})\). While in this work we allow each GP to have its own hyperparameters, product-of-expert GP ensembles [30; 3; 7; 26] typically share hyperparameters \(\mathbf{\theta}\) between the GPs. We are assuming \(\mathbf{\theta}_{k}\) contains all the hyperparameters in both the GP kernel and the likelihood. The prediction of \(y^{*}\) at a location \(\mathbf{x}^{*}\) can be formed by using a generalized product of experts (gPoE) [3], an extension of the product of experts (PoE) [13], which combines the predictions of \(L\) models by using a weighted product of the predictions. Letting \(p_{k}(y^{*})\) denote the \(k^{th}\) model's predictive distribution for \(y^{*}\), the combined distribution for \(y^{*}\) is given by \[p(y^{*})\propto\prod_{k=1}^{L}p_{k}^{\beta_{k}(\mathbf{x}^{*})}(y^{*}). \tag{5}\] The function \(\beta_{k}(\mathbf{x}^{*})\) is a weight for the \(k^{th}\) model's prediction based on the input \(\mathbf{x}^{*}\), such that \(\sum_{k=1}^{L}\beta_{k}(\mathbf{x}^{*})=1\). When each of the \(L\) predictions is a Gaussian distribution with mean \(\mu_{k}(\mathbf{x}^{*})\) and variance \(\sigma_{k}^{2}(\mathbf{x}^{*})\), the combined distribution is also Gaussian, with combined mean and variance given by \[\mu(\mathbf{x}^{*})=\sigma^{2}(\mathbf{x}^{*})\sum_{k=1}^{L}\beta_{k}(\mathbf{x}^{*})\sigma_{k}^{-2}(\mathbf{x}^{*})\mu_{k}(\mathbf{x}^{*}),\qquad\quad\sigma^{2}(\mathbf{x}^{*})=(\sum_{k=1}^{L}\beta_{k}(\mathbf{x}^{*})\sigma_{k}^{-2}(\mathbf{x}^{*}))^{-1}. \tag{6}\] For details on computing \(\beta_{k}\) see Appendix F.
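As a small worked example of Equation (6), the sketch below fuses per-expert Gaussian predictions at a single test location; the numbers are arbitrary and purely illustrative.

```python
# Sketch of the gPoE combination in Eq. (6) for one test location x*.
import numpy as np

def gpoe_combine(mus, variances, betas):
    """mus, variances, betas: arrays of shape (L,); betas must sum to one."""
    precision = np.sum(betas / variances)          # combined precision
    var = 1.0 / precision
    mu = var * np.sum(betas * mus / variances)
    return mu, var

# Two experts with equal weights: the more confident expert dominates the mean.
print(gpoe_combine(np.array([0.9, 1.1]), np.array([0.2, 0.4]), np.array([0.5, 0.5])))
```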
Although we have focused on the combined predictive distribution for the observed \(y^{*}\), in the case of a Gaussian likelihood the same results hold for the combined distribution of the latent function \(f^{*}\) (with the variance terms changed to reflect that we are working with the noiseless latent function). This implies we can choose to combine in either \(f\) or \(y\) space. For details on this see Appendix G. ### Extracting information from intermediate representations Consider a finite sequence of functions \(f_{1},...,f_{L+1}\) such that \(f_{1}:\mathbb{R}^{d}\rightarrow\mathcal{A}_{1}\), \(f_{2}:\mathcal{A}_{1}\rightarrow\mathcal{A}_{2}\),...,\(f_{L}:\mathcal{A}_{L-1}\rightarrow\mathcal{A}_{L}\), \(f_{L+1}:\mathcal{A}_{L}\rightarrow\mathbb{R}\), where each \(\mathcal{A}_{i}\subset\mathbb{R}^{d_{i}}\). The \(k^{th}\) **intermediate representation** of a point \(\mathbf{x}\in\mathbb{R}^{d}\) with respect to the sequence \(f_{1},...,f_{L+1}\) is defined to be \(\mathbf{e}_{k}(\mathbf{x}):=(f_{k}\circ f_{k-1}\circ...\circ f_{1})(\mathbf{x})\). The \(k^{th}\) **intermediate space** \(\mathcal{A}_{k}\) is the image of \((f_{k}\circ f_{k-1}\circ...\circ f_{1})(\cdot)\). A **composite model** \(\mathcal{M}\) is the composition of all \(L+1\) functions, \(\mathcal{M}(\mathbf{x}):=(f_{L+1}\circ f_{L}\circ...\circ f_{2}\circ f_{1})(\mathbf{x})\). A feed-forward neural network with \(L+1\) layers is an example of a composite model, and the output of each layer in the network, excluding the final layer, results in an intermediate space.
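A toy instance of these definitions, with hypothetical weight choices, makes the notation concrete:

```python
# Toy composite model with L = 2 intermediate spaces; weights are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
W1, W2, w3 = rng.normal(size=(2, 3)), rng.normal(size=(3, 3)), rng.normal(size=(3,))

f1 = lambda x: np.tanh(x @ W1)    # R^2 -> A_1 subset R^3
f2 = lambda a: np.tanh(a @ W2)    # A_1 -> A_2 subset R^3
f3 = lambda a: a @ w3             # A_2 -> R (final layer)

x = np.array([0.5, -0.2])
e1 = f1(x)               # first intermediate representation e_1(x)
e2 = f2(e1)              # second intermediate representation e_2(x)
out = f3(e2)             # composite model output M(x)
```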
In this work we heavily use the outputs of internal layers of DNNs, which has been done for explainable AI [2], deep kernel learning [32], deep Gaussian process [6] and within Resnet [11]. For explainable AI, the goal is to understand what features the network is learning to extract. Resnet's skip connections ensure, among other things, that adding new layers doesn't reduce the class of functions the network can model. However, neither of these two approaches use the outputs of the internal layers for UQ of DNNs or to improve GP approximations. While deep kernel learning is closest to our work, we use more than just the final layer which we believe is critical for uncertainty quantification and predictive performance (see the S-curve example in Section 6). Finally, unlike the deep Gaussian process, we are not feeding the output of one GP into the next, but instead feeding the output of network layers into Vecchia GPs. ## 4 Proposed methodology This section outlines our proposed methodology, beginning with a concise model summary. We then describe our approach for extracting a sequence of datasets from a pretrained neural network, followed by a discussion of our conditioning set computation process. Next, we detail our method for building an ensemble using the collections of conditioning sets. Throughout, we explain the design choices that informed the development of our model. ### Model summary We are provided with a dataset \(\mathcal{D}\) comprising \(n\) input-output pairs, \(\mathcal{D}=\{\mathbf{x}_{i},y_{i}\}_{i=1}^{n}\), and a composite model \(\mathcal{M}:\mathcal{X}\rightarrow\mathbb{R}\) that is trained to predict \(y_{i}\) given the corresponding \(\mathbf{x}_{i}\). Initially, we map the inputs \(\{\mathbf{x}_{i}\}_{i=1}^{n}\) to their respective intermediate representations, \(\{\mathbf{e}_{k}(\mathbf{x}_{i})\}_{i=1}^{n}\), where \(k=1,...,L\). We generate a collection of conditioning sets, \(\mathcal{G}_{k}\), for each set of intermediate representations, \(\{\mathbf{e}_{k}(\mathbf{x}_{i})\}_{i=1}^{n}\), by computing ordered nearest neighbors based on the Euclidean distance between the intermediate representations. The resulting tuple for each intermediate space \((\{\mathbf{e}_{k}(\mathbf{x}_{i}),\mathbf{y}_{i}\}_{i=1}^{n},\mathcal{G}_{k})\) is combined with an automatic relevance determination (ARD) kernel \(K_{\mathbf{\theta}_{k}}(\cdot,\cdot)\) of the appropriate dimension to form an ensemble of deep Vecchia Gaussian processes (GPs). At test time, the test point will be mapped to its intermediate spaces and each Vecchia GP will provide a predictive mean and variance. The predictions are then combined via a product of experts to form a single mean and covariance estimate. The diagram in Figure 2 provides a high-level overview of the proposed model. ### Dataset extraction from a neural network To begin we describe how we extract a sequence of datasets from a pretrained neural network and the data it was trained on. Suppose the first layer of our pretrained network is of the form \(f_{1}(\mathbf{x})=\sigma(\mathbf{x}^{T}\mathbf{W}^{(1)}+\mathbf{b}^{(1)})\), with, \(\mathbf{W}^{(1)}\in\mathbb{R}^{dxd_{1}}\), \(\mathbf{b}_{1}\in\mathbb{R}^{d_{1}}\) and \(\sigma_{1}:\mathbb{R}^{d_{1}}\rightarrow\mathbb{R}^{d_{1}}\). Then \(\mathbf{x}\)'s first intermediate representation, \(\mathbf{e}_{1}(\mathbf{x}):=f_{1}(\mathbf{x})\), will lie in a subset of \(\mathbb{R}^{d_{1}}\), that is \(\mathbf{e}_{1}(\mathbf{x})\in\mathcal{A}_{1}\subset\mathbb{R}^{d_{1}}\). 
Similarly, \(\mathbf{x}\)'s second intermediate representation is given by \(\mathbf{e}_{2}(\mathbf{x}):=(f_{2}\circ f_{1})(\mathbf{x})\), where \(f_{2}(\mathbf{e}_{1}(\mathbf{x}))=\sigma_{2}(\mathbf{e}_{1}(\mathbf{x})\mathbf{W}^{(2)}+\mathbf{b}^{(2)})\) with \(\mathbf{W}^{(2)}\in\mathbb{R}^{d_{1}\times d_{2}}\), \(\mathbf{b}^{(2)}\in\mathbb{R}^{d_{2}}\), and \(\sigma_{2}:\mathbb{R}^{d_{2}}\rightarrow\mathbb{R}^{d_{2}}\). We can define the remaining \(L-2\) intermediate representations in a similar way, and by combining the intermediate representations with the responses we can generate \(L\) datasets \(\{\mathcal{D}_{k}\}_{k=1}^{L}\), where \(\mathcal{D}_{k}=\{\mathbf{e}_{k}(\mathbf{x}_{i}),y_{i}\}_{i=1}^{n}\). We assume the same ordering of observations for all \(L\) datasets, so \(y_{i}\) will always refer to the same response for every \(\mathcal{D}_{k}\). An example of generating \(L\) datasets is visualized in Figure 3 for the bike UCI dataset. Each of the panels contains one dataset to which we fit a Vecchia GP. ### Conditioning-set selection Figure 3 suggests that the function \(\mathbf{e}_{k}(\mathbf{x})\mapsto y\) may be smooth, and therefore we choose to estimate this function using a Gaussian process. For computational purposes, we approximate each of the GPs using Vecchia with nearest neighbors computed in the corresponding intermediate space. Doing so gives rise to several conditioning sets for each observation \(\mathbf{x}_{i}\). For example, consider the \(i^{th}\) input \(\mathbf{x}_{i}\) and its previously ordered nearest neighbors. In Figure 3, \(\mathbf{x}_{i}\) is given by the blue dot in the upper left panel, and the black crosses are \(\mathbf{x}_{i}\)'s previously ordered nearest neighbors. As \(\mathbf{x}_{i}\) and its nearest neighbors are mapped to their first intermediate representations, we see that \(\mathbf{x}_{i}\)'s new nearest neighbors (the magenta crosses) are different from the original conditioning set. The magenta crosses in each subsequent panel represent \(\mathbf{x}_{i}\)'s previously ordered nearest neighbors in the corresponding intermediate space. The magenta crosses change from panel to panel, giving different conditioning sets as \(\mathbf{x}_{i}\) moves through the network. Figure 2: The input data \(\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x}_{3},\ldots\) are fed into the network, and the output of each intermediate layer creates a new intermediate space (e.g., \(\mathcal{A}_{1}\)). Within each intermediate space, we can consider the nearest neighbors of any point in the space. For example, in the box labeled \(\mathcal{A}_{1}\), we see that the red test point \(\mathbf{x}^{*}\) is closest to \(\mathbf{e}_{1},\mathbf{e}_{3},\mathbf{e}_{4}\) (in the first intermediate space), and in \(\mathcal{A}_{2}\), the closest points to \(\mathbf{x}^{*}\) are \(\mathbf{e}_{2},\mathbf{e}_{3},\mathbf{e}_{4}\). We define \(L\) Vecchia GPs, denoted by \(\mathcal{V}_{1},\ldots,\mathcal{V}_{L}\), which estimate a distribution over the response using the nearest neighbors in their respective intermediate spaces. The resulting distributions are combined in a product-of-experts fashion to yield a single distribution parameterized by \(\hat{\mu}\) and \(\hat{\sigma}\). Although the neural network generates a point estimate \(\hat{y}^{*}\) for the value of \(y^{*}\), our method provides a distribution that better reflects the uncertainty in the prediction.
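A sketch of how one collection of conditioning sets \(\mathcal{G}_{k}\) could be computed from the \(k^{th}\) intermediate representations is given below; the brute-force per-point search is an illustrative assumption chosen for clarity rather than the scaled implementation used in practice.

```python
# Sketch: ordered nearest-neighbor conditioning sets in one intermediate space.
import numpy as np

def conditioning_sets(E_k, m):
    """E_k: (n, d_k) intermediate representations in the assumed ordering.
    Returns g_k(i): up to m nearest neighbors among points preceding i."""
    n = E_k.shape[0]
    g = [np.array([], dtype=int)]            # first point has no predecessors
    for i in range(1, n):
        d = np.linalg.norm(E_k[:i] - E_k[i], axis=1)   # distances to predecessors
        g.append(np.argsort(d)[:min(m, i)])
    return g

# One collection per intermediate space: G = [conditioning_sets(E[k], m=10) for k in range(L)]
```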
Figure 3: The 2D TSNE projection of input points in the original space of the bike UCI dataset (top left panel) and intermediate spaces (remaining five panels) with the color denoting the response value corresponding to each point. The blue dot is a fixed test point that has been propagated through the network. The black crosses denote the ordered nearest neighbors of the blue dot in the original input space, and their corresponding intermediate representations. The magenta crosses denote the ordered points nearest to the blue point within each intermediate space.

Letting \(g_{k}(i)\) denote the conditioning set for observation \(i\) from intermediate space \(k\), we can consider the **collection of conditioning sets**, which we denote by \(\mathcal{G}_{k}:=\{g_{k}(1),\ldots,g_{k}(n)\}\). An \(L+1\) layer network will have \(L\) collections of conditioning sets, \(\mathcal{G}_{1},\ldots,\mathcal{G}_{L}\). By using several collections of conditioning sets, we are utilizing the unique information present in the panels titled 'First Intermediate Space' through 'Fifth Intermediate Space'. This approach can lead to better predictions compared to using just one collection of conditioning sets. ### Ensemble building from conditioning sets Each \(\mathcal{G}_{k}\) represents a hypothesis about the conditional independence structure of the data. For instance, with time series data produced by an AR(2) process, the \(i^{th}\) observation should depend on the two preceding observations in time, while for an AR(1) process, the conditioning set is limited to the immediately preceding data point. However, in general, it is unclear which \(\mathcal{G}_{k}\) is most appropriate. Since we do not know which \(\mathcal{G}_{k}\) is most suitable, we propose an ensemble of Vecchia GPs, each of which is based on one of the collections of conditioning sets. With each Vecchia GP independently fit to its corresponding dataset (e.g., \(\mathcal{D}_{k}\)), we combine the predictions from each member of the ensemble in a product-of-experts fashion. Details on training and prediction are provided in Appendix A. ## 5 Application to pretrained models We report results using networks that were pretrained on a collection of UCI regression tasks and a network pretrained for a face age estimation task. In each case, we compare our results to those obtained with the original network, and, where appropriate, we also compare against other GP-based methods, such as SVI-GP. All GPs use an ARD Matern 3/2 kernel. We assess performance using root mean square error (RMSE) and negative log-likelihood (NLL) on the standardized test data for the UCI regression tasks. We use the mean absolute error (MAE) on the original scale for face age estimation. We combine our predictions in \(y\)-space and details on this choice are given in Appendix G. ### UCI benchmarks We trained a feed-forward neural network on three train-validation-test splits of UCI regression tasks. After training, we constructed the deep Vecchia ensemble as detailed in Section 4 and made predictions for the test set using the method described in Appendix A. Additionally, we trained the stochastic variational inference GP (SVI-GP) of Hensman, Fusi, and Lawrence [12] and the scaled Vecchia GP of Katzfuss, Guinness, and Lawrence [17] separately on the training data for prediction on the test set. Table 1 compares NLL (left column under each dataset) and RMSE (right column under each dataset) for different methods on UCI datasets.
The bold value in each column is the best performing (lower is better). The value in parentheses is the sample standard deviation of the mean of the metric over the three different data splits. The last line of the table specifies the sample size (\(n\)) and dimension (\(d\)) for each dataset. Our method is given by "DVE", "Neural Net" represents the original neural network, "Vecchia" stands for the scaled Vecchia model of Katzfuss, Guinness, and Lawrence [17], and SVI-GP is the model from Hensman, Fusi, and Lawrence [12]. We can see that DVE performs the best among all given models in terms of NLL and is second only to the neural network in RMSE on two datasets. Of course the DVE estimates are built upon the network, but it is remarkable that the DVE is able to attach uncertainty estimates to the neural network's predictions while providing much better performance than existing GP approximations. ### Face age estimation We now use our deep Vecchia ensemble with a pretrained network for face age estimation. Face age estimation involves predicting the age of a subject in a photograph. The moving window regression (MWR) of Shin, Lee, and Kim [27] is a CNN-based approach that uses ordinal regression to solve this problem. The MWR procedure uses a CNN to extract features of a test image as well as two reference images. These features are concatenated together and fed into a feedforward neural network, called a \(\rho\)-regressor, to predict the age of the test image relative to the reference images. We use the pretrained global \(\rho\)-regressor for the UTKFace dataset from Shin, Lee, and Kim [27] to form the intermediate representations of a deep Vecchia ensemble as described in Section 4.1. For more details on the MWR procedure, the UTKFace dataset, our processing and comparisons, see Appendix D. In Table 2, we compare the results of the original global model (Global \(\rho\)-regressor), the deep Vecchia ensemble (DVE), a scaled Vecchia GP [17] fit to the output of the second-to-last layer (Vecchia) and the SVI-GP of Hensman, Fusi, and Lawrence [12] (SVI-GP), also fit to the outputs of the second-to-last layer. The latter two approaches can be viewed as variants of deep-kernel GPs. By using the deep Vecchia ensemble, we were able to decrease the MAE for the global model and additionally provide an estimate of the model's uncertainty. The deep Vecchia ensemble also achieved a better NLL than the other models that provided uncertainty estimates. The difference in NLL and MAE between the Vecchia GP on the second-to-last layer and the deep Vecchia ensemble supports our claim that there is value in using the information in the intermediate layers for both predictions and uncertainty quantification. ## 6 Exploring intermediate spaces In this section we investigate the intermediate spaces of a small network to understand how the deep Vecchia ensemble is using different layers. We fit a feed-forward network to samples from the S-curve dataset [28]. In the bottom row of Figure 4, we visualize the internal representations of the network.
\begin{table} \begin{tabular}{l c c c c c c c c c c} \hline \hline & \multicolumn{2}{c}{Kin40K} & \multicolumn{2}{c}{Elevators} & \multicolumn{2}{c}{KEGG} & \multicolumn{2}{c}{bike} & \multicolumn{2}{c}{Protein} \\ Method & NLL & RMSE & NLL & RMSE & NLL & RMSE & NLL & RMSE & NLL & RMSE \\ \hline DVE & **-1.445 (0.011)** & **0.057 (0.001)** & **0.424 (0.010)** & 0.373 (0.007) & **-1.021 (0.030)** & 0.087 (0.002) & **-3.154 (0.109)** & **0.016 (0.002)** & **0.646 (0.029)** & **0.494 (0.014)** \\ Neural Net & N/A & 0.069 (0.000) & N/A & **0.343 (0.006)** & N/A & **0.085 (0.002)** & N/A & 0.018 (0.001) & N/A & 0.014 (0.002) \\ Vecchia & -0.028 (0.015) & 0.209 (0.015) & 0.525 (0.015) & 0.414 (0.006) & -0.497 (0.026) & 0.098 (0.002) & 0.229 (0.010) & 0.183 (0.001) & 0.581 (0.003) & 0.571 (0.002) \\ SVI-GP & -0.093 (0.008) & 0.068 (0.005) & 0.455 (0.017) & 0.381 (0.007) & -0.912 (0.033) & 0.098 (0.003) & -1.400 (0.047) & 0.073 (0.004) & 1.085 (0.001) & 0.713 (0.001) \\ \hline \((n,d)\) & \multicolumn{2}{c}{(40.0K, 8)} & \multicolumn{2}{c}{(16.6K, 18)} & \multicolumn{2}{c}{(48.8K, 20)} & \multicolumn{2}{c}{(17.4K, 17)} & \multicolumn{2}{c}{(45.7K, 9)} \\ \hline \hline \end{tabular} \end{table} Table 1: NLL (left columns) and RMSE (right columns) on UCI benchmarks

\begin{table} \begin{tabular}{l c c} \hline \hline & MAE & NLL \\ \hline Global \(\rho\)-Regressor & 4.650 & N/A \\ DVE & 4.636 & 1.085 \\ SVI-GP & 4.661 & 2.450 \\ Vecchia & 4.693 & 1.789 \\ \hline \hline \end{tabular} \end{table} Table 2: MAE and NLL for UTKFace dataset

In the top row of Figure 4, we visualize the original data and the predictions formed by the deep Vecchia ensemble and the neural network. For more details, see Appendix E. The first internal representation had more separation between yellow and green points compared to the final representation, indicating that the earlier layer held information distilled away by the final layer. The deep Vecchia ensemble preserved this information, while the neural network discarded it. The deep Vecchia prediction closely resembled the original dataset, as supported by lower RMSE values compared to the neural network's prediction. We believe a similar phenomenon is occurring with the models used in Section 5. To check whether the observed behavior held more generally, we repeated the experiment using different networks; the results were the same for every network we checked. For example, with four hidden layers with (10, 5, 2, 2) units, the RMSE dropped from 0.18 to 4.2e-5. Changing the number of units per layer to (16, 8, 4, 4) showed the RMSE drop from 0.007 to 3.9e-5. ### Interpreting internal representations Our model can be seen from two perspectives: an ensembling of Vecchia GPs, and a method for quantifying epistemic uncertainty in neural networks. Below we relate both viewpoints to the results in this section. From the Vecchia ensemble stance, we can see the intermediate representations in Figure 4 as providing three different distance metrics that can be used to compute possibly unique conditioning sets. The requirements to make this connection are formalized in Proposition 1, whose proof can be found in Appendix B: **Proposition 1**.: _Consider a sequence of injective functions \(f_{1},\ldots,f_{L},f_{L+1}\) and a sequence of metric spaces \(\{(M_{i},d_{i})\}_{i=1}^{L+1}\) such that \(f_{i}:M_{i}\rightarrow M_{i+1}\) for \(i=1,\ldots,L\).
Then \(\{(M_{i},\tilde{d}_{i})\}_{i=2}^{L+1}\) defines a sequence of metric spaces, where \(\tilde{d}_{i}(\mathbf{x},\mathbf{x}^{\prime}):=d_{i}(\mathbf{e}_{i}(\mathbf{x}),\mathbf{e}_{i}(\mathbf{x}^{\prime}))\) for any \(\mathbf{x},\mathbf{x}^{\prime}\in M_{1}\), with \(\mathbf{e}_{i}(\mathbf{x}):=f_{i}\circ f_{i-1}\circ\cdots\circ f_{1}(\mathbf{x})\)._ Proposition 1's requirement that the functions be injective ensures unique internal representations for unique input points, but continuity is not required. Discontinuous layers are useful when modeling functions with high Lipschitz constants or discontinuities. In such cases, predicting values using Euclidean nearest neighbors in \(\mathbb{R}^{d}\) can be inaccurate, but the function may be continuous with respect to a different distance metric on \(\mathbb{R}^{d}\). The difference in continuity is reminiscent of what we observed in Figure 3, where the function became progressively smoother for later intermediate spaces. While a similar goal is accomplished using deep kernels, by using multiple intermediate spaces we allow for competing hypotheses about which distance metric is best. From the perspective of uncertainty in neural networks, we can see the sequence of distance metrics as measuring how far a test point is from instances in the training data. If a test point is close to existing training data with respect to the entire sequence of distance metrics, then the ensemble of Vecchia GPs will provide a prediction with small uncertainty. If the test point is out of distribution, then the internal representations will not match existing data and we will return something close to the GP prior.

Figure 4: This figure shows the original data from the S-curve dataset in the top left panel. The top middle panel depicts the predictions generated by the deep Vecchia ensemble, while the top right panel shows the predictions from the neural network. The three panels in the bottom row show the internal representations of the data from the three hidden layers of the network. The color of the points in each of these panels corresponds to the response value. The arrows pointing to the deep Vecchia prediction emphasize that the deep Vecchia ensemble builds predictions from all three layers. In contrast, the final network prediction relies on just the final layer, as indicated by the single arrow pointing to it.

## 7 Conclusion We presented an approach for UQ of pretrained neural networks. Our model has the added benefit of improving the Vecchia approximation and is the first approach to ensemble over the conditioning sets of the Vecchia approximation. The effectiveness of the model was shown on benchmark regression tasks and face age estimation. The study on internal representations provided insight into the inner workings of the approach. We believe that our approach can find success in latent space Bayesian optimization [9] procedures that use a neural network to map latent representations to the response of interest, especially if the map between the latent space and the objective is not smooth. Our approach may reduce the need to provide additional structure to the latent space to allow for GP regression such as in Grosnit et al. [10]. We also believe that our method would work well in any application where nearest neighbors cannot be computed using Euclidean distance between inputs alone, such as high-dimensional regression. Our proposed model has four key limitations.
First, we need the data used to train the network, which may not always be available. One possible solution is to simultaneously fit the network and an ensemble of inducing-point GPs, or to use a smaller validation set to build the deep Vecchia ensemble. The second limitation is that we combined predictions in \(y\)-space, so we are limited to using a Gaussian likelihood. This can be remedied by combining in \(f\)-space and defining a procedure to combine the noise terms in each of the Vecchia GPs' likelihoods. The third limitation is that we use a DNN with fixed weights, whereas many BNN methods assume the weights are random variables. To incorporate the DVE into BNNs, each intermediate representation can be expressed by a distribution rather than a point estimate. Then an error-in-variables Vecchia GP can be constructed by combining the kernel from Bachoc et al. [1] with the correlation Vecchia strategy of Kang and Katzfuss [15]. The final limitation is that our exploration was limited to feed-forward layers. To extend our model to more general architectures, a distance metric between layer outputs can be defined to make our approach applicable. Given our promising results and a clear path towards solving our current limitations, we believe the deep Vecchia ensemble should be adopted more broadly, and we would be excited to see others improve upon our work.
2310.00965
Effective Learning with Node Perturbation in Multi-Layer Neural Networks
Backpropagation (BP) remains the dominant and most successful method for training parameters of deep neural network models. However, BP relies on two computationally distinct phases, does not provide a satisfactory explanation of biological learning, and can be challenging to apply for training of networks with discontinuities or noisy node dynamics. By comparison, node perturbation (NP) proposes learning by the injection of noise into network activations, and subsequent measurement of the induced loss change. NP relies on two forward (inference) passes, does not make use of network derivatives, and has been proposed as a model for learning in biological systems. However, standard NP is highly data inefficient and unstable due to its unguided noise-based search process. In this work, we investigate different formulations of NP and relate it to the concept of directional derivatives as well as combining it with a decorrelating mechanism for layer-wise inputs. We find that a closer alignment with directional derivatives together with input decorrelation at every layer strongly enhances performance of NP learning with large improvements in parameter convergence and much higher performance on the test data, approaching that of BP. Furthermore, our novel formulation allows for application to noisy systems in which the noise process itself is inaccessible.
Sander Dalm, Marcel van Gerven, Nasir Ahmad
2023-10-02T08:12:51Z
http://arxiv.org/abs/2310.00965v4
# Effective Learning with Node Perturbation in Deep Neural Networks ###### Abstract Backpropagation (BP) is the dominant and most successful method for training parameters of deep neural network models. However, BP relies on two computationally distinct phases, does not provide a satisfactory explanation of biological learning, and can be challenging to apply for training of networks with discontinuities or noisy node dynamics. By comparison, node perturbation (NP) proposes learning by the injection of noise into the network activations, and subsequent measurement of the induced loss change. NP relies on two forward (inference) passes, does not make use of network derivatives, and has been proposed as a model for learning in biological systems. However, standard NP is highly data inefficient and unstable due to its unguided noise-based search process. In this work, we investigate different formulations of NP and relate it to the concept of directional derivatives as well as combining it with a decorrelating mechanism for layer-wise inputs. We find that a closer alignment with directional derivatives together with input decorrelation at every layer significantly enhances performance of NP learning, making its performance on the train set competitive with BP and allowing its application to noisy systems in which the noise process itself is inaccessible. ## 1 Introduction Backpropagation (BP) is the workhorse of modern artificial intelligence. It provides an efficient way of performing multilayer credit assignment, given a differentiable neural network architecture and loss function (Linnainmaa, 1970). Despite BP's successes, it requires an auto-differentiation framework for the backward assignment of credit, introducing a distinction between a forward, or inference, phase and a backward, or learning, phase, increasing algorithmic complexity and impeding implementation in (neuromorphic) hardware (Kaspar et al., 2021; Zenke & Neftci, 2021). Furthermore, BP has long been criticized for its lack of biological detail and plausibility (Grossberg, 1987; Crick, 1989; Lillicrap et al., 2016), with significant concerns being, again, the two separate forward and backward phases, as well as its reliance on gradient propagation and the non-locality of the information required for credit assignment. Alternative algorithms have been put forth over the years, though their inability to scale and difficulty in achieving levels of performance comparable to BP have held back their use. One such algorithm is node perturbation (NP) (Dembo & Kailath, 1990; Cauwenberghs, 1992). In NP, node activations are perturbed by a small amount of random noise. Weights are then updated to produce the perturbed activations in proportion to the degree by which the perturbation improved performance. This method requires measuring the loss function being optimised both before and after the inclusion of noise. Such an approach is appealing because it leverages the same forward pass twice, rather than relying on two computationally distinct phases. It also does not require non-local information other than the global performance signal. Despite these benefits, Hiratani et al. (2022) demonstrate that NP is extremely inefficient compared to BP, requiring two to three orders of magnitude more training cycles, depending on network depth and width. In addition, they found training with NP to be unstable, in many cases due to exploding weights.
Another phenomenon uncovered in their work is that the covariance of NP's updates is significantly higher than that of BP, which can mathematically be described as an effect mediated by correlations in the input data. Another proposed approach, referred to as weight perturbation (WP), is to inject noise directly into the weights of a network, rather than the activations (Werfel et al., 2003; Fiete & Seung, 2006). This method shows similarities to evolutionary algorithms and can outperform NP in special cases (Zuge et al., 2021). A downside to WP is that there are many more weights than nodes in neural networks, making the exploration space larger and thus slowing down learning. In fact, both NP and WP lag far behind BP in terms of efficiency. This is to be expected, however, as noise perturbations conduct a random search through parameter space, rather than being gradient directed. In this work, we put forward three contributions. First, we reframe the process of node perturbation together with the subsequent measurement of the output loss change in terms of directional derivatives. This provides a more solid theoretical foundation for node perturbation and, consequently, a different update rule, which we refer to as iterative node perturbation. Directional derivatives have been related to NP before in the context of forward gradient learning (Baydin et al., 2022; Ren et al., 2022). This approach is different from ours in that the perturbations are used to estimate gradients, but perturbed forward passes are not actually performed, making these approaches unsuitable for use in noisy systems. Second, we introduce an approximation to iterative node perturbation, referred to as activity-based node perturbation, which is more efficient and has the additional advantage that it can be implemented in noisy systems such as imprecise hardware implementations (Gokmen, 2021) or biological systems (Faisal et al., 2008), where the noise itself is not measurable. Third, we propose to use a decorrelation method, first described by Ahmad et al. (2023), to debias layer-wise activations and thereby achieve faster learning. Because NP-style methods directly correlate perturbations of unit activities with changes in a reward signal, decorrelation of these unit activities helps to eliminate confounding effects, making credit assignment more straightforward. In addition, as demonstrated by Hiratani et al. (2022), correlations in the input data lead to more bias in NP's updates by increasing their covariance. By combining decorrelation with the different NP methods, we find that it is possible to achieve an orders-of-magnitude increase in model convergence speed, with performance levels rivalling networks trained by BP in certain contexts. ## 2 Methods ### Node perturbation and its formulations Let us define the forward pass of a fully-connected neural network, with \(L\) layers, such that the output of a given layer \(l\in\{1,2,\ldots,L\}\) is given by \[\mathbf{x}_{l}=f\left(\mathbf{a}_{l}\right)\,,\] where \(\mathbf{a}_{l}=\mathbf{W}_{l}\mathbf{x}_{l-1}\) is the pre-activation with weight matrix \(\mathbf{W}_{l}\), \(f\) is the activation function and \(\mathbf{x}_{l}\) is the output from layer \(l\). The input to our network is therefore denoted \(\mathbf{x}_{0}\), and the output \(\mathbf{x}_{L}\).
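As a concrete reference for this notation, here is a minimal sketch (ours, not the authors' code) of the clean forward pass, with leaky ReLU as the assumed activation \(f\) and arbitrary layer sizes:

```python
import numpy as np

def forward(weights, x0, f=lambda a: np.where(a > 0, a, 0.01 * a)):
    """Compute x_l = f(a_l) with a_l = W_l x_{l-1} for l = 1..L.
    Returns the network output x_L and all pre-activations a_l."""
    x, pre_activations = x0, []
    for W in weights:
        a = W @ x
        pre_activations.append(a)
        x = f(a)
    return x, pre_activations

# Example: layer sizes 32 -> 64 -> 64 -> 10 (illustrative choices).
rng = np.random.default_rng(0)
sizes = [32, 64, 64, 10]
weights = [rng.normal(0.0, 1.0 / np.sqrt(m), size=(n, m))
           for m, n in zip(sizes[:-1], sizes[1:])]
x_L, pre_acts = forward(weights, rng.normal(size=32))
```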
We consider learning rules which update the weights of such a network in the form \[\mathbf{W}_{l}\leftarrow\mathbf{W}_{l}+\eta_{W}\Delta\mathbf{W}_{l}\,,\] where \(\eta_{W}\) is a small, constant learning rate, and \(\Delta\mathbf{W}_{l}\) is a parameter update direction derived from a given algorithm (BP, NP, etc.). Recall that the regular BP update is given by \[\Delta\mathbf{W}_{l}^{\text{BP}}=-\mathbf{g}_{l}\mathbf{x}_{l-1}^{\top} \tag{1}\] with \(\mathbf{g}_{l}=\frac{\partial\mathcal{L}}{\partial\mathbf{a}_{l}}\) the gradient of the loss \(\mathcal{L}\) with respect to the layer pre-activations \(\mathbf{a}_{l}\). In the following, we consider weight updates relative to one pair of inputs \(\mathbf{x}_{0}\) and targets \(\mathbf{t}\). In practice, these updates are averaged over mini-batches. #### 2.1.1 Traditional node perturbation In the most common formulation of NP, noise is injected into each layer's pre-activations and weights are updated in the direction of the noise if the loss improves and in the opposite direction if it worsens. Two forward passes are required: one clean and one noise-perturbed. During the noisy pass, noise is injected into the pre-activation of each layer to yield a perturbed output \[\mathbf{\tilde{x}}_{l}=f\left(\mathbf{\tilde{a}}_{l}+\mathbf{\epsilon}_{l}\right)=f\left(\mathbf{W}_{l}\mathbf{\tilde{x}}_{l-1}+\mathbf{\epsilon}_{l}\right)\,, \tag{2}\] where the added noise \(\mathbf{\epsilon}_{l}\sim\mathcal{N}(\mathbf{0},\sigma^{2}\mathbf{I}_{l})\) is a spherical Gaussian perturbation with no cross-correlation and \(\mathbf{I}_{l}\) is an \(N_{l}\times N_{l}\) identity matrix with \(N_{l}\) the number of nodes in layer \(l\). Note that this perturbation has a non-linear effect as the layer's perturbed output \(\mathbf{\tilde{x}}_{l}\) is propagated forward through the network, resulting in layers deeper in the network being perturbed slightly more than earlier layers. Having defined a clean and a noise-perturbed network pass, we can measure a loss differential for a NP-based update. Supposing that the loss \(\mathcal{L}\) is measured using the network outputs, the loss difference between the clean and noisy network is given by \[\delta\mathcal{L}=\mathcal{L}(\mathbf{\tilde{x}}_{L})-\mathcal{L}(\mathbf{x}_{L})\,,\] where \(\delta\mathcal{L}\) is a scalar measure of the difference in loss induced by the addition of noise to the network. Given this loss difference and the injected noise, we compute a layer-wise learning signal: \[\Delta\mathbf{W}_{l}^{\text{NP}}=-\frac{\delta\mathcal{L}}{\sigma}\mathbf{\epsilon}_{l}\mathbf{x}_{l-1}^{\top}\,. \tag{3}\] #### 2.1.2 Iterative node perturbation Though mathematically simple, the traditional NP approach described above has a number of drawbacks. One of the biggest drawbacks is that all layers are updated simultaneously and each layer's noise impact is confounded by additional noise in previous and following layers. Appendix A describes how correlations arise between layers. This creates a mismatch between traditional NP and a theoretically grounded approach for derivative measurement. In the following, we develop a more principled approach to node perturbation. Our goal is to approximate the gradient of the loss with respect to the pre-activations in a layer \(l\) without needing to propagate gradients. To this end, we consider the partial derivative of the loss with respect to the pre-activation \(a_{l}^{i}\) of unit \(i\) in layer \(l\) for all \(i\).
We define a perturbed state as \[\mathbf{\tilde{x}}_{k}(h)=f\left(\mathbf{\tilde{a}}_{k}+h\mathbf{m}_{k}\right)\] with \(h\) an arbitrary scalar and binary vectors \(\mathbf{m}_{k}=\mathbf{e}_{i}\) if \(k=l\), with \(\mathbf{e}_{i}\) a standard unit vector, and \(\mathbf{m}_{k}=\mathbf{0}\) otherwise. We may now define the partial derivatives as \[\left(\mathbf{g}_{l}\right)_{i}=\lim_{h\to 0}\frac{\mathcal{L}(\mathbf{\tilde{x}}_{L}(h))-\mathcal{L}(\mathbf{x}_{L})}{h}\,.\] This suggests that node perturbation can be rigorously implemented by measuring derivatives using perturbations \(h\mathbf{m}_{k}\) for all units \(i\) individually in each layer \(l\). However, this would require as many forward passes as there are nodes in the network, which would be extremely inefficient. An alternative approach is to define perturbations in terms of directional derivatives. Directional derivatives measure the derivative of a function based upon an arbitrary vector direction in its dependent variables. However, this can only be accomplished for a set of dependent variables which are individually independent of one another. Thus, we cannot compute such a measure across an entire network. We can, however, measure the directional derivative with respect to a specific layer via a perturbation given by \[\mathbf{\tilde{x}}_{k}(h)=f\left(\mathbf{\tilde{a}}_{k}+h\mathbf{v}_{k}\right)\,,\] where \(\mathbf{v}_{k}\sim\mathcal{N}(\mathbf{0},\mathbf{I}_{k})\) if \(k=l\) and \(\mathbf{v}_{k}=\mathbf{0}\) otherwise. Given this definition, we can directly measure a directional derivative of our deep neural network for layer \(l\) in vector direction \(\mathbf{v}=(\mathbf{v}_{1},\dots,\mathbf{v}_{L})\) as \[\nabla_{\mathbf{v}}\mathcal{L}=\lim_{h\to 0}\frac{\mathcal{L}\left(\mathbf{\tilde{x}}_{L}(h)\right)-\mathcal{L}\left(\mathbf{x}_{L}\right)}{h||\mathbf{v}||_{F}}\,.\] By sampling the noise vector repeatedly, we arrive at the gradient of the loss with respect to a specific layer: \[\mathbf{g}_{l}\approx\sqrt{N_{l}}\langle\nabla_{\mathbf{v}}\mathcal{L}\,\mathbf{v}\rangle_{\mathbf{v}}\,,\] which normalizes the length of the measured gradient vector based upon the projection length of the unit vectors sampled for the directional derivative onto the parameter axes. This description allows us to accurately measure the gradient of a particular layer of a deep neural network by perturbation. This can remain rather inefficient, given that the number of perturbation measures is now proportional to the number of layers in the network; however, it is much more efficient than individual node perturbations. We refer to this method, with weight update \[\Delta\mathbf{W}_{l}^{\text{INP}}=-\sqrt{N_{l}}\;\langle\nabla_{\mathbf{v}}\mathcal{L}\,\mathbf{v}\rangle_{\mathbf{v}}\;\mathbf{x}_{l-1}^{\top} \tag{4}\] as iterative node perturbation (INP). #### 2.1.3 Activity-based node perturbation Though theoretically well grounded, the iterative update method described above incurs significant computational overhead compared to the direct noise-based update method. Specifically, in order to precisely compute directional derivatives, noise must be applied in a purely layer-wise fashion, requiring a high degree of control over the noise injection. This results in a less biologically plausible and functionally more challenging implementation with less potential for scaling. Stepping beyond this method, we attempt to balance learning speed and computational cost by approximating this directional derivative across the whole network simultaneously.
This involves assuming that all node activations in our network are independent and treating the entire network as if it were a single layer. This can be achieved by, instead of measuring and tracking the injected noise alone, measuring the state difference between the clean and noisy forward passes of the whole network. Concretely, taking the definition of the forward pass given by NP in Eq. 2, we define \[\Delta\mathbf{W}_{l}^{\text{ANP}}=-\sqrt{N}\;\delta\mathcal{L}\;\frac{\delta\mathbf{a}_{l}}{||\delta\mathbf{a}||_{F}}\;\mathbf{x}_{l-1}^{\top}\,, \tag{5}\] where \(N=\sum_{l=0}^{L}N_{l}\) is the total number of units in the network, \(\delta\mathbf{a}_{l}=\mathbf{\widetilde{a}}_{l}-\mathbf{a}_{l}\) is the activity difference between a noisy and clean pass in layer \(l\) and \(\delta\mathbf{a}=(\delta\mathbf{a}_{1},\dots,\delta\mathbf{a}_{L})\) is the concatenation of all activity differences. We refer to this update rule as activity-based node perturbation (ANP). Here, we have updated multiple aspects of the NP rule. First, rather than using the measure of noise injected at each layer, we instead measure the total change in activation between the clean and noisy networks. This aligns our measure more closely with the directional derivative: we truly measure how the network changed in its response rather than ignoring the impact of all previous layers and their noise upon the current layer. This direct use of the activity difference also requires a recomputation of the scale of the perturbation vector. Here we carry out the rescaling by a normalization based upon the activity-difference length and then upscale the signal of this perturbation based upon the network size. Note that our directional derivative measuring process remains theoretically well grounded, but is now potentially biased in terms of the vector directions it measures derivatives in. This is due to biasing induced by noise propagated from previous layers with a correlated impact upon activity differences. ### Increasing NP efficiency through decorrelation Uncorrelated data variables have been shown to make credit assignment in deep neural networks more efficient (LeCun et al., 2002). If a layer's inputs, \(\mathbf{x}_{l}\), have highly correlated features, a change in one feature can be associated with a change in another correlated feature, making it difficult for the network to disentangle the contributions of each feature to the loss. This can lead to less efficient learning, as has been described in previous research in the context of BP (Luo, 2017; Wadia et al., 2021). NP additionally benefits from decorrelation of input variables at every layer. Specifically, Hiratani et al. (2022) demonstrate that the covariance of NP updates between layers \(k\) and \(l\) can be described as \[C_{kl}^{\text{np}}\approx 2C_{kl}^{\text{sgd}}+\delta_{kl}\left\langle\sum_{m=1}^{k}\|\mathbf{g}_{m}\|^{2}\mathbf{I}_{k}\otimes\mathbf{x}_{k-1}\mathbf{x}_{k-1}^{T}\right\rangle_{\mathbf{x}}\,,\] where \(C_{kl}^{\mathrm{sgd}}\) is the covariance of SGD updates, \(\delta_{kl}\) is the Kronecker delta and \(\otimes\) is a tensor product. The above equation implies that in NP the update covariance is twice that of the SGD updates plus an additional term that depends on the correlations in the input data, \(\mathbf{x}_{k-1}\mathbf{x}_{k-1}^{T}\). Removing correlations from the input data should therefore reduce the bias in the NP algorithm updates, possibly leading to better performance.
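As a minimal illustration of the update rules before decorrelation is added, the following sketch, ours rather than the authors' reference implementation, computes the traditional NP update of Eq. (3) and the ANP update of Eq. (5) for one input; the MSE loss, leaky-ReLU activation and noise scale are illustrative assumptions.

```python
import numpy as np

def perturbation_updates(weights, x0, target, sigma=1e-3, rule="NP"):
    """Sketch of the NP (Eq. 3) and ANP (Eq. 5) weight updates."""
    rng = np.random.default_rng()
    f = lambda a: np.where(a > 0, a, 0.01 * a)   # leaky ReLU (assumed)
    # Clean pass: store layer inputs x_{l-1} and pre-activations a_l.
    xs, pre, x = [x0], [], x0
    for W in weights:
        a = W @ x
        pre.append(a)
        x = f(a)
        xs.append(x)
    loss_clean = np.sum((x - target) ** 2)
    # Noisy pass: inject eps_l at every layer (Eq. 2).
    eps, pre_noisy, xn = [], [], x0
    for W in weights:
        e = rng.normal(0.0, sigma, size=W.shape[0])
        a = W @ xn + e
        eps.append(e)
        pre_noisy.append(a)
        xn = f(a)
    dL = np.sum((xn - target) ** 2) - loss_clean
    if rule == "NP":   # Eq. (3): use the injected noise directly
        return [-(dL / sigma) * np.outer(e, xp) for e, xp in zip(eps, xs[:-1])]
    # ANP, Eq. (5): use activity differences instead of the injected noise.
    da = [an - a for an, a in zip(pre_noisy, pre)]
    norm = np.sqrt(sum(np.sum(d ** 2) for d in da))    # ||delta a||_F
    N = x0.size + sum(W.shape[0] for W in weights)     # N = sum_l N_l
    return [-np.sqrt(N) * dL * np.outer(d / norm, xp)
            for d, xp in zip(da, xs[:-1])]
```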
In this work, we introduce decorrelated node perturbation, in which we decorrelate each layer's input activities using a trainable decorrelation procedure first described by Ahmad et al. (2023). A layer input \(\mathbf{x}_{l}\) is decorrelated by multiplication by a decorrelation matrix \(\mathbf{R}_{l}\) to yield a decorrelated input \(\mathbf{x}_{l}^{\star}=\mathbf{R}_{l}\mathbf{x}_{l}\). The decorrelation matrix \(\mathbf{R}_{l}\) is then updated according to \[\mathbf{R}_{l}\leftarrow\mathbf{R}_{l}-\eta_{R}\left(\mathbf{x}_{l}^{\star}\left(\mathbf{x}_{l}^{\star}\right)^{\top}-\text{diag}\left((\mathbf{x}_{l}^{\star})^{2}\right)\right)\mathbf{R}_{l}\,,\] where \(\eta_{R}\) is a small constant learning rate and \(\mathbf{R}_{l}\) is initialised as the identity matrix. For a full derivation of this procedure see Ahmad et al. (2023). Decorrelation can be combined with any of the formulations described in the previous paragraphs, which we refer to as DNP, DINP and DANP for regular, iterative and activity-based node perturbation, respectively. In Appendix B, Algorithm 1, we describe decorrelation using the activity-based update procedure. In Figure 1 we illustrate the operations carried out to compute updates for some of the methods explained above. ### Experimental validation To measure the performance of the algorithms proposed, we ran a set of experiments with the CIFAR-10 dataset (Krizhevsky, 2009), specifically aiming to quantify the performance differences between the traditional (NP), layer-wise iterative (INP) and activity-based (ANP) formulations of NP. These experiments were run using a three-hidden-layer fully-connected neural network with leaky ReLU activation functions and a mean squared error (MSE) loss on the one-hot encoding of class membership. Experiments were repeated using five different random seeds, after which performance statistics were averaged. The learning rates were determined for each algorithm separately by a grid search, in which the learning rate started at \(10^{-6}\) and was doubled until performance stopped increasing. See Appendix C for the learning rates used in the experiments. When using NP methods, activity perturbations were drawn from a univariate Gaussian with variance \(\sigma^{2}=10^{-6}\). Figure 1: A graphical illustration of the computations required for update measurement for backpropagation (left), node perturbation (middle), and decorrelated activity-based node perturbation (right). Another experiment was run with CIFAR-10 to determine the performance effects of decorrelation. This was done in both a single-layer and a three-hidden-layer network, so that the scaling properties of the algorithms could be studied. Decorrelated BP (DBP) was also added as a baseline for this and subsequent simulations. The specific architecture constructions are described in Appendix D. Additionally, two scaling experiments were run: one using CIFAR-10 with 3, 6, or 9 fully-connected hidden layers, and another using CIFAR-100 and a convolutional architecture with 3 convolutional layers, followed by a fully-connected hidden layer and an output layer. The purpose of these experiments was to assess whether the algorithms scale to deeper architectures and higher output dimensions. The latter has been shown to be a specific issue for the traditional NP algorithm (Hiratani et al., 2022). Finally, a version of the DANP algorithm was run where two noisy forward passes were used, instead of a clean and a noisy pass.
This experiment specifically investigates whether DANP might work in inherently noisy systems. ## 3 Results ### Comparing NP formulations In single-layer networks, the three described NP formulations (NP, INP and ANP) converge to an equivalent learning rule. Therefore, multi-layer networks with three hidden layers were used to investigate performance differences across our proposed formulations. Figure 2 shows how weight updates of the different node perturbation methods compare to those from BP in a three-hidden-layer, fully-connected, feedforward network for the CIFAR-10 classification task. When measuring the angles of comparison between the various methods, we can observe that the INP method is by far the most well-aligned in its updates with respect to backpropagation, closely followed by ANP. These results align with the theory laid out in the methods section of this work. Note that for all of these algorithms, alignment with BP updates improves when updates are averaged over more samples of noise. Therefore, it is the ranking of the angles between the NP algorithms that is of interest here, not the absolute value of the angle itself, as all angles would improve with more noise samples.

Figure 2: Angles of various NP method updates with respect to BP updates, collected by averaging over 100 different noise samples during training.

### Impact of decorrelation To assess the impact of decorrelation on NP's performance, we studied a single-layer network trained with NP, DNP, BP and DBP. Note that the different formulations of NP are identical in a single-layer network, where there is no impact on activities in the layer from past layers. Figure 3 shows that, when training single-layer networks, NP holds up relatively well compared to BP. This is likely due to the limited number of output nodes that need to be perturbed, making NP a decent approximation of BP in this regime. By comparison, DNP converges _faster_ than both BP and NP, despite using only a global reward signal. DNP's benefit is less pronounced in test accuracy compared to train accuracy. As in Ahmad et al. (2023), the benefit of decorrelation for learning was also observed in decorrelated backpropagation (DBP). It appears that part of the benefit of decorrelation is due to a lack of correlation in the input unit features and a corresponding ease in credit assignment without confound. An additional benefit from decorrelation, specific to NP-style updates, is explained by the way in which decorrelation reduces the covariance of NP weight updates, as described in the methods section above. ### Neural network performance comparisons We proceed by determining the performance of the different algorithms in multi-layer neural networks. See Appendix E for peak accuracies. Figure 4 shows the various network performance measures when training a multi-layer fully-connected network. Though the measured angles during training differ between these algorithms, we can see that this does not appear to have a significant effect in distinguishing the ANP and NP methods. INP does show a performance benefit relative to the other two formulations. Also, all versions of DNP are competitive with BP in terms of train accuracy, with DINP even outperforming BP. On test accuracy, BP remains the best algorithm, while DBP performs by far the best on the train set.
Note that DNP performs much better in a three-hidden-layer network than in the single-layer networks explored in Figure 3, meaning that DNP does facilitate multi-layer credit assignment much more than regular NP. Figure 5 compares the performance of DANP and DINP for 3, 6 and 9 hidden layers. Performance for both algorithms is robust across network depth, with DINP outperforming DANP at every network depth. Though increasing the number of hidden layers beyond three did not boost performance, it also did not make the algorithms unstable, showing potential for the use of DNP in deeper networks. This is an area for future research.

Figure 3: Performance of node perturbation and backpropagation with and without decorrelation on CIFAR-10 when training a fully-connected single-layer architecture. Note that all NP methods have an equivalent formulation in a single-layer network.

Figure 4: Performance of different NP and BP formulations on CIFAR-10 when training fully-connected three-hidden-layer architectures.

To further investigate the scaling and generalization capacities of our approach, we also trained and tested convolutional architectures for image classification. Figure 6 shows that, when training a convolutional neural network on CIFAR-100, DANP massively outperforms ANP in both train and test accuracy, with DINP performing even better. Both DNP formulations outperform BP on the train set and DINP also outperforms BP on the test set. DBP also outperforms BP, showing the best test performance of all algorithms and catching up with DANP, but not DINP, late in training in terms of train performance. Note that performance of all algorithms is quite low compared to CIFAR-100 benchmarks, as we are using a relatively shallow convolutional neural network in combination with MSE loss, which is not the standard for classification problems with a large output space. The purpose of this experiment was not to attain competitive classification performance per se, but to provide a comparative study of BP and DNP under a simple training regime. These results also illustrate that the application of such perturbation-based methods can extend trivially beyond fully-connected architectures. ### Node perturbation for noisy systems One of the most interesting applications of perturbation-based methods for learning is in systems which cannot operate without the possibility of noise. This includes both biological nervous systems as well as a range of neuromorphic hardware architectures. In order to demonstrate that this method is also applicable to architectures with embedded noise, we train networks in which there is no clean network pass available. Instead, two noisy network passes are computed and one is taken as if it were the clean pass. That is, we use \(\delta\mathbf{a}_{l}=\tilde{\mathbf{a}}_{l}^{(1)}-\tilde{\mathbf{a}}_{l}^{(2)}\) in Eq. 5. This is similar in spirit to the approach suggested by Cho et al. (2011). In this case we specifically compare and apply (decorrelated) activity-based node perturbation. This method does not assume that the learning algorithm can independently measure noise. Instead it can only measure the present, and potentially noisy, activity and thereafter measure activity differences to direct learning. As can be seen in Figure 7, computing updates based upon a set of two noisy passes, rather than a clean and noisy pass, produces extremely similar learning dynamics with some minimal loss in performance.
The similarity of the speed of learning and performance levels suggests that clean network passes may provide little additional benefit for the computation of updates in this regime. Figure 5: Performance of DANP and DINP for 3-, 6- and 9-hidden-layer networks. Figure 6: Performance of different node perturbation and backpropagation variants when training a convolutional neural network on CIFAR-100. ## 4 Discussion In this work, we explored several formulations of NP learning and also introduced a layer-wise decorrelation method which significantly outperforms the baseline implementations. We attribute this increased efficacy to more efficient credit assignment by virtue of decorrelated input features at every layer, as well as an attenuation of the bias in NP's updates caused by their covariance. In our results we see robust speedups in training across architectures. This speedup is sufficient to suggest that such alternative formulations of NP could prove fast enough to be competitive with traditional BP in certain contexts. The inclusion of decorrelation does appear to have an impact on the level of overfitting in these systems, with our NP-trained networks having a generally lower test accuracy. This fits with recent research which suggests an overfitting impact of data whitening (Wadia et al., 2021). In this respect, further investigation is warranted. Exploring more efficient forms of noise-based learning is interesting beyond credit assignment alone. First, this form of learning is more biologically plausible as it does not require weight transport of any kind, or even any specific feedback connectivity or computations. There is ample evidence for noise in biological neural networks and we suggest here that this could be effectively used for learning. Decorrelation in the brain is less well evidenced; however, various mechanisms act to reduce correlation or induce whitening, especially in early visual cortical areas (King et al., 2013). Additionally, lateral inhibition, which is known to occur in the brain, can be interpreted as a way to reduce redundancy in input signals akin to decorrelation, making outputs of neurons less similar to each other (Bekesy, 1967). In addition, noise-based learning approaches might be more efficiently implemented in hardware. First, as demonstrated in Figure 7, our proposed DANP algorithm scales well even when there is no access to a 'clean' model pass. This means that such an approach could be ideally suited for application in noisy inference hardware. Even on traditional hardware architectures, forward passes are often easier to optimize than backward passes and are often significantly faster to compute. This can be especially true for neuromorphic computing approaches, where backward passes require automatic differentiation implementations and a separate computational pipeline. In both of these cases, a noise-based approach to learning could prove highly efficient. Though the presented methods for effective noise-based learning show a great deal of promise, there are a number of additional research steps to be taken. The architectures considered are relatively shallow, and thus an investigation into how well this approach scales for very deep networks would be beneficial. Testing the scalability of these approaches to tasks of greater complexity is also crucial, as is their application to other network architectures such as residual networks and transformers.
In general, our work opens up exciting opportunities since it has the potential to bring gradient-free training of deep neural networks within reach. That is, in addition to not requiring a backward pass, efficient noise-based learning may also lend itself to networks not easily trained by backpropagation, such as those containing activation functions with jumps, binary networks, or networks in which the computational graph is broken, as in reinforcement learning. Figure 7: Performance of DANP when applied with one clean and one noisy network pass, versus two noisy network passes. These curves are for a three-layer fully-connected network trained for CIFAR-10 classification (comparable to Figure 4). ## Acknowledgements This publication is also part of the project Dutch Brain Interface Initiative (DBI2) with project number 024.005.022 of the research programme Gravitation, which is (partly) financed by the Dutch Research Council (NWO).
2305.02334
Structures of Neural Network Effective Theories
We develop a diagrammatic approach to effective field theories (EFTs) corresponding to deep neural networks at initialization, which dramatically simplifies computations of finite-width corrections to neuron statistics. The structures of EFT calculations make it transparent that a single condition governs criticality of all connected correlators of neuron preactivations. Understanding of such EFTs may facilitate progress in both deep learning and field theory simulations.
Ian Banta, Tianji Cai, Nathaniel Craig, Zhengkang Zhang
2023-05-03T18:00:00Z
http://arxiv.org/abs/2305.02334v1
# Structures of Neural Network Effective Theories ###### Abstract We develop a diagrammatic approach to effective field theories (EFTs) corresponding to deep neural networks at initialization, which dramatically simplifies computations of finite-width corrections to neuron statistics. The structures of EFT calculations make it transparent that a single condition governs criticality of all connected correlators of neuron preactivations. Understanding of such EFTs may facilitate progress in both deep learning and field theory simulations. _Introduction --_ Machine learning (ML) has undergone a revolution in recent years, with applications ranging from image recognition and natural language processing, to self-driving cars and playing Go. Central to all these developments is the engineering of deep neural networks, a class of ML architectures consisting of multiple layers of artificial neurons. Such networks are apparently rather complex, with a daunting number of trainable parameters, which means practical applications have often been guided by expensive trial and error. Nevertheless, extensive research is underway toward opening the black box. That a theoretical understanding of such complex systems is possible has to do with the observation that a wide range of neural network architectures actually admit a simple limit: they reduce to Gaussian processes when the network width (number of neurons per layer) goes to infinity [1; 2; 3; 4; 5; 6], and evolve under gradient-based training as linear models governed by the neural tangent kernel [7; 8; 9]. However, an infinitely-wide network neither exists in practice nor provides an accurate model for deep learning. It is therefore crucial to understand finite-width effects, which have recently been studied by a variety of methods [10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21; 22; 23]. This line of research in ML theory has an intriguing synergy with theoretical physics [24]. In particular, it has been realized that neural networks have a natural correspondence with (statistical or quantum) field theories [25; 26; 27; 28; 29; 30; 31; 32; 33; 34]. Infinite-width networks--which are Gaussian processes--correspond to free theories, while finite-width corrections in wide networks can be calculated perturbatively as in weakly-interacting theories. This allows for a systematically improvable characterization of neural networks beyond the (very few) exactly-solvable special cases [35; 36; 37]. Meanwhile, from an effective theory perspective [21], information propagation through a deep neural network can be understood as a renormalization group (RG) flow. Examining scaling behaviors near RG fixed points reveals strategies to tune the network to criticality [38; 39; 40], which is crucial for mitigating the notorious exploding and vanishing gradient problems in practical applications. In the reverse direction, this synergy also points to new opportunities to study field theories with neural networks [33]. Inspired by recent progress, in this letter we further explore the structures of effective field theories (EFTs) corresponding to archetypical deep neural networks. To this end, we develop a novel diagrammatic formalism.1 Our approach largely builds on the frameworks of Refs. [21; 22], which enable systematic calculations of finite-width corrections.
The diagrammatic formalism dramatically simplifies these calculations, as we demonstrate by concisely reproducing known results in the main text and presenting further examples with new results in the Supplemental Material. Interestingly, the structures of diagrams in the RG analysis suggest that neural network EFTs are of a quite special type, where a single condition governs the critical tuning of all neuron correlators. The study of these EFTs may lend new insights into both neural network properties and novel field-theoretic phenomena. Footnote 1: See also Refs. [13; 17; 18; 26; 27; 31; 32; 41] for Feynman diagram-inspired approaches to ML. _EFT of deep neural networks --_ The archetype of deep neural networks, the multilayer perceptron, can be defined by a collection of neurons whose values \(\phi_{i}^{(\ell)}\) (called preactivations) are determined by the following operations given an input \(\vec{x}\in\mathbb{R}^{n_{0}}\): \[\phi_{i}^{(1)}(\vec{x}) =\sum_{j=1}^{n_{0}}W_{ij}^{(1)}x_{j}+b_{i}^{(1)}\,,\] \[\phi_{i}^{(\ell)}(\vec{x}) =\sum_{j=1}^{n_{\ell-1}}W_{ij}^{(\ell)}\sigma\big{(}\phi_{j}^{(\ell-1)}(\vec{x})\big{)}+b_{i}^{(\ell)}\quad(\ell\geq 2)\,. \tag{1}\] Here superscripts in parentheses label layers, subscripts \(i,j\) label neurons within a layer (of which there are \(n_{\ell}\) at the \(\ell\)th layer), and \(\sigma(\phi)\) is the activation function (common choices include \(\tanh(\phi)\) or \(\text{ReLU}(\phi)\equiv\max(0,\phi)\)). The weights \(W_{ij}^{(\ell)}\) and biases \(b_{i}^{(\ell)}\) (\(\ell=1,\ldots,L\)) are the network parameters which are adjusted to minimize a loss function during training, such that the trained network can approximate the desired function. The basic idea of an EFT of deep neural networks is to consider an ensemble of networks, where at initialization, \(W_{ij}^{(\ell)}\) and \(b_{i}^{(\ell)}\) are drawn independently from zero-mean Gaussian distributions with variances \(C_{W}^{(\ell)}/n_{\ell-1}\) and \(C_{b}^{(\ell)}\), respectively. The statistics of this ensemble encode both the typical behavior of neural networks initialized in this manner and how a particular network may fluctuate away from typicality. In the field theory language, these are captured by a Euclidean action, \(\mathcal{S}[\phi]=-\log P(\phi)\), for all neuron preactivation fields \(\phi_{i}^{(\ell)}(\vec{x})\), where \(P(\phi)\) is the joint probability distribution.
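As a numerical aside (our own illustration, not part of the original text), the ensemble statistics can be probed directly by sampling networks of Eq. (1) with the stated Gaussian initialization and estimating neuron correlators empirically; the widths, depth and tanh activation below are arbitrary choices.

```python
import numpy as np

def sample_preactivations(x, n_layers=3, width=512, C_W=2.0, C_b=0.0,
                          n_networks=2000, seed=0):
    """Sample phi_i^{(L)}(x) over an ensemble of randomly initialized MLPs."""
    rng = np.random.default_rng(seed)
    n0 = len(x)
    samples = []
    for _ in range(n_networks):
        W = rng.normal(0.0, np.sqrt(C_W / n0), size=(width, n0))
        phi = W @ x + rng.normal(0.0, np.sqrt(C_b), size=width)
        for _ in range(n_layers - 1):
            W = rng.normal(0.0, np.sqrt(C_W / width), size=(width, width))
            phi = W @ np.tanh(phi) + rng.normal(0.0, np.sqrt(C_b), size=width)
        samples.append(phi)
    return np.array(samples)  # shape (n_networks, width)

# The empirical variance over the ensemble estimates <G^{(L)}(x, x)>,
# which approaches the kernel as the width grows.
phis = sample_preactivations(np.ones(4))
print(phis.var())
```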
As we review in the Supplemental Material, at initialization the conditional probability distribution at each layer is Gaussian: \[P\big{(}\phi^{(\ell)}\big{|}\phi^{(\ell-1)}\big{)}=\big{[}\text{det}\big{(}2\pi\mathcal{G}^{(\ell)}\big{)}\big{]}^{-n_{\ell}/2}\;e^{-\mathcal{S}_{0}^{(\ell)}}\,, \tag{2}\] \[\mathcal{S}_{0}^{(\ell)}=\int d\vec{x}_{1}d\vec{x}_{2}\,\frac{1}{2}\sum_{i=1}^{n_{\ell}}\phi_{i}^{(\ell)}(\vec{x}_{1})\big{(}\mathcal{G}^{(\ell)}\big{)}^{-1}(\vec{x}_{1},\vec{x}_{2})\,\phi_{i}^{(\ell)}(\vec{x}_{2})\,, \tag{3}\] where \(\mathcal{G}^{(\ell)}(\vec{x}_{1},\vec{x}_{2})=\frac{1}{n_{\ell-1}}\sum_{j=1}^{n_{\ell-1}}\mathcal{G}_{j}^{(\ell)}(\vec{x}_{1},\vec{x}_{2})\), with \[\mathcal{G}_{j}^{(\ell)}(\vec{x}_{1},\vec{x}_{2})=C_{b}^{(\ell)}+C_{W}^{(\ell)}\,\sigma\big{(}\phi_{j}^{(\ell-1)}(\vec{x}_{1})\big{)}\,\sigma\big{(}\phi_{j}^{(\ell-1)}(\vec{x}_{2})\big{)}\equiv C_{b}^{(\ell)}+C_{W}^{(\ell)}\,\sigma_{j,\vec{x}_{1}}^{(\ell-1)}\,\sigma_{j,\vec{x}_{2}}^{(\ell-1)} \tag{4}\] for \(\ell\geq 2\), and \(\mathcal{G}_{j}^{(1)}(\vec{x}_{1},\vec{x}_{2})=C_{b}^{(1)}+C_{W}^{(1)}\,x_{1j}\,x_{2j}\). We have taken the continuum limit in input space to better parallel field theory analyses. \(\big{(}\mathcal{G}^{(\ell)}\big{)}^{-1}\) is understood as the pseudoinverse when \(\mathcal{G}^{(\ell)}\) is not invertible. We see that for \(\ell\geq 2\), \(\mathcal{G}^{(\ell)}(\vec{x}_{1},\vec{x}_{2})\) is an operator of the \((\ell-1)\)th-layer neurons, so Eq. (3) is actually an interacting theory with interlayer couplings. This also means the determinant in Eq. (2) is not a constant prefactor. To account for its effect, we introduce auxiliary anticommuting fields \(\psi\), \(\bar{\psi}\) which are analogs of ghosts and antighosts in the Faddeev-Popov procedure. Including all layers, we have \[e^{-\mathcal{S}[\phi]}=\int\mathcal{D}\psi\mathcal{D}\bar{\psi}\;e^{-\sum_{\ell=1}^{L}\big{(}\mathcal{S}_{0}^{(\ell)}[\phi]+\mathcal{S}_{\psi}^{(\ell)}[\phi,\psi,\bar{\psi}]\big{)}}\,, \tag{5}\] where \(\mathcal{S}_{0}^{(\ell)}\) is given by Eq. (3) above and \[\mathcal{S}_{\psi}^{(\ell)}=-\int d\vec{x}_{1}d\vec{x}_{2}\sum_{i^{\prime}=1}^{n_{\ell}/2}\bar{\psi}_{i^{\prime}}^{(\ell)}(\vec{x}_{1})\big{(}\mathcal{G}^{(\ell)}\big{)}^{-1}(\vec{x}_{1},\vec{x}_{2})\,\psi_{i^{\prime}}^{(\ell)}(\vec{x}_{2})\,. \tag{6}\] The \(\ell\)th-layer neurons interact with the \((\ell-1)\)th-layer and \((\ell+1)\)th-layer neurons via \(\mathcal{S}_{0}^{(\ell)}\) and \(\mathcal{S}_{0}^{(\ell+1)}\), respectively, while their associated ghosts have opposite-sign couplings to the \((\ell-1)\)th-layer neurons but do not couple to \((\ell+1)\)th-layer neurons. This means \(\phi^{(\ell)}\) and \(\psi^{(\ell)}\) loops cancel as far as their couplings to \(\phi^{(\ell-1)}\) are concerned, which must be the case since the network has directionality--neurons at a given layer cannot be affected by what happens at deeper layers. _Neuron statistics from Feynman diagrams_ -- We are interested in calculating neuron statistics, _i.e._ connected correlators of neuron preactivation fields \(\phi_{i}^{(\ell)}(\vec{x})\) in the EFT above. More precisely, we would like to track the evolution of neuron correlators as a function of network layer \(\ell\), which encodes how information is processed through a deep neural network and has an analogous form to RG flows in field theory.
To this end, we develop an efficient diagrammatic framework to recursively determine \(\ell\)th-layer neuron correlators in terms of \((\ell-1)\)th-layer neuron correlators. Starting from the action Eq. (3), we can derive the following Feynman rule (see Supplemental Material for details): \[\frac{1}{n_{\ell-1}}\,\mathcal{G}_{j}^{(\ell)}\!(\vec{x}_{1},\vec{x}_{2})=\frac{1}{n_{\ell-1}}\,\big{\langle}\mathcal{G}_{j}^{(\ell)}\!(\vec{x}_{1},\vec{x}_{2})\big{\rangle}+\frac{C_{W}^{(\ell)}}{n_{\ell-1}}\,\Delta_{j}^{(\ell-1)}\!(\vec{x}_{1},\vec{x}_{2})\,. \tag{7}\] [Feynman diagrams for Eq. (7) not reproduced; in the diagrams, a blob denotes taking the expectation value of the operator (or product of operators) attached to it.] Eq. (7) contains both what we normally call propagators and vertices: the first term on the right-hand side, when summed over \(j\), is the full propagator (or two-point correlator) for \(\phi_{i}^{(\ell)}\), \[\big{\langle}\phi_{i_{1}}^{(\ell)}\!(\vec{x}_{1})\,\phi_{i_{2}}^{(\ell)}\!(\vec{x}_{2})\big{\rangle}=\delta_{i_{1}i_{2}}\big{\langle}\mathcal{G}^{(\ell)}\!(\vec{x}_{1},\vec{x}_{2})\big{\rangle}\,, \tag{8}\] while the second term is an interaction vertex between \(\phi_{i}^{(\ell)}\) bilinears and operators built from \(\phi_{j}^{(\ell-1)}\): \[\Delta_{j}^{(\ell-1)}\!(\vec{x}_{1},\vec{x}_{2})\equiv\sigma_{j,\vec{x}_{1}}^{(\ell-1)}\,\sigma_{j,\vec{x}_{2}}^{(\ell-1)}-\big{\langle}\sigma_{j,\vec{x}_{1}}^{(\ell-1)}\,\sigma_{j,\vec{x}_{2}}^{(\ell-1)}\big{\rangle} \tag{9}\] for \(\ell\geq 2\), and \(\Delta_{j}^{(0)}\!(\vec{x}_{1},\vec{x}_{2})=0\). From Eq. (7) it is clear that each \(\phi^{2}\Delta\) vertex comes with a factor of \(\frac{1}{n}\) (where \(n\) collectively denotes \(n_{1},\cdots,n_{L-1}\)). In the infinite-width limit, \(n\to\infty\), the EFT is a free theory, whereas for large but finite \(n\) we have a weakly-interacting theory in which higher-point connected correlators can be perturbatively calculated as a \(\frac{1}{n}\) expansion. To see how this works, let us first take a closer look at the two-point correlator Eq. (8) (which is automatically connected since we have normalized \(\int\mathcal{D}\phi\,e^{-\mathcal{S}}=1\), meaning the sum of vacuum bubbles vanishes). We can write it as an expansion in \(\frac{1}{n}\): \[\big{\langle}\mathcal{G}^{(\ell)}\!(\vec{x}_{1},\vec{x}_{2})\big{\rangle}=\sum_{p=0}^{\infty}\frac{1}{n_{\ell-1}^{p}}\,\mathcal{K}_{p}^{(\ell)}\!(\vec{x}_{1},\vec{x}_{2})\,. \tag{10}\] The leading-order (LO) term \(\mathcal{K}_{0}^{(\ell)}\) is known as the kernel; it is the propagator for \(\phi_{i}^{(\ell)}\) in the free-theory limit \(n\to\infty\). Evaluating \(\big{\langle}\mathcal{G}^{(\ell)}\!(\vec{x}_{1},\vec{x}_{2})\big{\rangle}\) in this limit amounts to using free-theory propagators \(\mathcal{K}_{0}^{(\ell-1)}\) for the previous-layer neurons \(\phi_{j}^{(\ell-1)}\) in the blob in Eq. (7): \[\mathcal{K}_{0}^{(\ell)}(\vec{x}_{1},\vec{x}_{2}) =\sum_{j}\frac{1}{n_{\ell-1}}\big{\langle}\mathcal{G}_{j}^{(\ell)}(\vec{x}_{1},\vec{x}_{2})\big{\rangle}_{\mathcal{K}_{0}^{(\ell-1)}}=C_{b}^{(\ell)}+C_{W}^{(\ell)}\big{\langle}\sigma_{\vec{x}_{1}}\sigma_{\vec{x}_{2}}\big{\rangle}_{\mathcal{K}_{0}^{(\ell-1)}}\,. \tag{11}\] Here the subscript \(\mathcal{K}_{0}^{(\ell-1)}\) means the expectation value is computed with the free-theory propagator \(\mathcal{K}_{0}^{(\ell-1)}\) (_cf._ Eq. (15) in the Supplemental Material). We have dropped both neuron and layer indices on \(\sigma\) because the \(\mathcal{K}_{0}^{(\ell-1)}\) subscript already indicates the layer, and the expectation value is identical for all neurons in that layer.
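The recursion Eq. (11) can be checked numerically: restricted to a pair of inputs, the kernel is a \(2\times 2\) matrix, and the Gaussian expectation \(\big{\langle}\sigma_{\vec{x}_{1}}\sigma_{\vec{x}_{2}}\big{\rangle}_{\mathcal{K}_{0}}\) can be estimated by sampling. A minimal sketch (our own, not from the paper; hyperparameters and Monte Carlo sizes are illustrative):

```python
# Monte Carlo check of the kernel recursion, Eq. (11), for two inputs:
# K0^(l) = C_b + C_W <sigma(phi_1) sigma(phi_2)>_{K0^(l-1)}.
import numpy as np

rng = np.random.default_rng(0)
C_W, C_b, sigma = 2.0, 0.1, np.tanh
x1, x2 = rng.normal(size=8), rng.normal(size=8)

def kernel_step(K, n_mc=200_000):
    """One RG step of Eq. (11); the 2x2 Gaussian expectation is sampled."""
    phi = rng.multivariate_normal(np.zeros(2), K, size=n_mc)
    s = sigma(phi)
    return C_b + C_W * (s[:, :, None] * s[:, None, :]).mean(axis=0)

# Ultraviolet boundary condition K0^(1) = C_b + (C_W / n_0) x1.x2:
K = C_b + C_W / len(x1) * np.array([[x1 @ x1, x1 @ x2],
                                    [x2 @ x1, x2 @ x2]])
for _ in range(3):          # flow the kernel through layers 2, 3, 4
    K = kernel_step(K)
print(K)   # compare with the empirical <phi phi> of an ensemble of wide networks
```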
One can further evaluate \(\big{\langle}\sigma_{\vec{x}_{1}}\sigma_{\vec{x}_{2}}\big{\rangle}_{\mathcal{K}_{0}^{(\ell-1)}}\) for specific choices of activation functions \(\sigma\), but we stay activation-agnostic for the present analysis. Eq. (11) allows us to recursively determine \(\mathcal{K}_{0}^{(\ell)}\) from \(\mathcal{K}_{0}^{(\ell-1)}\), and is well known from studies of infinite-width networks. It may also be viewed as the RG flow of \(\mathcal{K}_{0}\), with ultraviolet boundary condition \(\mathcal{K}_{0}^{(1)}(\vec{x}_{1},\vec{x}_{2})=C_{b}^{(1)}+\frac{C_{W}^{(1)}}{n_{0}}\,\vec{x}_{1}\!\cdot\!\vec{x}_{2}\). It is straightforward to extend the diagrammatic calculation to \(\mathcal{K}_{p\geq 1}\). We present a simple derivation of the RG flow of \(\mathcal{K}_{1}\) in the Supplemental Material. Next, consider the connected four-point correlator: \[\big{\langle}\phi_{i_{1}}^{(\ell)}(\vec{x}_{1})\,\phi_{i_{2}}^{(\ell)}(\vec{x}_{2})\,\phi_{i_{3}}^{(\ell)}(\vec{x}_{3})\,\phi_{i_{4}}^{(\ell)}(\vec{x}_{4})\big{\rangle}_{c}=\delta_{i_{1}i_{2}}\delta_{i_{3}i_{4}}\,\frac{1}{n_{\ell-1}}\,V_{4}^{(\ell)}(\vec{x}_{1},\vec{x}_{2};\vec{x}_{3},\vec{x}_{4})+\text{perms.}, \tag{12}\] where \[\frac{1}{n_{\ell-1}}\,V_{4}^{(\ell)}(\vec{x}_{1},\vec{x}_{2};\vec{x}_{3},\vec{x}_{4})=\sum_{j_{1},j_{2}}\frac{\big{(}C_{W}^{(\ell)}\big{)}^{2}}{n_{\ell-1}^{2}}\,\big{\langle}\Delta_{j_{1}}^{(\ell-1)}(\vec{x}_{1},\vec{x}_{2})\,\Delta_{j_{2}}^{(\ell-1)}(\vec{x}_{3},\vec{x}_{4})\big{\rangle}\,. \tag{13}\] [Feynman diagrams for Eq. (13) not reproduced.] There are two ways to evaluate this correlator of vertex operators at \(\mathcal{O}(\frac{1}{n})\). First, the two vertices can attach to the same \((\ell-1)\)th-layer neuron (\(j_{1}=j_{2}\)), with the expectation value evaluated using free-theory propagators: \[\sum_{j}\frac{\big{(}C_{W}^{(\ell)}\big{)}^{2}}{n_{\ell-1}^{2}}\,\big{\langle}\Delta_{j}(\vec{x}_{1},\vec{x}_{2})\,\Delta_{j}(\vec{x}_{3},\vec{x}_{4})\big{\rangle}_{\mathcal{K}_{0}^{(\ell-1)}}=\frac{\big{(}C_{W}^{(\ell)}\big{)}^{2}}{n_{\ell-1}}\Big{[}\big{\langle}\sigma_{\vec{x}_{1}}\sigma_{\vec{x}_{2}}\sigma_{\vec{x}_{3}}\sigma_{\vec{x}_{4}}\big{\rangle}_{\mathcal{K}_{0}^{(\ell-1)}}-\big{\langle}\sigma_{\vec{x}_{1}}\sigma_{\vec{x}_{2}}\big{\rangle}_{\mathcal{K}_{0}^{(\ell-1)}}\big{\langle}\sigma_{\vec{x}_{3}}\sigma_{\vec{x}_{4}}\big{\rangle}_{\mathcal{K}_{0}^{(\ell-1)}}\Big{]}\,. \tag{14}\] Second, the two vertex blobs can be connected through the connected four-point correlator of the \((\ell-1)\)th-layer neurons, \(\frac{1}{n_{\ell-2}}\,V_{4}^{(\ell-1)}\) [Eq. (15) not reproduced]. In such diagrams, each internal propagator connecting two correlators (blobs) is counted twice. To avoid double-counting we thus insert an _inverse_ propagator for each internal line in the diagram. This explains the factors of \(\big{(}\mathcal{K}_{0}^{(\ell-1)}\big{)}^{-1}\) in the first line of Eq. (15), which effectively amputate the connected four-point correlator (or equivalently the larger blobs in the diagram). The final expression in Eq. (15) is obtained by Wick contraction, which yields factors of \(\mathcal{K}_{0}^{(\ell-1)}\) that cancel \(\big{(}\mathcal{K}_{0}^{(\ell-1)}\big{)}^{-1}\). Adding up Eqs. (14) and (15) gives the final result for \(V_{4}^{(\ell)}\) in terms of \(V_{4}^{(\ell-1)}\) and \(\mathcal{K}_{0}^{(\ell-1)}\), _i.e._ the RG flow of \(V_{4}\), which agrees with Refs. [14; 21]. Both contributions are \(\mathcal{O}(\frac{1}{n})\), so \(V_{4}^{(\ell)}\) defined by Eq. (12) is \(\mathcal{O}(1)\). The diagrammatic calculation extends straightforwardly to higher-point connected correlators, and provides a concise framework to systematically analyze finite-width effects in deep neural networks. In the Supplemental Material we present new results for the connected six-point and eight-point correlators as further examples. The RG flow can also be formulated at the level of the EFT action.
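The \(\mathcal{O}(\frac{1}{n})\) size of the connected four-point correlator, Eq. (12), can also be seen empirically by sampling an ensemble of networks at a single input and measuring the connected fourth moment of one layer-2 preactivation. A rough sketch (our own illustration; widths, input dimension, and ensemble sizes are arbitrary, and the estimate is noisy because the connected piece is a small difference of large terms):

```python
# Empirical 1/n scaling suggested by Eq. (12): for a single input,
# <phi^4> - 3 <phi^2>^2 of a layer-2 preactivation should shrink like
# 1/n_1, i.e. n_1 * v4 should be roughly constant across widths.
import numpy as np

rng = np.random.default_rng(1)
C_W, sigma = 2.0, np.tanh        # C_b = 0 for simplicity
x = rng.normal(size=4)

def layer2_neuron(n1, n_nets=200_000):
    """One layer-2 preactivation per network in the ensemble."""
    W1 = rng.normal(0, np.sqrt(C_W / len(x)), size=(n_nets, n1, len(x)))
    h = sigma(W1 @ x)                                   # (n_nets, n1)
    W2 = rng.normal(0, np.sqrt(C_W / n1), size=(n_nets, n1))
    return (W2 * h).sum(axis=1)                         # phi_1^(2)

for n1 in (4, 8, 16):
    phi = layer2_neuron(n1)
    v4 = (phi ** 4).mean() - 3 * (phi ** 2).mean() ** 2  # connected part
    print(n1, n1 * v4)   # roughly constant, up to Monte Carlo noise
```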
The idea is to consider a tower of EFTs, \(\mathcal{S}_{\text{eff}}^{(\ell)}\) (\(\ell=1,\ldots,L\)), obtained by integrating out the neurons and ghosts in all but the \(\ell\)th layer. They take the form \[\mathcal{S}_{\text{eff}}^{(\ell)}=\int\!d\vec{x}_{1}d\vec{x}_{2}\,\big{(}\mathcal{K}_{0}^{(\ell)}\!+\!\mu^{(\ell)}\big{)}^{-1}(\vec{x}_{1},\vec{x}_{2})\,\frac{1}{2}\sum_{i=1}^{n_{\ell}}\phi_{i}^{(\ell)}(\vec{x}_{1})\,\phi_{i}^{(\ell)}(\vec{x}_{2})+\ldots\,, \tag{16}\] where \(\mu^{(\ell)}\) is a finite-width correction to the quadratic kernel and the ellipsis denotes quartic and higher-order self-interactions suppressed by powers of \(\frac{1}{n}\). [Eqs. (17)-(19), giving the explicit couplings of \(\mathcal{S}_{\text{eff}}^{(\ell)}\), are not recoverable here.] _Criticality --_ For deep networks, it is not sufficient to require that the networks are well-behaved on average; the scaling behavior of each network must also be close to the average. At first sight, criticality seems to impose more constraints than the number of tunable hyperparameters, if we require power-law scaling of all higher-point correlators at arbitrary input points. However, as we will show, the structures of RG flow, manifest in the diagrammatic formulation, are such that tuning \(\mathcal{K}_{0}\) to criticality actually ensures power-law scaling of all higher-point connected correlators near the fixed point. Let us start with the two-point correlator. The asymptotic scaling behavior (exponential vs. power-law) can be inferred from the following question: upon an infinitesimal variation at the \((\ell-1)\)th layer, \[\big{\langle}\mathcal{G}^{(\ell-1)}(\vec{x}_{1},\vec{x}_{2})\big{\rangle}\to\big{\langle}\mathcal{G}^{(\ell-1)}(\vec{x}_{1},\vec{x}_{2})\big{\rangle}+\delta\big{\langle}\mathcal{G}^{(\ell-1)}(\vec{x}_{1},\vec{x}_{2})\big{\rangle}\,, \tag{20}\] how does \(\big{\langle}\mathcal{G}^{(\ell)}(\vec{x}_{1},\vec{x}_{2})\big{\rangle}\) change? Diagrammatically, this can be calculated as follows [diagram, Eq. (21), not reproduced], where a blob labeled "\(\delta\)" denotes the variation of the (two-point) correlator. The result is \[\delta\big{\langle}\mathcal{G}^{(\ell)}(\vec{x}_{1},\vec{x}_{2})\big{\rangle}=\int\!d\vec{y}_{1}d\vec{y}_{2}\,\chi^{(\ell)}(\vec{x}_{1},\vec{x}_{2};\vec{y}_{1},\vec{y}_{2})\,\delta\big{\langle}\mathcal{G}^{(\ell-1)}(\vec{y}_{1},\vec{y}_{2})\big{\rangle}\,, \tag{22}\] or, equivalently: \[\frac{\delta\big{\langle}\mathcal{G}^{(\ell)}(\vec{x}_{1},\vec{x}_{2})\big{\rangle}}{\delta\big{\langle}\mathcal{G}^{(\ell-1)}(\vec{y}_{1},\vec{y}_{2})\big{\rangle}}\,=\,\chi^{(\ell)}(\vec{x}_{1},\vec{x}_{2};\vec{y}_{1},\vec{y}_{2})\,, \tag{23}\] where, at LO in \(\frac{1}{n}\) (with the intermediate diagrammatic step omitted), \[\chi^{(\ell)}(\vec{x}_{1},\vec{x}_{2};\vec{y}_{1},\vec{y}_{2})=\frac{C_{W}^{(\ell)}}{2}\,\bigg{\langle}\frac{\delta^{2}\Delta(\vec{x}_{1},\vec{x}_{2})}{\delta\phi(\vec{y}_{1})\,\delta\phi(\vec{y}_{2})}\bigg{\rangle}_{\mathcal{K}_{0}^{(\ell-1)}}\,. \tag{24}\] The analogous diagrammatic equations for the variations \(\delta V_{4}^{(\ell)}\) and \(\delta V_{6}^{(\ell)}\) of the higher-point connected correlators, Eqs. (26) and (29), share the structure of Eq. (21) and are governed by the same susceptibility \(\chi^{(\ell)}\) [Eqs. (25)-(30) not reproduced]. Evaluating \(\chi^{(\ell)}\) at degenerate inputs defines the parallel and perpendicular susceptibilities \(\chi_{\parallel}\) and \(\chi_{\perp}\), and criticality amounts to tuning the hyperparameters \((C_{b},C_{W})\) such that the kernel RG flow Eq. (11) admits a fixed point \(\mathcal{K}^{\star}\) at which \(\chi_{\parallel}=\chi_{\perp}=1\).
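To make the exponential-versus-power-law dichotomy concrete, the following sketch (our own illustration, not from the paper; all settings are arbitrary) iterates the single-input kernel recursion of Eq. (11) for \(\sigma=\tanh\), which belongs to the \(\mathcal{K}^{\star}=0\) universality class. At the critical tuning \((C_{b},C_{W})=(0,1)\) the kernel approaches the fixed point only as a power law, \(\mathcal{K}^{(\ell)}\approx\frac{1}{2\ell}\), whereas detuning \(C_{W}\) below 1 makes the decay exponential:

```python
# Critical vs. non-critical RG flow of the single-input tanh kernel,
# K^(l) = C_b + C_W <tanh(phi)^2>_{K^(l-1)}  (K* = 0 universality class).
import numpy as np

def kernel_flow(C_W, C_b=0.0, K0=1.0, depth=100, n_mc=100_000, seed=0):
    rng = np.random.default_rng(seed)
    K, history = K0, []
    for _ in range(depth):
        phi = rng.normal(0.0, np.sqrt(K), size=n_mc)
        K = C_b + C_W * np.mean(np.tanh(phi) ** 2)
        history.append(K)
    return np.array(history)

critical = kernel_flow(C_W=1.0)   # critical tuning for tanh
off      = kernel_flow(C_W=0.8)   # detuned
print(critical[[9, 99]])  # ~ 1/(2 l): power-law approach to K* = 0
print(off[[9, 99]])       # exponentially small already at moderate depth
```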
In the Supplemental Material, we show that, at least for the scale-invariant and \(\mathcal{K}^{\star}=0\) universality classes, this tuning actually implies a stronger condition is satisfied (at LO in \(\frac{1}{n}\)): \[\chi^{(\ell)}(\vec{x}_{1},\vec{x}_{2};\vec{y}_{1},\vec{y}_{2})\Big{|}_{\mathcal{K}_{0}^{(\ell-1)}=\mathcal{K}^{\star}}=\frac{1}{2}\Big{[}\delta(\vec{x}_{1}-\vec{y}_{1})\,\delta(\vec{x}_{2}-\vec{y}_{2})+\delta(\vec{x}_{1}-\vec{y}_{2})\,\delta(\vec{x}_{2}-\vec{y}_{1})\Big{]}\,. \tag{31}\] Eq. (31) ensures perturbations around the fixed point stay constant through the layers, not just for the two-point correlator, \(\delta\big{\langle}\mathcal{G}^{(\ell)}(\vec{x}_{1},\vec{x}_{2})\big{\rangle}=\delta\big{\langle}\mathcal{G}^{(\ell-1)}(\vec{x}_{1},\vec{x}_{2})\big{\rangle}\), but for the entire tower of higher-point connected correlators, \(\frac{1}{n_{\ell-1}^{k-1}}\,\delta V_{2k}^{(\ell)}(\vec{x}_{1},\ldots,\vec{x}_{2k})=\frac{1}{n_{\ell-2}^{k-1}}\,\delta V_{2k}^{(\ell-1)}(\vec{x}_{1},\ldots,\vec{x}_{2k})\). This in turn implies that for all of them, RG flow toward the fixed point is power-law instead of exponential, once the single condition Eq. (31) is satisfied. The discussion above makes it transparent that power-law scaling of higher-point connected correlators at criticality (previously observed in Refs. [21; 22] up to the eight-point level in the degenerate-input limit) has its roots in the structures of EFT interactions, as manifested by the common structure shared by the diagrams in Eqs. (21), (26) and (29). _Summary and outlook --_ In this letter, we introduced a diagrammatic formalism that significantly simplifies perturbative calculations of finite-width effects in EFTs corresponding to archetypical deep neural networks. The concise reproduction of known results and the derivation of new results highlight the efficiency of the diagrammatic approach, while the incorporation of ghosts vastly simplifies \(\frac{1}{n}\) counting in the EFT action. Our analysis also made transparent the structures of such EFTs which underlie the success of critical tuning in deep neural networks. In fact, a universal diagrammatic structure emerges in the RG analysis of all higher-point connected correlators of neuron preactivations, which means criticality (_i.e._ power-law as opposed to exponential scaling) of all the neuron statistics at initialization is governed by a single condition, Eq. (31). From the deep learning point of view, an obvious next step is to extend the diagrammatic formalism to incorporate gradient-based training and simplify perturbative calculations involving the neural tangent kernel [7; 8] and its differentials [11; 12; 13; 21].
From the fundamental physics point of view, we are hopeful that much more can be learned from the intimate connection between neural networks and field theories. Understanding the structures of EFTs corresponding to other neural network architectures (_e.g._ recurrent neural networks [32; 42] and transformers [43]) will allow us to gain further insights into this connection and potentially point to novel ML architecture designs for simulating field theories. Acknowledgments--We are particularly grateful to Sho Yaida for helpful conversations throughout the course of this work. We thank Hannah Day, Marat Freyttsis, Boris Hanin, Yonatan Kahn, and Anindita Maiti for useful discussions and comments on a preliminary draft, and Guy Gur-Ari for related discussions. Feynman diagrams in this work were drawn using tikz-feynman [44]. This work was supported in part by the U.S. Department of Energy under the grant DE-SC0011702. This work was performed in part at the Aspen Center for Physics, supported by the National Science Foundation under Grant No. NSF PHY-2210452, and the Kavli Institute for Theoretical Physics, supported by the National Science Foundation under Grant No. NSF PHY-1748958.